Reclaiming overdue and expired translations
Cleanup of 2019 articles
commit d45031f660 (parent 9dff5fc8ef)
@@ -1,91 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Rejoice KDE Lovers! MX Linux Joins the KDE Bandwagon and Now You Can Download MX Linux KDE Edition)
[#]: via: (https://itsfoss.com/mx-linux-kde-edition/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Rejoice KDE Lovers! MX Linux Joins the KDE Bandwagon and Now You Can Download MX Linux KDE Edition
======

Debian-based [MX Linux][1] is already an impressive Linux distribution with the [Xfce desktop environment][2] as the default. Even though it works well and runs on minimal hardware configurations, it still isn’t the best Linux distribution in terms of eye candy.

That’s where KDE comes to the rescue. Of late, KDE Plasma has shed a lot of weight, and it uses fewer system resources without compromising on modern looks. No wonder KDE Plasma is one of [the best desktop environments][3] out there.

![][4]

With [MX Linux 19.2][5], the team began testing a KDE edition and has finally released its first KDE version.

Also, the KDE edition comes with Advanced Hardware Support (AHS) enabled. Here’s what they mention in their release notes:

> MX-19.2 KDE is an **Advanced Hardware Support (AHS)** enabled **64-bit only** version of MX featuring the KDE/plasma desktop. Applications utilizing Qt library frameworks are given a preference for inclusion on the iso.

As I mentioned earlier, this is MX Linux’s first KDE edition ever, and they shed some light on that in the announcement as well:

> This will be first officially supported MX/antiX family iso utilizing the KDE/plasma desktop since the halting of the predecessor MEPIS project in 2013.

Personally, I enjoyed the experience of using MX Linux until I started using [Pop OS 20.04][6]. So, I’ll give you some key highlights of the MX Linux 19.2 KDE edition along with my impressions from testing it.

### MX Linux 19.2 KDE: Overview

![][7]

Out of the box, MX Linux looks cleaner and more attractive with the KDE desktop on board. Unlike KDE Neon, it doesn’t feature the latest and greatest KDE stuff, but it appears to do the job intended.

Of course, you will get the same options you expect from a KDE-powered distro to customize the look and feel of your desktop. In addition to the obvious KDE perks, you also get the usual MX tools, the antiX-live-usb-system, and the snapshot feature that come baked into the Xfce edition.

It’s a great thing to have the best of both worlds here, as stated in their announcement:

> MX-19.2 KDE includes the usual MX tools, antiX-live-usb-system, and snapshot technology that our users have come to expect from our standard flagship Xfce releases. Adding KDE/plasma to the existing Xfce/MX-fluxbox desktops will provide for a wider range user needs and wants.

I haven’t performed a great deal of testing, but I did have some issues with extracting archives (it didn’t work on the first try) and with copy-pasting a file to a new location. Not sure if those are known bugs — but I thought I should let you know here.

![][8]

Other than that, it features every useful tool you’d want to have, and it works great. With KDE on board, it actually feels more polished and smooth in my experience.

Along with KDE Plasma 5.14.5 on top of Debian 10 “buster”, it also comes with GIMP 2.10.12, Mesa, the Debian AHS 5.6 kernel, the Firefox browser, and a few other goodies like VLC, Thunderbird, LibreOffice, and the Clementine music player.

You can also look for more stuff in the MX repositories.

![][9]

There are some known issues with the release, like the system clock not being adjustable via KDE settings. You can check their [announcement post][10] for more information, or their [bug list][11] to make sure everything’s fine before trying it out on your production system.

### Wrapping Up

The MX Linux 19.2 KDE edition is definitely more impressive than its Xfce offering, in my opinion. It may take a while to iron out the bugs in this first KDE release — but it’s not a bad start.

Speaking of KDE, I recently tested out KDE Neon, the official KDE distribution, and shared my experience in a video. I’ll try to do a video on the MX Linux KDE flavor as well.

[Subscribe to our YouTube channel for more Linux videos][12]

Have you tried it yet? Let me know your thoughts in the comments below!

--------------------------------------------------------------------------------

via: https://itsfoss.com/mx-linux-kde-edition/

Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://mxlinux.org/
[2]: https://www.xfce.org/
[3]: https://itsfoss.com/best-linux-desktop-environments/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-kde-edition.jpg?resize=800%2C450&ssl=1
[5]: https://mxlinux.org/blog/mx-19-2-now-available/
[6]: https://itsfoss.com/pop-os-20-04-review/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde.jpg?resize=800%2C452&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde-filemanager.jpg?resize=800%2C452&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde-info.jpg?resize=800%2C452&ssl=1
[10]: https://mxlinux.org/blog/mx-19-2-kde-now-available/
[11]: https://bugs.mxlinux.org/
[12]: https://www.youtube.com/c/itsfoss?sub_confirmation=1

@@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco open-source code boosts performance of Kubernetes apps over SD-WAN)
[#]: via: (https://www.networkworld.com/article/3572310/cisco-open-source-code-boosts-performance-of-kubernetes-apps-over-sd-wan.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco open-source code boosts performance of Kubernetes apps over SD-WAN
======
Cisco's Cloud-Native SD-WAN project marries SD-WANs to Kubernetes applications to cut down on the manual work needed to optimize latency and packet loss.
Thinkstock

Cisco has introduced an open-source project that it says could go a long way toward reducing the manual work involved in optimizing the performance of Kubernetes applications across [SD-WANs][1].

Cisco said it launched the Cloud-Native SD-WAN (CN-WAN) project to show how Kubernetes applications can be automatically mapped to an SD-WAN, with the result that the applications perform better over the WAN.

**More about SD-WAN**: [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][2] • [How to pick an off-site data-backup method][3] • [SD-Branch: What it is and why you’ll need it][4] • [What are the options for securing SD-WAN?][5]

“In many cases, enterprises deploy an SD-WAN to connect a Kubernetes cluster with users or workloads that consume cloud-native applications. In a typical enterprise, NetOps teams leverage their network expertise to program SD-WAN policies to optimize general connectivity to the Kubernetes hosted applications, with the goal to reduce latency, reduce packet loss, etc.,” wrote John Apostolopoulos, vice president and CTO of Cisco’s intent-based networking group, in a [blog post][6].

“The enterprise usually also has DevOps teams that maintain and optimize the Kubernetes infrastructure. However, despite the efforts of NetOps and DevOps teams, today Kubernetes and SD-WAN operate mostly like ships in the night, often unaware of each other. Integration between SD-WAN and Kubernetes typically involves time-consuming manual coordination between the two teams.”

Current SD-WAN offerings often have APIs that let customers programmatically influence how their traffic is handled over the WAN. This enables interesting and valuable opportunities for automation and application optimization, Apostolopoulos stated. “We believe there is an opportunity to pair the declarative nature of Kubernetes with the programmable nature of modern SD-WAN solutions,” he stated.

Enter CN-WAN, which defines a set of components that can be used to integrate an SD-WAN package, such as Cisco Viptela SD-WAN, with Kubernetes. The goal is to let DevOps teams express the WAN needs of the microservices they deploy in a Kubernetes cluster, while simultaneously letting NetOps automatically render those needs into optimized application performance over the WAN, Apostolopoulos stated.

Apostolopoulos wrote that CN-WAN is composed of a Kubernetes Operator, a Reader, and an Adaptor. It works like this: the CN-WAN Operator runs in the Kubernetes cluster, actively monitoring the deployed services. DevOps teams can use standard Kubernetes annotations on the services to define WAN-specific metadata, such as the traffic profile of the application. The CN-WAN Operator then automatically registers the service along with the metadata in a service registry. In a demo at KubeCon EU this week, Cisco used Google Service Directory as the service registry.

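As a rough illustration of that annotation step, here is a minimal sketch using the official Kubernetes Python client. The annotation key and value are invented placeholders (CN-WAN’s real, documented keys may differ), and `video-service` is a hypothetical Service name; the point is only that the WAN metadata rides on a standard Kubernetes annotation that an operator can watch.

```
# Sketch: attach hypothetical WAN metadata to a Service as a standard
# Kubernetes annotation -- the mechanism the CN-WAN Operator watches.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

patch = {
    "metadata": {
        "annotations": {
            # Invented key/value -- consult the CN-WAN docs for real ones.
            "example.cnwan.io/traffic-profile": "video",
        }
    }
}

# An operator monitoring deployed Services could pick this annotation up
# and register the service plus its metadata in a service registry
# (Google Service Directory, in Cisco's KubeCon demo).
v1.patch_namespaced_service("video-service", "default", patch)
```
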
Earlier this year, [Cisco and Google][7] deepened their relationship with a turnkey package that lets customers mesh SD-WAN connectivity with applications running in a private [data center][8], Google Cloud, or another cloud or SaaS application. That jointly developed platform, called Cisco SD-WAN Cloud Hub with Google Cloud, combines Cisco’s SD-WAN policy-, telemetry- and security-setting capabilities with Google’s software-defined backbone to ensure that application service-level-agreement, security, and compliance policies are extended across the network.

Meanwhile, on the SD-WAN side, the CN-WAN Reader connects to the service registry to learn how Kubernetes is exposing the services and the associated WAN metadata extracted by the CN-WAN Operator, Cisco stated. When new or updated services or metadata are detected, the CN-WAN Reader sends a message to the CN-WAN Adaptor so SD-WAN policies can be updated.

Finally, the CN-WAN Adaptor maps the service-associated metadata onto the detailed SD-WAN policies programmed by NetOps in the SD-WAN controller. The SD-WAN controller automatically renders the SD-WAN policies, specified by NetOps for each metadata type, into specific SD-WAN data-plane optimizations for the service, Cisco stated.

“The SD-WAN may support multiple types of access at both sender and receiver (e.g., wired Internet, MPLS, wireless 4G or 5G), as well as multiple service options and prioritizations per access network, and of course multiple paths between source and destination,” Apostolopoulos stated.

The code for the CN-WAN project is available as open source on [GitHub][9].

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3572310/cisco-open-source-code-boosts-performance-of-kubernetes-apps-over-sd-wan.html

Author: [Michael Cooney][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[2]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[3]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[4]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[5]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html
[6]: https://blogs.cisco.com/networking/introducing-the-cloud-native-sd-wan-project
[7]: https://www.networkworld.com/article/3539252/cisco-integrates-sd-wan-connectivity-with-google-cloud.html
[8]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[9]: https://github.com/CloudNativeSDWAN/cnwan-docs
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

@@ -1,122 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (KDE Plasma 5.20 is Here With Exciting Improvements)
[#]: via: (https://itsfoss.com/kde-plasma-5-20/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

KDE Plasma 5.20 is Here With Exciting Improvements
======

KDE Plasma 5.20 is finally here, and there’s a lot to be excited about, including the new wallpaper ‘**Shell**’ by Lucas Andrade.

It is worth noting that this is not an LTS release, unlike [KDE Plasma 5.18][1], and will be maintained for the next 4 months or so. So, if you want the latest and greatest, you can surely go ahead and give it a try.

In this article, I shall mention the key highlights of KDE Plasma 5.20 from [my experience with it on KDE Neon][2] (Testing Edition).

![][3]

### Plasma 5.20 Features

If you like to see things in action, we made a feature overview video for you.

[Subscribe to our YouTube channel for more Linux videos][4]

#### Icon-Only Taskbar

![][5]

You are probably used to a taskbar that shows the title of each window alongside its icon. However, that takes up a lot of space in the taskbar, which looks bad when you want a clean look with multiple applications/windows open.

Not just that: if you launch several windows of the same application, the taskbar will group them together and let you cycle through them from a single icon.

So, with this update, you get an icon-only taskbar by default, which makes it look a lot cleaner and lets you see more in the taskbar at a glance.

#### Digital Clock Applet with Date

![][6]

If you’ve used any KDE-powered distro, you must have noticed that the digital clock applet (in the bottom-right corner) displays the time but not the date by default.

It’s always a good choice to have both the date and the time (at least I prefer that). So, with KDE Plasma 5.20, the applet shows both.

#### Get Notified When Your System Almost Runs Out of Space

I know this is not a big addition, but a necessary one. Even if your home directory is on a different partition, you will now be notified when you’re about to run out of space.

#### Set the Charge Limit Below 100%

You are in for a treat if you are a laptop user. To help you preserve battery health, you can now set a charge limit below 100%. I couldn’t show it to you because I use a desktop.

#### Workspace Improvements

Working with workspaces on the KDE desktop was already an impressive experience; with the latest update, several tweaks have been made to take the user experience up a notch.

To start with, the system tray has been overhauled, with a grid-like layout replacing the list view.

The default shortcut for moving/resizing windows has been reassigned from Alt+drag to Meta+drag, to avoid conflicts with productivity apps that use Alt+drag themselves. You can also use key bindings like Meta + up/left/down arrow to tile windows to the corners of the screen.

![][7]

It is also easier to list all the disks using the old “**Device Notifier**” applet, which has been renamed to “**Disks & Devices**”.

If that wasn’t enough, you will also find improvements to [KRunner][8], the essential application launcher and search utility. It now remembers your search history, and you can also have it centered on the screen instead of at the top.

#### System Settings Improvements

The look and feel of System Settings is the same, but it is more useful now. You will notice a new “**Highlight changed settings**” option, which shows you the settings that have been modified from their default values.

That way, you can spot changes that you made accidentally, or that someone else made.

![][9]

In addition to that, you also get S.M.A.R.T. monitoring and disk failure notifications.

#### Wayland Support Improvements

If you prefer to use a Wayland session, you will be happy to know that it now supports [Klipper][10] and that you can middle-click to paste (on KDE apps only for the time being).

The much-needed screencasting support has also been added.

#### Other Improvements

Of course, you will notice some subtle visual improvements and adjustments to the look and feel. For instance, there is a smooth transition effect when changing the brightness. Similarly, when changing the brightness or volume, the on-screen display that pops up is now less obtrusive.

Options like controlling the scroll speed of the mouse/touchpad have been added to give you finer control.

You can find the detailed list of changes in the [official changelog][11], if you’re curious.

### Wrapping Up

The changes are definitely impressive and should make the KDE experience better than ever before.

If you’re running KDE Neon, you should get the update soon. But if you are on Kubuntu, you will have to try the 20.10 ISO to get your hands on Plasma 5.20.

What do you like the most among the list of changes? Have you tried it yet? Let me know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/kde-plasma-5-20/

Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/kde-plasma-5-18-release/
[2]: https://itsfoss.com/kde-neon-review/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/kde-plasma-5-20-feat.png?resize=800%2C394&ssl=1
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/kde-plasma-5-20-taskbar.jpg?resize=472%2C290&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/kde-plasma-5-20-clock.jpg?resize=372%2C224&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/kde-plasma-5-20-notify.jpg?resize=800%2C692&ssl=1
[8]: https://docs.kde.org/trunk5/en/kde-workspace/plasma-desktop/krunner.html
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/plasma-disks-smart.png?resize=800%2C539&ssl=1
[10]: https://userbase.kde.org/Klipper
[11]: https://kde.org/announcements/plasma-5.20.0

@@ -1,95 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What metrics matter: A guide for open source projects)
[#]: via: (https://opensource.com/article/19/1/metrics-guide-open-source-projects)
[#]: author: (Gordon Haff https://opensource.com/users/ghaff)

What metrics matter: A guide for open source projects
======
5 principles for deciding what to measure.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-)

"Without data, you're just a person with an opinion."

Those are the words of W. Edwards Deming, the champion of statistical process control, who was credited as one of the inspirations for what became known as the Japanese post-war economic miracle of 1950 to 1960. Ironically, Japanese manufacturers like Toyota were far more receptive to Deming’s ideas than General Motors and Ford were.

Community management is certainly an art. It’s about mentoring. It’s about having difficult conversations with people who are hurting the community. It’s about negotiation and compromise. It’s about interacting with other communities. It’s about making connections. In the words of Red Hat’s Diane Mueller, it’s about "nurturing conversations."

However, it’s also about metrics and data.

Some community metrics have much in common with those of software development projects more broadly. Others are more specific to the management of the community itself. I think of deciding what to measure, and how, as adhering to five principles.

### 1. Recognize that behaviors aren't independent of the measurements you choose to highlight.

In 2008, Dan Ariely published Predictably Irrational, one of a number of books written around that time that introduced behavioral psychology and behavioral economics to the general public. One memorable quote from that book is the following: “Human beings adjust behavior based on the metrics they’re held against. Anything you measure will impel a person to optimize his score on that metric. What you measure is what you’ll get. Period.”

This shouldn’t be surprising. It’s a finding that’s been repeatedly confirmed by research. It should also be familiar to just about anyone with business experience. It’s certainly not news to anyone in sales management, for example. Base sales reps’ (or their managers’) bonuses solely on revenue, and they’ll discount whatever it takes to maximize revenue, even if it puts margin in the toilet. Conversely, want the sales force to push a new product line—which will probably take extra effort—but skip the spiffs? Probably not happening.

And lest you think I’m unfairly picking on sales, this behavior is pervasive, all the way up to the CEO, as Ariely describes in a 2010 Harvard Business Review article: “CEOs care about stock value because that’s how we measure them. If we want to change what they care about, we should change what we measure.”

Developers and other community members are not immune.

### 2. You need to choose relevant metrics.

There’s a lot of folk wisdom floating around about what’s relevant and important that’s not necessarily true. My colleague [Dave Neary offers an example from baseball][1]: “In the late '90s, the key measurements that were used to measure batter skill were RBI (runs batted in) and batting average (how often a player got on base with a hit, divided by the number of at-bats). The Oakland A’s were the first major league team to recruit based on a different measurement of player performance: on-base percentage. This measures how often they get to first base, regardless of how it happens.”

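To see why the two numbers diverge, here is the arithmetic with an invented stat line; the formulas are the standard baseball definitions, but every number below is made up for illustration.

```
# Batting average counts only hits per at-bat; on-base percentage also
# credits walks and hit-by-pitches. Invented stat line for illustration.
hits, walks, hit_by_pitch = 150, 80, 5
at_bats, sacrifice_flies = 550, 5

batting_average = hits / at_bats
on_base_percentage = (hits + walks + hit_by_pitch) / (
    at_bats + walks + hit_by_pitch + sacrifice_flies
)

print(f"AVG: {batting_average:.3f}")     # 0.273 -- looks ordinary
print(f"OBP: {on_base_percentage:.3f}")  # 0.367 -- reveals the hidden value
```
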
Indeed, the whole revolution of sabermetrics in baseball and elsewhere, which was popularized in Michael Lewis’ Moneyball, often gets talked about in terms of introducing data into a field that historically was more about gut feel and personal experience. But it was also about taking a game that had actually always been fairly numbers-obsessed and coming up with new metrics based on mostly existing data to better measure player value. (The data revolution going on in sports today is more about collecting much more data through video and other means than was previously available.)

### 3. Quantity may not lead to quality.

As a corollary, collecting lots of tangential but easy-to-capture data isn’t better than just selecting a few measurements you’ve determined are genuinely useful. In a world where online behavior can be tracked with great granularity and displayed in colorful dashboards, it’s tempting to be distracted by sheer data volume, even when it doesn’t deliver any great insight into community health and trajectory.

This may seem like an obvious point: Why measure something that isn’t relevant? In practice, metrics often get chosen because they’re easy to measure, not because they’re particularly useful. They tend to be more about inputs than outputs: the number of developers. The number of forum posts. The number of commits. Collectively, measures like this often get called vanity metrics. They’re ubiquitous, but most people involved with community management don’t think much of them.

The number of downloads may be the worst of the bunch. It’s true that, at some level, downloads are an indication of interest in a project. That’s something. But it’s sufficiently distant from actively using the project, much less engaging with the project deeply, that it’s hard to view downloads as a very useful number.

Is there any harm in these vanity metrics? Yes, to the degree that you start thinking they’re something to base action on. Probably more seriously, stakeholders like company management or industry observers can come to see them as meaningful indicators of project health.

### 4. Understand what measurements really mean and how they relate to each other.

Neary makes this point to caution against myopia. “In one project I worked on,” he says, ”some people were concerned about a recent spike in the number of bug reports coming in because it seemed like the project must have serious quality issues to resolve. However, when we looked at the numbers, it turned out that many of the bugs were coming in because a large company had recently started using the project. The increase in bug reports was actually a proxy for a big influx of new users, which was a good thing.”

In practice, you often have to measure through proxies. This isn’t an inherent problem, but the further you get between what you want to measure and what you’re actually measuring, the harder it is to connect the dots. It’s fine to track progress in closing bugs, writing code, and adding new features. However, those don’t necessarily correlate with how happy users are or whether the project is doing a good job of working towards its long-term objectives, whatever those may be.

### 5. Different measurements serve different purposes.

Some measurements may be non-obvious but useful for tracking the success of a project and community relative to internal goals. Others may be better suited for a press release or other external consumption. For example, as a community manager, you may really care about the number of meetups, mentoring sessions, and virtual briefings your community has held over the past three months. But it’s the number of contributions and contributors that is more likely to grab the headlines. You probably care about those too. But maybe not as much, depending upon your current priorities.

Still other measurements may relate to the goals of any sponsoring organizations. The measurements most relevant for projects tied to commercial products are likely to be different from those for pure community efforts.

Because communities differ and goals differ, it’s not possible to simply compile a metrics checklist, but here are some ideas to think about:

Consider qualitative metrics in addition to quantitative ones. Conducting surveys and other studies can be time-consuming, especially if they’re rigorous enough to yield better-than-anecdotal data. It also requires rigor to construct studies so that they can be used to track changes over time. In other words, it’s a lot easier to measure quantitative contributor activity than it is to suss out whether community members are happier about their participation today than they were a year ago. However, given the importance of culture to the health of a community, measuring it in a systematic way can be a worthwhile exercise.

Breadth of community, including how many members are unaffiliated with commercial entities, is important for many projects. The greater the breadth, the greater the potential leverage of the open source development process. It can also be instructive to see how companies and individuals are contributing. Projects can be explicitly designed to better accommodate casual contributors.

Are new contributors able to have an impact, or are they ignored? How long does it take for code contributions to get committed? How long does it take for a reported bug to be fixed or otherwise responded to? If someone asked a question in a forum, did anyone answer it? In other words, are you letting contributors contribute?

Advancement within the project is also an important metric. [Mikeal Rogers of the Node.js community][2] explains: “The shift that we made was to create a support system and an education system to take a user and turn them into a contributor, first at a very low level, and educate them to bring them into the committer pool and eventually into the maintainer pool. The end result of this is that we have a wide range of skill sets. Rather than trying to attract phenomenal developers, we’re creating new phenomenal developers.”

Whatever metrics you choose, don’t forget why you made them metrics in the first place. I find a helpful question to ask is: “What am I going to do with this number?” If the answer is just to put it in a report or a press release, that’s not a great answer. Metrics should be measurements that tell you either that you’re on the right path or that you need to take specific actions to course-correct.

For this reason, Stormy Peters, who handles community leads at Red Hat, [argues for keeping it simple][3]. She writes, “It’s much better to have one or two key metrics than to worry about all the possible metrics. You can capture all the possible metrics, but as a project, you should focus on moving one. It’s also better to have a simple metric that correlates directly to something in the real world than a metric that is a complicated formula or ration between multiple things. As project members make decisions, you want them to be able to intuitively feel whether or not it will affect the project’s key metric in the right direction.”

This article is adapted from [How Open Source Ate Software][4] by Gordon Haff (Apress 2018).

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/metrics-guide-open-source-projects

Author: [Gordon Haff][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/ghaff
[b]: https://github.com/lujun9972
[1]: https://community.redhat.com/blog/2014/07/when-metrics-go-wrong/
[2]: https://opensource.com/article/17/3/nodejs-community-casual-contributors
[3]: https://medium.com/open-source-communities/3-important-things-to-consider-when-measuring-your-success-50e21ad82858
[4]: https://www.apress.com/us/book/9781484238936

@@ -1,68 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What happens when a veteran teacher goes to an open source conference)
[#]: via: (https://opensource.com/open-organization/19/1/educator-at-open-source-conference)
[#]: author: (Ben Owens https://opensource.com/users/engineerteacher)

What happens when a veteran teacher goes to an open source conference
======
Sometimes feeling like a fish out of water is precisely what educators need.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesgen_rh_032x_0.png?itok=cApG9aB4)

"Change is going to be continual, and today is the slowest day society will ever move."—[Tony Fadell][1]

If ever there was an experience that brought the above quotation home for me, it was my experience at the [All Things Open conference][2] in Raleigh, NC last October. Thousands of people from all over the world attended the conference, and many (if not most) worked as open source coders and developers. As one of the relatively few educators in attendance, I saw and heard things that were completely foreign to me—terms like Istio, Stack Overflow, Ubuntu, Sidecar, HyperLedger, and Kubernetes tossed around for days.

I felt like a fish out of water. But in the end, that was the perfect dose of reality I needed to truly understand how open principles can reshape our approach to education.

### Not-so-strange attractors

All Things Open attracted me to Raleigh for two reasons, both of which have to do with how our schools must do a better job of creating environments that truly prepare students for a rapidly changing world.

The first is my belief that schools should embrace the ideals of the [open source way][3]. The second is that educators have to periodically force themselves out of their relatively isolated worlds of "doing school" in order to get a glimpse of what the world is actually doing.

During my 20 years as an engineer, I developed a deep sense of the power of an open exchange of ideas, of collaboration, and of the need for rapid prototyping of innovations. Although we didn't call these ideas "open source" at the time, my colleagues and I constantly worked together to identify and solve problems using tools such as [Design Thinking][4] so that our businesses remained competitive and met market demands. When I became a science and math teacher at a small [public school][5] in rural Appalachia, my goal was to adapt these ideas to my classrooms and to the school at large as a way to blur the lines between a traditional school environment and what routinely happens in the "real world."

Through several years of hard work and many iterations, my fellow teachers and I were eventually able to develop a comprehensive, school-wide, project-based learning model, where students worked in collaborative teams on projects that [made real connections][6] between the required curriculum and community-based applications. Doing so gave these students the ability to develop skills they can use for a lifetime, rather than just on the next test—skills such as problem solving, critical thinking, oral and written communication, perseverance through setbacks, and adapting to changing conditions, as well as how to have routine conversations with adult mentors from the community. Only after reading [The Open Organization][7] did I realize that what we had been doing essentially embodied what Jim Whitehurst had described. In our case, of course, we applied open principles to an educational context (that model, called Open Way Learning, is the subject of a [book][8] published in December).

As good as this model is at pushing students into relevant, engaging, and often unpredictable learning environments, it can only go so far if we, as the educators who facilitate this type of project-based learning, do not constantly stay abreast of changing technologies and their respective lexicon. Even this unconventional but proven approach will still leave students ill-prepared for a global, innovation economy if we aren't constantly pushing ourselves into areas outside our own comfort zones. My experience at the All Things Open conference was a perfect example. While humbling, it also forced me to confront what I didn't know so that I can learn from it to help the work I do with other teachers and schools.

### A critical decision

I made this point to others when I shared a picture of the All Things Open job board with dozens of colleagues all over the country. I shared it with the caption: "What did you do in your school today to prepare your students for this reality tomorrow?" The honest answer from many was, unfortunately, "not much." That has to change.

![](https://opensource.com/sites/default/files/images/open-org/owens_1.jpg)

![](https://opensource.com/sites/default/files/images/open-org/owens_2.jpg)

(Images courtesy of Ben Owens, CC BY-SA)

People in organizations everywhere have to make a critical decision: either embrace the rapid pace of change that is a fact of life in our world or face the hard reality of irrelevance. Our systems in education are at this same crossroads—even the ones that think of themselves as innovative. Embracing change means admitting to students, "I don't know, but I'm willing to learn." That's the kind of teaching and learning experience our students deserve.

It can happen, but it will take pioneering educators who are willing to move away from comfortable, back-of-the-book answers to help students as they work on difficult and messy challenges. You may very well feel like a fish out of water.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/1/educator-at-open-source-conference

Author: [Ben Owens][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/engineerteacher
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Tony_Fadell
[2]: https://allthingsopen.org/
[3]: https://opensource.com/open-source-way
[4]: https://dschool.stanford.edu/resources-collections/a-virtual-crash-course-in-design-thinking
[5]: https://www.tricountyearlycollege.org/
[6]: https://www.bie.org/about/what_pbl
[7]: https://www.redhat.com/en/explore/the-open-organization-book
[8]: https://www.amazon.com/Open-Up-Education-Learning-Transform/dp/1475842007/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr=

@@ -1,59 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 confusing open source license scenarios and how to navigate them)
[#]: via: (https://opensource.com/article/19/1/open-source-license-scenarios)
[#]: author: (P.Kevin Nelson https://opensource.com/users/pkn4645)

4 confusing open source license scenarios and how to navigate them
======

Before you begin using a piece of software, make sure you fully understand the terms of its license.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_openisopen.png?itok=FjmDxIaL)

As an attorney running an open source program office for a Fortune 500 corporation, I am often asked to look into a product or component where there seems to be confusion as to the licensing model. Under what terms can the code be used, and what obligations run with such use? This often happens when the code or the associated project community does not clearly indicate availability under a [commonly accepted open source license][1]. The confusion is understandable, as copyright owners often evolve their products and services in different directions in response to market demands. Here are some of the scenarios I commonly discover and how you can approach each situation.

### Multiple licenses

The product is truly open source with an [Open Source Initiative][2] (OSI)-approved open source license, but has changed licensing models at least once, if not multiple times, throughout its lifespan. This scenario is fairly easy to address: the user simply has to decide whether the latest version, with its attendant features and bug fixes, is worth the conditions of complying with the current license. If so, great. If not, then the user can move back in time to a version released under a more palatable license and start from that fork, understanding that there may not be an active community for support and continued development.

### Old open source

This is a variation on the multiple licenses model, with the twist that current licensing is proprietary only. You have to use an older version to take advantage of open source terms and conditions. Most often, the product was released under a valid open source license up to a certain point in its development, but then the copyright holder chose to evolve the code in a proprietary fashion and offer new releases only under proprietary commercial licensing terms. So, if you want the newest capabilities, you have to purchase a proprietary license, and you most likely will not get a copy of the underlying source code. Most often, the open source community that grew up around the original code line falls away once its members understand there will be no further commitment from the copyright holder to the open source branch. While this scenario is understandable from the copyright holder's perspective, it can be seen as "burning a bridge" to the open source community. It would be very difficult to again leverage the benefits of open source contribution models once a project owner follows this path.

### Open core

By far the most common discovery is that a product has both an open source-licensed "community edition" and a proprietary-licensed commercial offering, commonly referred to as open core. This is often encouraging to potential consumers, as it gives them a "try before you buy" option or even a chance to influence both versions of the product by becoming an active member of the community. I usually encourage clients to begin with the community version, get involved, and see what they can achieve. Then, if the product becomes a crucial part of their business plan, they have the option to upgrade to the proprietary level at any time.

### Freemium

The component is not open source at all, but instead is released under some version of the "freemium" model. A version with restricted or time-limited functionality can be downloaded with no immediate purchase required. However, since the source code is usually not provided and its accompanying license does not allow perpetual use, the creation of derivative works, or further distribution, it is definitely not open source. In this scenario, it is usually best to pass unless you are prepared to purchase a proprietary license and accept all attendant terms and conditions of use. Users are often the most disappointed in this outcome, as it has somewhat of a deceptive feel.

### OSI compliant

Of course, the happy path I haven't mentioned is to discover that the project has a single, clear, OSI-compliant license. In those situations, using open source software is as easy as downloading it and going forward within the terms of appropriate use.

Each of the more complex scenarios described above can present problems for potential development projects, but consultation with skilled procurement or intellectual property professionals regarding licensing lineage can reveal excellent opportunities.

An earlier version of this article was published on [OSS Law][3] and is republished with the author's permission.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/open-source-license-scenarios

Author: [P.Kevin Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/pkn4645
[b]: https://github.com/lujun9972
[1]: https://opensource.org/licenses
[2]: https://opensource.org/licenses/category
[3]: http://www.pknlaw.com/2017/06/i-thought-that-was-open-source.html

@@ -1,203 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (OOP Before OOP with Simula)
|
||||
[#]: via: (https://twobithistory.org/2019/01/31/simula.html)
|
||||
[#]: author: (Sinclair Target https://twobithistory.org)
|
||||
|
||||
OOP Before OOP with Simula
|
||||
======
|
||||
|
||||
Imagine that you are sitting on the grassy bank of a river. Ahead of you, the water flows past swiftly. The afternoon sun has put you in an idle, philosophical mood, and you begin to wonder whether the river in front of you really exists at all. Sure, large volumes of water are going by only a few feet away. But what is this thing that you are calling a “river”? After all, the water you see is here and then gone, to be replaced only by more and different water. It doesn’t seem like the word “river” refers to any fixed thing in front of you at all.
|
||||
|
||||
In 2009, Rich Hickey, the creator of Clojure, gave [an excellent talk][1] about why this philosophical quandary poses a problem for the object-oriented programming paradigm. He argues that we think of an object in a computer program the same way we think of a river—we imagine that the object has a fixed identity, even though many or all of the object’s properties will change over time. Doing this is a mistake, because we have no way of distinguishing between an object instance in one state and the same object instance in another state. We have no explicit notion of time in our programs. We just breezily use the same name everywhere and hope that the object is in the state we expect it to be in when we reference it. Inevitably, we write bugs.
|
||||
|
||||
The solution, Hickey concludes, is that we ought to model the world not as a collection of mutable objects but a collection of processes acting on immutable data. We should think of each object as a “river” of causally related states. In sum, you should use a functional language like Clojure.
|
||||
|
||||
![][2]
|
||||
The author, on a hike, pondering the ontological commitments
|
||||
of object-oriented programming.
|
||||
|
||||
Since Hickey gave his talk in 2009, interest in functional programming languages has grown, and functional programming idioms have found their way into the most popular object-oriented languages. Even so, most programmers continue to instantiate objects and mutate them in place every day. And they have been doing it for so long that it is hard to imagine that programming could ever look different.
|
||||
|
||||
I wanted to write an article about Simula and imagined that it would mostly be about when and how object-oriented constructs we are familiar with today were added to the language. But I think the more interesting story is about how Simula was originally so unlike modern object-oriented programming languages. This shouldn’t be a surprise, because the object-oriented paradigm we know now did not spring into existence fully formed. There were two major versions of Simula: Simula I and Simula 67. Simula 67 brought the world classes, class hierarchies, and virtual methods. But Simula I was a first draft that experimented with other ideas about how data and procedures could be bundled together. The Simula I model is not a functional model like the one Hickey proposes, but it does focus on processes that unfold over time rather than objects with hidden state that interact with each other. Had Simula 67 stuck with more of Simula I’s ideas, the object-oriented paradigm we know today might have looked very different indeed—and that contingency should teach us to be wary of assuming that the current paradigm will dominate forever.
|
||||
|
||||
### Simula 0 Through 67
|
||||
|
||||
Simula was created by two Norwegians, Kristen Nygaard and Ole-Johan Dahl.
|
||||
|
||||
In the late 1950s, Nygaard was employed by the Norwegian Defense Research Establishment (NDRE), a research institute affiliated with the Norwegian military. While there, he developed Monte Carlo simulations used for nuclear reactor design and operations research. These simulations were at first done by hand and then eventually programmed and run on a Ferranti Mercury. Nygaard soon found that he wanted a higher-level way to describe these simulations to a computer.
|
||||
|
||||
The kind of simulation that Nygaard commonly developed is known as a “discrete event model.” The simulation captures how a sequence of events change the state of a system over time—but the important property here is that the simulation can jump from one event to the next, since the events are discrete and nothing changes in the system between events. This kind of modeling, according to a paper that Nygaard and Dahl presented about Simula in 1966, was increasingly being used to analyze “nerve networks, communication systems, traffic flow, production systems, administrative systems, social systems, etc.” So Nygaard thought that other people might want a higher-level way to describe these simulations too. He began looking for someone that could help him implement what he called his “Simulation Language” or “Monte Carlo Compiler.”
|
||||
|
||||
Dahl, who had also been employed by NDRE, where he had worked on language design, came aboard at this point to play Wozniak to Nygaard’s Jobs. Over the next year or so, Nygaard and Dahl worked to develop what has been called “Simula 0.” This early version of the language was going to be merely a modest extension to ALGOL 60, and the plan was to implement it as a preprocessor. The language was then much less abstract than what came later. The primary language constructs were “stations” and “customers.” These could be used to model certain discrete event networks; Nygaard and Dahl give an example simulating airport departures. But Nygaard and Dahl eventually came up with a more general language construct that could represent both “stations” and “customers” and also model a wider range of simulations. This was the first of two major generalizations that took Simula from being an application-specific ALGOL package to a general-purpose programming language.
|
||||
|
||||
In Simula I, there were no “stations” or “customers,” but these could be recreated using “processes.” A process was a bundle of data attributes associated with a single action known as the process’ operating rule. You might think of a process as an object with only a single method, called something like `run()`. This analogy is imperfect though, because each process’ operating rule could be suspended or resumed at any time—the operating rules were a kind of coroutine. A Simula I program would model a system as a set of processes that conceptually all ran in parallel. Only one process could actually be “current” at any time, but once a process suspended itself the next queued process would automatically take over. As the simulation ran, behind the scenes, Simula would keep a timeline of “event notices” that tracked when each process should be resumed. In order to resume a suspended process, Simula needed to keep track of multiple call stacks. This meant that Simula could no longer be an ALGOL preprocessor, because ALGOL had only once call stack. Nygaard and Dahl were committed to writing their own compiler.
|
||||
|
||||
In their paper introducing this system, Nygaard and Dahl illustrate its use by implementing a simulation of a factory with a limited number of machines that can serve orders. The process here is the order, which starts by looking for an available machine, suspends itself to wait for one if none are available, and then runs to completion once a free machine is found. There is a definition of the order process that is then used to instantiate several different order instances, but no methods are ever called on these instances. The main part of the program just creates the processes and sets them running.

The first Simula I compiler was finished in 1965. The language grew popular at the Norwegian Computer Center, where Nygaard and Dahl had gone to work after leaving NDRE. Implementations of Simula I were made available to UNIVAC users and to Burroughs B5500 users. Nygaard and Dahl did a consulting deal with a Swedish company called ASEA that involved using Simula to run job shop simulations. But Nygaard and Dahl soon realized that Simula could be used to write programs that had nothing to do with simulation at all.

Stein Krogdahl, a professor at the University of Oslo who has written about the history of Simula, claims that “the spark that really made the development of a new general-purpose language take off” was [a paper called “Record Handling”][3] by the British computer scientist C.A.R. Hoare. If you read Hoare’s paper now, this is easy to believe. I’m surprised that you don’t hear Hoare’s name more often when people talk about the history of object-oriented languages. Consider this excerpt from his paper:

> The proposal envisages the existence inside the computer during the execution of the program, of an arbitrary number of records, each of which represents some object which is of past, present or future interest to the programmer. The program keeps dynamic control of the number of records in existence, and can create new records or destroy existing ones in accordance with the requirements of the task in hand.

> Each record in the computer must belong to one of a limited number of disjoint record classes; the programmer may declare as many record classes as he requires, and he associates with each class an identifier to name it. A record class name may be thought of as a common generic term like “cow,” “table,” or “house” and the records which belong to these classes represent the individual cows, tables, and houses.

Hoare does not mention subclasses in this particular paper, but Dahl credits him with introducing Nygaard and himself to the concept. Nygaard and Dahl had noticed that processes in Simula I often had common elements. Using a superclass to implement those common elements would be convenient. This also raised the possibility that the “process” idea itself could be implemented as a superclass, meaning that not every class had to be a process with a single operating rule. This then was the second great generalization that would make Simula 67 a truly general-purpose programming language. It was such a shift of focus that Nygaard and Dahl briefly considered changing the name of the language so that people would know it was not just for simulations. But “Simula” was too much of an established name for them to risk it.

In 1967, Nygaard and Dahl signed a contract with Control Data to implement this new version of Simula, to be known as Simula 67. A conference was held in June, where people from Control Data, the University of Oslo, and the Norwegian Computing Center met with Nygaard and Dahl to establish a specification for this new language. This conference eventually led to a document called the [“Simula 67 Common Base Language,”][4] which defined the language going forward.

Several different vendors would make Simula 67 compilers. The Association of Simula Users (ASU) was founded and began holding annual conferences. Simula 67 soon had users in more than 23 different countries.

### 21st Century Simula

Simula is remembered now because of its influence on the languages that have supplanted it. You would be hard-pressed to find anyone still using Simula to write application programs. But that doesn’t mean that Simula is an entirely dead language. You can still compile and run Simula programs on your computer today, thanks to [GNU cim][5].

The cim compiler implements the Simula standard as it was after a revision in 1986. But this is mostly the Simula 67 version of the language. You can write classes, subclasses, and virtual methods just as you would have with Simula 67. So you could create a small object-oriented program that looks a lot like something you could easily write in Python or Ruby:

```
! dogs.sim ;
Begin
    Class Dog;
        ! The cim compiler requires virtual procedures to be fully specified ;
        Virtual: Procedure bark Is Procedure bark;;
    Begin
        Procedure bark;
        Begin
            OutText("Woof!");
            OutImage; ! Outputs a newline ;
        End;
    End;

    Dog Class Chihuahua; ! Chihuahua is "prefixed" by Dog ;
    Begin
        Procedure bark;
        Begin
            OutText("Yap yap yap yap yap yap");
            OutImage;
        End;
    End;

    Ref (Dog) d;
    d :- new Chihuahua; ! :- is the reference assignment operator ;
    d.bark;
End;
```

You would compile and run it as follows:

```
$ cim dogs.sim
Compiling dogs.sim:
gcc -g -O2 -c dogs.c
gcc -g -O2 -o dogs dogs.o -L/usr/local/lib -lcim
$ ./dogs
Yap yap yap yap yap yap
```

(You might notice that cim compiles Simula to C, then hands off to a C compiler.)

This was what object-oriented programming looked like in 1967, and I hope you agree that, aside from syntactic differences, this is also what object-oriented programming looks like in 2019. So you can see why Simula is considered a historically important language.

But I’m more interested in showing you the process model that was central to Simula I. That process model is still available in Simula 67, but only when you use the `Process` class and a special `Simulation` block.

In order to show you how processes work, I’ve decided to simulate the following scenario. Imagine that there is a village full of villagers next to a river. The river has lots of fish, but between them the villagers only have one fishing rod. The villagers, who have voracious appetites, get hungry every 60 minutes or so. When they get hungry, they have to use the fishing rod to catch a fish. If a villager cannot use the fishing rod because another villager is waiting for it, then the villager queues up to use the fishing rod. If a villager has to wait more than five minutes to catch a fish, then the villager loses health. If a villager loses too much health, then that villager has starved to death.

This is a somewhat strange example and I’m not sure why this is what first came to mind. But there you go. We will represent our villagers as Simula processes and see what happens over a day’s worth of simulated time in a village with four villagers.

The full program is [available here as a Gist][6].

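The full program handles the queueing, fishing, and starvation logic; what follows is only a heavily simplified sketch of its overall shape, using the standard `Simulation` context just described. The body of the `Villager` class, the fixed `Hold` intervals, and the omitted fishing-rod handling are my own illustrative assumptions rather than excerpts from the actual program:

```
! village.sim -- a simplified sketch, not the program from the Gist ;
Simulation Begin

    Process Class Villager (name); Text name;
    Begin
        ! One simulated day is 1440 minutes ;
        While Time < 1440.0 Do
        Begin
            Hold(60.0); ! Get hungry roughly every 60 minutes ;
            OutFix(Time, 2, 8);
            OutText(": "); OutText(name);
            OutText(" is hungry and requests the fishing rod.");
            OutImage;
            ! ...queue for the rod, catch a fish, or starve... ;
        End;
    End;

    Activate new Villager("John");
    Activate new Villager("Betty");
    Activate new Villager("Jane");
    Activate new Villager("Sam");
    Hold(1440.0); ! Let the simulated day play out ;
End;
```

Note that the main block does nothing except activate the four villager processes and then hold for a simulated day; from then on, Simula’s event notices decide which villager runs next.
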
The last lines of my output look like the following. Here we are seeing what happens in the last few hours of the day:

```
1299.45: John is hungry and requests the fishing rod.
1299.45: John is now fishing.
1311.39: John has caught a fish.
1328.96: Betty is hungry and requests the fishing rod.
1328.96: Betty is now fishing.
1331.25: Jane is hungry and requests the fishing rod.
1340.44: Betty has caught a fish.
1340.44: Jane went hungry waiting for the rod.
1340.44: Jane starved to death waiting for the rod.
1369.21: John is hungry and requests the fishing rod.
1369.21: John is now fishing.
1379.33: John has caught a fish.
1409.59: Betty is hungry and requests the fishing rod.
1409.59: Betty is now fishing.
1419.98: Betty has caught a fish.
1427.53: John is hungry and requests the fishing rod.
1427.53: John is now fishing.
1437.52: John has caught a fish.
```

Poor Jane starved to death. But she lasted longer than Sam, who didn’t even make it to 7am. Betty and John sure have it good now that only two of them need the fishing rod.

What I want you to see here is that the main, top-level part of the program does nothing but create the four villager processes and get them going. The processes manipulate the fishing rod object in the same way that we would manipulate an object today. But the main part of the program does not call any methods or modify any properties on the processes. The processes have internal state, but this internal state only gets modified by the process itself.

There are still fields that get mutated in place here, so this style of programming does not directly address the problems that pure functional programming would solve. But as Krogdahl observes, “this mechanism invites the programmer of a simulation to model the underlying system as a set of processes, each describing some natural sequence of events in that system.” Rather than thinking primarily in terms of nouns or actors—objects that do things to other objects—here we are thinking of ongoing processes. The benefit is that we can hand overall control of our program off to Simula’s event notice system, which Krogdahl calls a “time manager.” So even though we are still mutating processes in place, no process makes any assumptions about the state of another process. Each process interacts with other processes only indirectly.

It’s not obvious how this pattern could be used to build, say, a compiler or an HTTP server. (On the other hand, if you’ve ever programmed games in the Unity game engine, this should look familiar.) I also admit that even though we have a “time manager” now, this may not have been exactly what Hickey meant when he said that we need an explicit notion of time in our programs. (I think he’d want something like the superscript notation [that Ada Lovelace used][7] to distinguish between the different values a variable assumes through time.) All the same, I think it’s really interesting that right there at the beginning of object-oriented programming we can find a style of programming that is not at all like the object-oriented programming we are used to. We might take it for granted that object-oriented programming simply works one way—that a program is just a long list of the things that certain objects do to other objects in the exact order that they do them. Simula I’s process system shows that there are other approaches. Functional languages are probably a better thought-out alternative, but Simula I reminds us that the very notion of alternatives to modern object-oriented programming should come as no surprise.

If you enjoyed this post, more like it comes out every four weeks! Follow [@TwoBitHistory][8] on Twitter or subscribe to the [RSS feed][9] to make sure you know when a new post is out.

Previously on TwoBitHistory…

> Hey everyone! I sadly haven't had time to do any new writing but I've just put up an updated version of my history of RSS. This version incorporates interviews I've since done with some of the key people behind RSS like Ramanathan Guha and Dan Libby. <https://t.co/WYPhvpTGqB>
>
> — TwoBitHistory (@TwoBitHistory) [December 18, 2018][10]

--------------------------------------------------------------------------------

1. Jan Rune Holmevik, “The History of Simula,” accessed January 31, 2019, http://campus.hesge.ch/daehne/2004-2005/langages/simula.htm. ↩

2. Ole-Johan Dahl and Kristen Nygaard, “SIMULA—An ALGOL-Based Simulation Language,” Communications of the ACM 9, no. 9 (September 1966): 671, accessed January 31, 2019, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.384&rep=rep1&type=pdf. ↩

3. Stein Krogdahl, “The Birth of Simula,” 2, accessed January 31, 2019, http://heim.ifi.uio.no/~steinkr/papers/HiNC1-webversion-simula.pdf. ↩

4. Ibid. ↩

5. Ole-Johan Dahl and Kristen Nygaard, “The Development of the Simula Languages,” ACM SIGPLAN Notices 13, no. 8 (August 1978): 248, accessed January 31, 2019, https://hannemyr.com/cache/knojd_acm78.pdf. ↩

6. Dahl and Nygaard (1966), 676. ↩

7. Dahl and Nygaard (1978), 257. ↩

8. Krogdahl, 3. ↩

9. Ole-Johan Dahl, “The Birth of Object-Orientation: The Simula Languages,” 3, accessed January 31, 2019, http://www.olejohandahl.info/old/birth-of-oo.pdf. ↩

10. Dahl and Nygaard (1978), 265. ↩

11. Holmevik. ↩

12. Krogdahl, 4. ↩

--------------------------------------------------------------------------------

via: https://twobithistory.org/2019/01/31/simula.html

Author: [Sinclair Target][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey
[2]: /images/river.jpg
[3]: https://archive.computerhistory.org/resources/text/algol/ACM_Algol_bulletin/1061032/p39-hoare.pdf
[4]: http://web.eah-jena.de/~kleine/history/languages/Simula-CommonBaseLanguage.pdf
[5]: https://www.gnu.org/software/cim/
[6]: https://gist.github.com/sinclairtarget/6364cd521010d28ee24dd41ab3d61a96
[7]: https://twobithistory.org/2018/08/18/ada-lovelace-note-g.html
[8]: https://twitter.com/TwoBitHistory
[9]: https://twobithistory.org/feed.xml
[10]: https://twitter.com/TwoBitHistory/status/1075075139543449600?ref_src=twsrc%5Etfw

@ -1,118 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Config management is dead: Long live Config Management Camp)
[#]: via: (https://opensource.com/article/19/2/configuration-management-camp)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)

Config management is dead: Long live Config Management Camp
======

CfgMgmtCamp '19 co-organizers share their take on ops, DevOps, observability, and the rise of YoloOps and YAML engineers.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc)

Everyone goes to [FOSDEM][1] in Brussels to learn from its massive collection of talk tracks, colloquially known as developer rooms, that run the gamut of topics, covering programming languages like Rust, Go, and Python as well as special topics ranging from community to legal to privacy. After two days of nonstop activity, many FOSDEM attendees move on to Ghent, Belgium, to join hundreds for Configuration Management Camp ([CfgMgmtCamp][2]).

Kris Buytaert and Toshaan Bharvani run the popular post-FOSDEM show centered around infrastructure management, featuring hackerspaces, training, workshops, and keynotes. It's a deeply technical exploration of the who, what, and how of building resilient infrastructure. It started in 2013 as a PuppetCamp but expanded to include more communities and tools in 2014.

I spoke with Kris and Toshaan, who both have a healthy sense of humor, about CfgMgmtCamp's past, present, and future. Our interview has been edited for length and clarity.

**Matthew: Your opening [keynote][3] is called "CfgMgmtCamp is dead." Is config management dead? Will it live on, or will something take its place?**

**Kris:** We've noticed people are jumping on the hype of containers, trying to solve the same problems in a different way. But they are still managing config, only in different ways and with other tools. Over the past couple of years, we've evolved from a conference with a focus on infrastructure-as-code tooling, such as Puppet, Chef, CFEngine, Ansible, Juju, and Salt, to a more open source infrastructure automation conference in general. So, config management is definitely not dead. Infrastructure-as-code is also not dead, but it is all evolving.

**Toshaan:** We see people changing tools, jumping on hype, and communities changing; however, the basic ideas and concepts remain the same.

**Matthew: It's great to see [observability as the topic][4] of one of your keynotes. Why should those who care about configuration management also care about monitoring and observability?**

**Kris:** While the name of the conference hasn't changed, the tools have evolved and we have expanded our horizon. Ten years ago, [Devopsdays][5] was just #devopsdays, but it evolved to focus on culture—the C of [CAMS][6] in DevOps' core principles of Culture, Automation, Measurement, and Sharing.

![](https://opensource.com/sites/default/files/uploads/cams.png)

[Monitorama][7] filled the gap on monitoring and metrics (tackling the M in CAMS). Config Management Camp is about open source Automation, the A. Since they are all open source conferences, they fulfill the Sharing part, completing the CAMS concept.

Observability sits on the line between Automation and Measurement. To go one step further, in some of my talks about open source monitoring, I describe the evolution of monitoring tools from #monitoringsucks to #monitoringlove; for lots of people (including me), the love for monitoring returned because we tied it to automation. We started to provision a service and automatically adapted the monitoring of that service to its state. Gone were the days where the monitoring tool was out of sync with reality.

Looking at it from the other side, when you have an infrastructure or application so complex that you need observability in it, you'd better not be deploying manually; you will need some form of automation at that level of complexity. So, observability and infrastructure automation are tied together.

**Toshaan:** Yes, while in the past we focused on configuration management, we will be looking to expand that into all types of infrastructure management. Last year, we played with this idea, and we were able to have a lot of cross-tool presentations. This year, we've taken this a step further by having more differentiated content.

**Matthew: Some of my virtualization and Linux admin friends push back, saying observability is a developer's responsibility. How would you respond without just saying "DevOps?"**

**Kris:** What you describe is what I call "Ooops Devs." This is a trend where the people who run the platform don't really care what they run; as long as port 80 is listening and the node pings, they are happy. It's equally bad as "Dev Ooops," where the devs rant about the ops folks because they are slow, not agile, and not responsive. But, to me, your job as an ops person or as a Linux admin is to keep a service running, and the only way to do that is to take on that task as a team—with your colleagues who have different roles and insights, people who write code, people who design, etc. It is a shared responsibility. And hiding behind "that is someone else's responsibility" doesn't smell like collaboration going on.

**Toshaan:** Even in the dark ages of silos, I believe a true sysadmin should have cared about observability, monitoring, and automation. I believe that the DevOps movement has made this much more widespread, and that it has become easier to get this information and expose it. On the other hand, I believe that pure operators or sysadmins have learned to be team players (or, they may have died out). I like the analogy of an army unit composed of different specialty soldiers who work together to complete a mission; we have engineers who work to deliver products or services.

**Matthew: In a [Devopsdays Zurich talk][8], Kris offered an opinion that Americans build software for acquisition and Europeans build for resilience. In that light, what are the best skills for someone who wants to build meaningful infrastructure?**

**Toshaan:** I believe some people still don't understand the complexity of code sprawl, and they believe that some new hype will solve this magically.

**Kris:** This year, we invited [Steve Traugott][9], co-author of the 1998 USENIX paper "[Bootstrapping an Infrastructure][10]" that helped kickstart our community. So many people never read [Infrastructures.org][11], never experienced the pain of building images and image sprawl, and don't understand the evolution we went through that led us to build things the way we build them from source code.

People should study topics such as idempotence, resilience, reproducibility, and surviving the tenth floor test. (As explained in "Bootstrapping an Infrastructure": "The test we used when designing infrastructures was 'Can I grab a random machine and throw it out the tenth-floor window without adversely impacting users for more than 10 minutes?' If the answer to this was 'yes,' then we knew we were doing things right.") But only after they understand the service they are building—the service is the absolute priority—can they begin working on things like: how can we run this, how can we make sure it keeps running, how can it fail and how can we prevent that, and if it disappears, how can we spin it up again fast, unnoticed by the end user.

**Toshaan:** 100% uptime.

**Kris:** The challenge we have is that lots of people don't have that experience yet. We've seen the rise of [YoloOps][12]—just spin it up once, fire, and forget—which results in security problems, stability problems, data loss, etc., and they often grasp onto the solutions in YoloOps, the easy way to do something quickly and move on. But understanding how things will eventually fail takes time; it's called experience.

**Toshaan:** Well, when I was a student and manned the CentOS stand at FOSDEM, I remember a guy coming up to the stand and complaining that he couldn't do consulting because of the "fire once and forget" policy of CentOS, and that it just worked too well. I like to call this ZombieOps, but YoloOps works also.

**Matthew: I see you're leading the second year of YamlCamp as well. Why does a markup language need its own camp?**

**Kris:** [YamlCamp][13] is a parody; it's a joke. Last year, Bob Walker ([@rjw1][14]) gave a talk titled "Are we all YAML engineers now?" that led to more jokes. We've had a discussion for years about rebranding CfgMgmtCamp; the problem is that people know our name, we have a large enough audience to keep going, and changing the name would mean effort spent on logos, website, DNS, etc. We won't change the name, but we joked that we could rebrand to YamlCamp, because for some weird reason, a lot of the talks are about YAML. :)

**Matthew: Do you think systems engineers should list YAML as a skill or a language on their CV? Should companies be hiring YAML engineers, or do you have "Long live all YAML engineers" on the website in jest?**

**Toshaan:** Well, the real question is whether people are willing to call themselves YAML engineers proudly, because we already have enough DevOps engineers.

**Matthew: What FOSS software helps you manage the event?**

**Toshaan:** I redid the website in Hugo CMS because we were spending too much time maintaining the website manually. I chose Hugo, because I was learning Golang, and because it has been successfully used for other conferences and my own website. I also wanted a static website and iCalendar output, so we could use calendar tooling such as Giggity to have a good scheduling tool.

The website now builds quite nicely, and while I still have some ideas on improvements, maintenance is now much easier.

For the call for proposals (CFP), we now use [OpenCFP][15]. We want to optimize the submission, voting, selection, and extraction to be as automated as possible, while being easy and comfortable for potential speakers, reviewers, and ourselves to use. OpenCFP seems to be the tool that works; while we still have some feature requirements, I believe that, once we have some time to contribute back to OpenCFP, we'll have a fully functional and easy tool to run CFPs with.

Last, we switched from EventBrite to Pretix because I wanted to be GDPR compliant and have the ability to run our questions, vouchers, and extra features. Pretix allows us to control registration of attendees, speakers, sponsors, and organizers and have a single overview of all the people coming to the event.

### Wrapping up

The beauty of Configuration Management Camp to me is that it continues to evolve with its audience. Configuration management is certainly at the heart of the work, but it's in service to resilient infrastructure. Keep your eyes open for the talk recordings to learn from the [lineup of incredible speakers][16], and thank you to the team for running this (free) show!

You can follow Kris [@KrisBuytaert][17] and Toshaan [@toshywoshy][18]. You can also see Kris' past articles [on his blog][19].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/2/configuration-management-camp

Author: [Matthew Broberg][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://fosdem.org/2019/
[2]: https://cfgmgmtcamp.eu/
[3]: https://cfgmgmtcamp.eu/schedule/monday/intro00/
[4]: https://cfgmgmtcamp.eu/schedule/monday/keynote0/
[5]: https://www.devopsdays.org/
[6]: http://devopsdictionary.com/wiki/CAMS
[7]: http://monitorama.com/
[8]: https://vimeo.com/272519813
[9]: https://cfgmgmtcamp.eu/schedule/tuesday/keynote1/
[10]: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
[11]: http://www.infrastructures.org/
[12]: https://gist.githubusercontent.com/mariozig/5025613/raw/yolo
[13]: https://twitter.com/yamlcamp
[14]: https://twitter.com/rjw1
[15]: https://github.com/opencfp/opencfp
[16]: https://cfgmgmtcamp.eu/speaker/
[17]: https://twitter.com/KrisBuytaert
[18]: https://twitter.com/toshywoshy
[19]: https://krisbuytaert.be/index.shtml

@ -1,82 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 steps to becoming an awesome agile developer)
[#]: via: (https://opensource.com/article/19/2/steps-agile-developer)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)

4 steps to becoming an awesome agile developer
======

There's no magical way to do it, but these practices will put you well on your way to embracing agile in application development, testing, and debugging.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk)

Enterprises are rushing into their DevOps journey through [agile][1] software development with cloud-native technologies such as [Linux containers][2], [Kubernetes][3], and [serverless][4]. Continuous integration helps enterprise developers reduce bugs and unexpected errors and improve the quality of their code deployed in production.

However, this doesn't mean all developers in DevOps automatically embrace agile for their daily work in application development, testing, and debugging. There is no magical way to do it, but the following four practical steps and best practices will put you well on your way to becoming an awesome agile developer.

### Start with design thinking agile practices

There are many opportunities to learn about using agile software development practices in your DevOps initiatives. Agile practices inspire people with new ideas and experiences for improving their daily work in application development with team collaboration. More importantly, those practices will help you discover the answers to questions such as: Why am I doing this? What kind of problems am I trying to solve? How do I measure the outcomes?

A [domain-driven design][5] approach will help you start discovery sooner and more easily. For example, the [Start At The End][6] practice helps you redesign your application and explore potential business outcomes—such as, what would happen if your application fails in production? You might also be interested in [Event Storming][7] for interactive and rapid discovery or [Impact Mapping][8] for graphical and strategic design as part of domain-driven design practices.

### Use a predictive approach first

In agile software development projects, enterprise developers are mainly focused on adapting to rapidly changing app development environments such as reactive runtimes, cloud-native frameworks, Linux container packaging, and the Kubernetes platform. They believe this is the best way to become an agile developer in their organization. However, this type of adaptive approach typically makes it harder for developers to understand and report what they will do in the next sprint. Developers might know the ultimate goal and, at best, the app features for a release about four months from the current sprint.

In contrast, the predictive approach places more emphasis on analyzing known risks and planning future sprints in detail. For example, predictive developers can accurately report the functions and tasks planned for the entire development process. But it's not a magical way to make your agile projects succeed all the time because the predictive team depends totally on effective early-stage analysis. If the analysis does not work very well, it may be difficult for the project to change direction once it gets started.

To mitigate this risk, I recommend that senior agile developers increase their predictive capabilities with a plan-driven method and that junior agile developers start with adaptive methods for value-driven development.

### Continuously improve code quality

Don't hesitate to engage in [continuous integration][9] (CI) practices for improving your application before deploying code into production. To adopt modern application frameworks, such as cloud-native architecture, Linux container packaging, and hybrid cloud workloads, you have to learn about automated tools to address complex CI procedures.

[Jenkins][10] is the standard CI tool for many organizations; it allows developers to build and test applications in many projects in an automated fashion. Its most important function is detecting unexpected errors during CI to prevent them from happening in production. This should increase business outcomes through better customer satisfaction.

Automated CI enables agile developers to improve not only the quality of their code but also their application development agility through learning and using open source tools and patterns such as [behavior-driven development][11], [test-driven development][12], [automated unit testing][13], [pair programming][14], [code review][15], and [design patterns][16].

### Never stop exploring communities

Never settle, even if you already have a great reputation as an agile developer. You have to continuously take on bigger challenges to make great software in an agile way.

By participating in the very active and growing open source community, you will not only improve your skills as an agile developer, but your actions can also inspire other developers who want to learn agile practices.

How do you get involved in specific communities? It depends on your interests and what you want to learn. It might mean presenting specific topics at conferences or local meetups, writing technical blog posts, publishing practical guidebooks, committing code, or creating pull requests to open source projects' Git repositories. It's worth exploring open source communities for agile software development, as I've found it is a great way to share your expertise, knowledge, and practices with other brilliant developers and, along the way, help each other.

### Get started

These practical steps can give you a shorter path to becoming an awesome agile developer. Then you can lead junior developers in your team and organization to become more flexible, valuable, and predictive using agile principles.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/2/steps-agile-developer

Author: [Daniel Oh][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/18/10/what-agile
[2]: https://opensource.com/resources/what-are-linux-containers
[3]: https://opensource.com/resources/what-is-kubernetes
[4]: https://opensource.com/article/18/11/open-source-serverless-platforms
[5]: https://en.wikipedia.org/wiki/Domain-driven_design
[6]: https://openpracticelibrary.com/practice/start-at-the-end/
[7]: https://openpracticelibrary.com/practice/event-storming/
[8]: https://openpracticelibrary.com/practice/impact-mapping/
[9]: https://en.wikipedia.org/wiki/Continuous_integration
[10]: https://jenkins.io/
[11]: https://en.wikipedia.org/wiki/Behavior-driven_development
[12]: https://en.wikipedia.org/wiki/Test-driven_development
[13]: https://en.wikipedia.org/wiki/Unit_testing
[14]: https://en.wikipedia.org/wiki/Pair_programming
[15]: https://en.wikipedia.org/wiki/Code_review
[16]: https://en.wikipedia.org/wiki/Design_pattern

@ -1,64 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What blockchain and open source communities have in common)
[#]: via: (https://opensource.com/article/19/2/blockchain-open-source-communities)
[#]: author: (Gordon Haff https://opensource.com/users/ghaff)

What blockchain and open source communities have in common
======

Blockchain initiatives can look to open source governance for lessons on establishing trust.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity_flowers_chain.jpg?itok=ns01UPOp)

One of the characteristics of blockchains that gets a lot of attention is how they enable distributed trust. The topic of trust is a surprisingly complicated one. In fact, there's now an [entire book][1] devoted to the topic by Kevin Werbach.

But here's what it means in a nutshell. Organizations that wish to work together, but do not fully trust one another, can establish a permissioned blockchain and invite business partners to record their transactions on a shared distributed ledger. Permissioned blockchains can trace assets when transactions are added to the blockchain. A permissioned blockchain implies a degree of trust (again, trust is complicated) among members of a consortium, but no single entity controls the storage and validation of transactions.

The basic model is that a group of financial institutions or participants in a logistics system can jointly set up a permissioned blockchain that will validate and immutably record transactions. There's no dependence on a single entity, whether it's one of the direct participants or a third-party intermediary who set up the blockchain, to safeguard the integrity of the system. The blockchain itself does so through a variety of cryptographic mechanisms.

Here's the rub, though. It requires that competitors work together cooperatively—a relationship often called [coopetition][2]. The term dates back to the early 20th century, but it grew into widespread use when former Novell CEO Ray Noorda started using the term to describe the company's business strategy in the 1990s. Novell was then planning to get into the internet portal business, which required it to seek partnerships with some of the search engine providers and other companies it would also be competing against. In 1996, coopetition became the subject of a bestselling [book][3].

Coopetition can be especially difficult when a blockchain network initiative appears to be driven by a dominant company. And it's hard for the dominant company not to exert outsize influence over the initiative, just as a natural consequence of how big it is. For example, the IBM-Maersk joint venture has [struggled to sign up rival shipping companies][4], in part because Maersk is the world's largest carrier by capacity, a position that makes rivals wary.

We see this same dynamic in open source communities. The original creators of a project need not only to let go; they also need to put governance structures in place that give competing companies confidence that there's a level playing field.

For example, Sarah Novotny, now head of open source strategy at Google Cloud Platform, [told me in a 2017 interview][5] about the [Kubernetes][6] project that it isn't always easy to give up control, even when people buy into doing what is best for a project.

> Google turned Kubernetes over to the Cloud Native Computing Foundation (CNCF), which sits under the Linux Foundation umbrella. As [CNCF executive director Dan Kohn puts it][7]: "One of the things they realized very early on is that a project with a neutral home is always going to achieve a higher level of collaboration. They really wanted to find a home for it where a number of different companies could participate."
>
> Defaulting to public may not be either natural or comfortable. "Early on, my first six, eight, or 12 weeks at Google, I think half my electrons in email were spent on: 'Why is this discussion not happening on a public mailing list? Is there a reason that this is specific to GKE [Google Container Engine]? No, there's not a reason,'" said Novotny.

To be sure, some grumble that open source foundations have become too common and that many are too dominated by paying corporate members. Simon Phipps, currently the president of the Open Source Initiative, gave a talk at OSCON way back in 2015 titled ["Enough Foundations Already!"][8] in which he argued that "before we start another open source foundation, let's agree that what we need protected is software freedom and not corporate politics."

Nonetheless, while not appropriate for every project, foundations with business, legal, and technical governance are increasingly the model for open source projects that require extensive cooperation among competing companies. A [2017 analysis of GitHub data by the Linux Foundation][9] found a number of different governance models in use by the highest-velocity open source projects. Unsurprisingly, quite a few remained under the control of the company that created or acquired them. However, about a third were under the auspices of a foundation.

Is there a lesson here for blockchain? Quite possibly. Open source projects can be sponsored by a company while still putting systems and governance in place that are welcoming to outside contributors. However, there's a great deal of history to suggest that doing so is hard because it's hard not to exert control and leverage when you can. Furthermore, even if you make a successful case for being truly open to equal participation by outsiders today, it will be hard to allay suspicions that you might not be as welcoming tomorrow.

To the degree that we can equate blockchain consortiums with open source communities, this suggests that business blockchain initiatives should look to open source governance for lessons. Dominant players in the ecosystem need to forgo control, and they need to have conversations with partners and potential partners about what types of structures would make participating easier.

Many blockchain infrastructure software projects are already under foundations such as Hyperledger. But perhaps some specific production deployments of blockchain aimed at specific industries and ecosystems will benefit from formal governance structures as well.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/2/blockchain-open-source-communities

Author: [Gordon Haff][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/ghaff
[b]: https://github.com/lujun9972
[1]: https://mitpress.mit.edu/books/blockchain-and-new-architecture-trust
[2]: https://en.wikipedia.org/wiki/Coopetition
[3]: https://en.wikipedia.org/wiki/Co-opetition_(book)
[4]: https://www.theregister.co.uk/2018/10/30/ibm_struggles_to_sign_up_shipping_carriers_to_blockchain_supply_chain_platform_reports/
[5]: https://opensource.com/article/17/4/podcast-kubernetes-sarah-novotny
[6]: https://kubernetes.io/
[7]: http://bitmason.blogspot.com/2017/02/podcast-cloud-native-computing.html
[8]: https://www.oreilly.com/ideas/enough-foundations-already
[9]: https://www.linuxfoundation.org/blog/2017/08/successful-open-source-projects-common/

@ -1,80 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 5 podcasts for Linux news and tips)
[#]: via: (https://opensource.com/article/19/2/top-linux-podcasts)
[#]: author: (Stephen Bancroft https://opensource.com/users/stevereaver)

Top 5 podcasts for Linux news and tips
======

A tried and tested podcast listener shares his favorite Linux podcasts, plus a couple of bonus picks.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguin-penguins.png?itok=5hlVDue7)

Like many Linux enthusiasts, I listen to a lot of podcasts. I find my daily commute is the best time to get some time to myself and catch up on the latest tech news. Over the years, I have subscribed and unsubscribed to more show feeds than I care to think about and have distilled them down to the best of the best.

Here are my top five Linux podcasts I think you should be listening to in 2019, plus a couple of bonus picks.

5. [**Late Night Linux**][1]—This podcast, hosted by Joe, [Félim][2], [Graham][3], and [Will][4] from the UK, is rough, ready, and pulls no punches. [Joe Ressington][5] is always ready to tell it how it is, and Félim is always quick with his opinions. It's presented in a casual conversation format—but not one to listen to with the kids around, especially with subjects they are all passionate about!

4. [**Ask Noah Show**][6]—This show was forked from the Linux Action Show after it ended. Hosted by [Noah Chelliah][7], it's presented in a radio talkback style and takes live calls from listeners—it's syndicated from a local radio station in Grand Forks, North Dakota. The podcast isn't purely about Linux, but Noah takes on technical challenges and solves them with Linux and answers listeners' questions about how to achieve good technical solutions using Linux.

3. [**The Ubuntu Podcast**][8]—If you want the latest about Ubuntu, you can't go past this show. In another podcast with a UK twist, hosts [Alan Pope][9] (Popey), [Mark Johnson][10], and [Martin Wimpress][11] (Wimpy) present a funny and insightful view of the open source community with news directly from Ubuntu.

2. [**Linux Action News**][12]—The title says it all: it's a news show for Linux. This show was spawned from the popular Linux Action Show and is broadcast by the [Jupiter Broadcasting Network][13], which has many other tech-related podcasts. Hosts Chris Fisher and [Joe Ressington][5] present the show in a more formal "evening news" style, which runs around 30 minutes. If you want to get a quick weekly update on Linux and Linux-related news, this is the show for you.

1. [**Linux Unplugged**][14]—Finally, coming in at the number one spot is the granddaddy of them all, Linux Unplugged. This show gets to the core of what being in the Linux community is all about. Presented as a casual panel-style discussion by [Chris Fisher][15] and [Wes Payne][16], the podcast includes an interactive voice chatroom where listeners can connect and be heard live on the show as it broadcasts.

Well, there you have it, my current shortlist of Linux podcasts. It's likely to change in the near future, but for now, I am enjoying every minute these guys put together.

### Bonus podcasts

Here are two bonus podcasts you might want to check out.

**[Choose Linux][17]** is a brand-new podcast that is tantalizing because of its hosts: Joe Ressington of Linux Action News, who is a long-time Linux veteran, and [Jason Evangelho][18], a Forbes writer who recently shot to fame in the open source community with his articles showcasing his introduction to Linux and open source. Living vicariously through Jason's introduction to Linux has been and will continue to be fun.

[**Command Line Heroes**][19] is a podcast produced by Red Hat. It has a very high production standard and a slightly different format from the shows I have previously mentioned, anchored by a single presenter: developer and [CodeNewbie][20] founder [Saron Yitbarek][21], who presents the latest innovations in open source. It's now in its second season and is released fortnightly; I highly recommend that you start from the first episode, which opens with a great intro to the OS wars of the '90s and sets the foundation for the story of Linux.

Do you have a favorite Linux podcast that isn't on this list? Please share it in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/2/top-linux-podcasts

Author: [Stephen Bancroft][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/stevereaver
[b]: https://github.com/lujun9972
[1]: https://latenightlinux.com/
[2]: https://twitter.com/felimwhiteley
[3]: https://twitter.com/degville
[4]: https://twitter.com/8none1
[5]: https://twitter.com/JoeRessington
[6]: http://www.asknoahshow.com/
[7]: https://twitter.com/kernellinux?lang=en
[8]: http://ubuntupodcast.org/
[9]: https://twitter.com/popey
[10]: https://twitter.com/marxjohnson
[11]: https://twitter.com/m_wimpress
[12]: https://linuxactionnews.com/
[13]: https://www.jupiterbroadcasting.com/
[14]: https://linuxunplugged.com/
[15]: https://twitter.com/ChrisLAS
[16]: https://twitter.com/wespayne
[17]: https://chooselinux.show
[18]: https://twitter.com/killyourfm
[19]: https://www.redhat.com/en/command-line-heroes
[20]: https://www.codenewbie.org/
[21]: https://twitter.com/saronyitbarek

@ -1,99 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Linux testing has changed and what matters today)
[#]: via: (https://opensource.com/article/19/2/phoronix-michael-larabel)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

How Linux testing has changed and what matters today
======

Michael Larabel, the founder of Phoronix, shares his insights on the evolution of Linux and open hardware.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga)

If you've ever wondered how your Linux computer stacks up against other Linux, Windows, and MacOS machines or searched for reviews of Linux-compatible hardware, you're probably familiar with [Phoronix][1]. Along with its website, which attracts more than 250 million visitors a year to its Linux reviews and news, the company also offers the [Phoronix Test Suite][2], an open source hardware benchmarking tool, and [OpenBenchmarking.org][3], where test result data is stored.

According to [Michael Larabel][4], who started Phoronix in 2004, the site "is frequently cited as being the leading source for those interested in computer hardware and Linux. It offers insights regarding the development of the Linux kernel, product reviews, interviews, and news regarding free and open source software."

I recently had the opportunity to interview Michael about Phoronix and his work.

The questions and answers have been edited for length and clarity.

**Don Watkins:** What inspired you to start Phoronix?

**Michael Larabel:** When I started [Phoronix.com][5] in June 2004, it was still challenging to get a mouse or other USB peripherals working on the popular distributions of the time, like Mandrake, Yoper, MEPIS, and others. So, I set out to work on reviewing different hardware components and their compatibility with Linux. Over time, that shifted more from "does the basic device work?" to how well they perform and what features are supported or unsupported under Linux.

It's been interesting to see the evolution and the importance of Linux on hardware rise. Linux was very common on LAMP/web servers, but Linux has also become synonymous with high-performance computing (HPC), Android smartphones, cloud software, autonomous vehicles, edge computing, digital signage, and related areas. While Linux hasn't quite dominated the desktop, it's doing great practically everywhere else.

I also developed the Phoronix Test Suite, with its initial 1.0 public release in 2008, to increase the viability of testing on Linux, engage with more hardware and software vendors on best practices for testing, and just get more test cases running on Linux. At the time, there weren't any really shiny benchmarks on Linux like there were on Windows.

**DW:** Who are your website's readers?

**ML:** Phoronix's audience is as diverse as the content. Initially, it was quite desktop/gamer/enthusiast oriented, but as Linux's dominance has grown in HPC, cloud, embedded, etc., my testing has expanded in those areas and thus so has the readership. Readers tend to be interested in open source/Linux ecosystem advancements and performance, with a slight bent towards graphics processor and hardware driver interests.

**DW:** How important is testing in the Linux world and how has it changed from when you started?

**ML:** Testing has changed radically since 2004. Back then, many open source projects weren't carrying out any continuous integration (CI) or testing for regressions—both functional issues and performance problems. The hardware vendors supporting Linux were mostly trying to get things working and maintained while being less concerned about performance or scratching away at catching up to Mac, Solaris, and Windows. With time, we've seen the desktop reach close parity with (or exceed, depending upon your views) alternative operating systems. Most PC hardware now works out-of-the-box on Linux, most open source projects engage in some form of CI or testing, and more time and resources are afforded to advancing Linux performance. With high-frequency trading and cloud platforms relying on Linux, performance has become of utmost importance.

Most of my testing at Phoronix.com is focused on benchmarking processors, graphics cards, storage devices, and other areas of interest to gamers and enthusiasts, but also interesting server platforms. Readers are also quite interested in testing of software components like the Linux kernel, code compilers, and filesystems. But in terms of the Phoronix Test Suite, its scope is rather limitless, with a framework in which new tests can be easily added and automated. There are currently more than 1,000 different profiles/suites, and new ones are routinely added—from machine learning tests to traditional benchmarks.

**DW:** How important is open source hardware? Where do you see it going?

**ML:** Open hardware is of increasing importance, especially in light of all the security vulnerabilities and disclosures in recent years. Facebook's work on the [Open Compute Project][6] can be commended, as can Google leveraging [Coreboot][7] in its Chromebook devices, and [Raptor Computing Systems][8]' successful, high-performance, open source POWER9 desktops/workstations/servers. [Intel][9] potentially open sourcing its firmware support package this year is also incredibly tantalizing and will hopefully spur more efforts in this space.

Outside of that, open source hardware has had a really tough time cracking the consumer space due to the sheer amount of capital necessary and the complexities of designing a modern chip, etc., not to mention competing with the established hardware vendors' marketing budgets and other resources. So, while I would love for 100% open source hardware to dominate—or even compete in features and performance with proprietary hardware—in most segments, that is sadly unlikely to happen, especially with open hardware generally being much more expensive due to economies of scale.

Software efforts like [OpenBMC][10], Coreboot/[Libreboot][11], and [LinuxBoot][12] are opening up hardware much more. Those efforts at liberating hardware have proven successful and will hopefully continue to be endorsed by more organizations.

As for [OSHWA][13], I certainly applaud their efforts and the enthusiasm they bring to open source hardware. Certainly, for niche and smaller-scale devices, open source hardware can be a great fit. It will certainly be interesting to see what comes about with OSHWA and some of its partners like Lulzbot, Adafruit, and System76.

**DW:** Can people install Phoronix Test Suite on their own computers?

**ML:** The Phoronix Test Suite benchmarking software is open source under the GPL and can be downloaded from [Phoronix-Test-Suite.com][2] and [GitHub][14]. The benchmarking software works not only on Linux systems but also on MacOS, Solaris, BSD, and Windows 10/Windows Server. The Phoronix Test Suite works on x86/x86_64, ARM/AArch64, POWER, RISC-V, and other architectures.

**DW:** How does [OpenBenchmarking.org][15] work with the Phoronix Test Suite?

**ML:** OpenBenchmarking.org is, in essence, the "cloud" component to the Phoronix Test Suite. It stores test profiles/test suites in a package manager-like fashion, allows users to upload their own benchmarking results, and offers related functionality around our benchmarking software.

OpenBenchmarking.org is seamlessly integrated into the Phoronix Test Suite, but from the web interface, it is also where anyone can see the public benchmark results, inspect the open source test profiles to understand their methodology, research hardware and software data, and use similar functionality.

Another component developed as part of the Phoronix Test Suite is [Phoromatic][16], which effectively allows anyone to deploy their own OpenBenchmarking-like environment within their own private intranet/LAN. This allows organizations to archive their benchmark results locally (and privately), orchestrate benchmarks automatically against groups of systems, manage the benchmark systems, and develop new test cases.

**DW:** How can people stay up to date on Phoronix?

**ML:** You can follow [me][17], [Phoronix][18], [Phoronix Test Suite][19], and [OpenBenchMarking.org][20] on Twitter.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/2/phoronix-michael-larabel

Author: [Don Watkins][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://www.phoronix.com/
[2]: https://www.phoronix-test-suite.com/
[3]: https://openbenchmarking.org/
[4]: https://www.michaellarabel.com/
[5]: http://Phoronix.com
[6]: https://www.opencompute.org/
[7]: https://www.coreboot.org/
[8]: https://www.raptorcs.com/
[9]: https://www.phoronix.com/scan.php?page=news_item&px=Intel-Open-Source-FSP-Likely
[10]: https://en.wikipedia.org/wiki/OpenBMC
[11]: https://libreboot.org/
[12]: https://linuxboot.org/
[13]: https://www.oshwa.org/
[14]: https://github.com/phoronix-test-suite/
[15]: http://OpenBenchmarking.org
[16]: http://www.phoronix-test-suite.com/index.php?k=phoromatic
[17]: https://twitter.com/michaellarabel
[18]: https://twitter.com/phoronix
[19]: https://twitter.com/Phoromatic
[20]: https://twitter.com/OpenBenchmark

@ -1,136 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How our non-profit works openly to make education accessible)
[#]: via: (https://opensource.com/open-organization/19/2/building-curriculahub)
[#]: author: (Tanner Johnson https://opensource.com/users/johnsontanner3)

How our non-profit works openly to make education accessible
======
To build an open access education hub, our team practiced the same open methods we teach our students.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Education_2_OpenAccess_1040x584_12268077_0614MM.png?itok=xb96iaHe)

I'm lucky to work with a team of impressive students at Duke University who are leaders in their classrooms and beyond. As members of [CSbyUs][1], a non-profit and student-run organization based at Duke, we connect university students to middle school students, mostly from [Title I schools][2] across North Carolina's Research Triangle Park. Our mission is to fuel future change agents from under-resourced learning environments by fostering critical technology skills for thriving in the digital age.

The CSbyUs Tech R&D team (TRD for short) recently set an ambitious goal to build and deploy a powerful web application over the course of one fall semester. Our team of six knew we had to do something about our workflow to ship a product by winter break. In our middle school classrooms, we teach our learners to use agile methodologies and design thinking to create mobile applications. On the TRD team, we realized we needed to practice what we preach in those classrooms to ship a quality product by semester's end.

This is the story of how and why we utilized the principles we teach our students in order to deploy technology that will scale our mission and make our teaching resources open and accessible.

### Setting the scene

For the past two years, CSbyUs has operated "on the ground," connecting Duke undergraduates to Durham middle schools via after-school programming. After teaching and evaluating several iterations of our unique, student-centered mobile app development curriculum, we saw promising results. Our middle schoolers were creating functional mobile apps, connecting to their mentors, and leaving the class more confident in their computer science skills. Naturally, we wondered how to expand our programming.

We knew we should take our own advice and lean into web-based technologies to share our work, but we weren't immediately sure what problem we needed to solve. Ultimately, we decided to create a web app that serves as a centralized hub for open source and open access digital education curricula. "CurriculaHub" (name inspired by GitHub) would be the defining pillar of CSbyUs's new website, where educators could share and adapt resources.

But the vision and implementation didn't happen overnight.

Given our sense of urgency and the potential of "CurriculaHub," we wanted to start this project with a well-defined plan. The stakes were (and are) high, so planning, albeit occasionally tedious, was critical to our success. Like the curriculum we teach, we scaffolded our workflow process with design thinking and agile methodology, two critical 21st century frameworks we often fail to practice in higher ed.

What follows is a stepwise explanation of our design thinking process, starting from inspiration and ending in a shipped prototype.

### Our Process

#### **Step 1: Pre-Work**

In order to understand the why behind our what, you have to know who our team is.

The members of this team are busy. All of us contribute to CSbyUs beyond our TRD-related responsibilities. As an organization with lofty goals beyond creating a web-based platform, we have to reconcile our "on the ground" commitments (i.e., curriculum curation, research and evaluation, mentorship training and practice, presentations at conferences, etc.) with our "in the cloud" technological goals.

In addition to balancing time across our organization, we have to be flexible in the ways we communicate. As a remote member of the team, I'm writing this post from Spain, but the rest of our team is based in North Carolina, adding collaboration challenges.

Before diving into development (or even problem identification), we knew we had to set some clear expectations for how we'd operate as a team. We took a note from our curriculum team's book and started with some [rules of engagement][3]. This is actually [a well-documented approach][4] to setting up a team's [social contract][5] used by teams across the tech space. During a summer internship at IBM, I remember pre-project meetings where my manager and team spent more than an hour clarifying principles of interaction. Whenever we faced uncertainty in our team operations, we'd pull out the rules of engagement and clear things up almost immediately. (An aside: I've found this strategy to be wildly effective not only in my teams, but in all relationships.)

Considering the remote nature of our team, one of our favorite tools is Slack. We use it for almost everything. We can't have sticky-note brainstorms, so we create Slack brainstorm threads. In fact, that's exactly what we did to generate our rules of engagement. One [open source principle we take to heart is transparency][6]; Slack allows us to archive and openly share our thought processes and decision-making steps with the rest of our team.

#### **Step 2: Empathy Research**

We're all here for unique reasons, but we find a common intersection: the desire to broaden equity in access to quality digital era education.

Each member of our team has been lucky enough to study at Duke. We know how it feels to have limitless opportunities and the support of talented peers and renowned professors. But we're mindful that this isn't normal. Across the country and beyond, these opportunities are few and far between. Where they do exist, they're confined within the guarded walls of higher institutes of learning or come with a lofty price tag.

While our team members' common desire to broaden access is clear, we work hard to root our decisions in research. So our team begins each semester [reviewing][7] [research][8] that justifies our existence. TRD works with CRD (curriculum research and development) and TT (teaching team), our two other CSbyUs sub-teams, to discuss current trends in digital education access, their systemic roots, and novel approaches to broaden access and make materials relevant to learners. We not only perform research collaboratively at the beginning of the semester but also implement weekly stand-up research meetings with the sub-teams. During these, CRD often presents new findings we've gleaned from interviewing current teachers and digging into the current state of access in our local community. They are our constant source of data-driven, empathy-fueling research.

Through this type of empathy-based research, we have found that educators interested in student-centered teaching and digital era education lack a centralized space for proven and adaptable curricula and lesson plans. The bureaucracy and rigid structures that shape classroom learning in the United States make reshaping curricula around the personal needs of students daunting and seemingly impossible. As students, educators, and technologists, we wondered how we might unleash the creativity and agency of others by sharing our own resources and creating an online ecosystem of support.

#### **Step 3: Defining the Problem**

We wanted to avoid [scope creep][9] caused by a poorly defined mission and vision (something that happens too often in some organizations). We needed structures to define our goals and maintain clarity in scope. Before imagining our application features, we knew we'd have to start with defining our north star: a clear problem statement to which we could refer throughout development.

This is common practice for us. Before committing to new programming, new partnerships, or new changes, the CSbyUs team always refers back to our mission and vision and asks, "Does this make sense?" (In fact, we post our mission and vision at the top of every meeting minutes document.) If it fits and we have capacity to pursue it, we go for it. And if we don't, then we don't. In the case of a "no," we are always sure to document what and why because, as engineers know, [detailed logs are almost always a good decision][10]. TRD gleaned that big-picture wisdom and implemented a group-defined problem statement to guide our sub-team mission and future development decisions.

To formulate a single, succinct problem statement, we each began by posting our own takes on the problem. Then, during one of our weekly [30-minute-no-more-no-less stand-up meetings][11], we identified commonalities and differences, ultimately [merging all our ideas into one][12]. Boiled down, we identified that there exist massive barriers for educators, parents, and students to share, modify, and discuss open source and accessible curricula. And of course, our mission would be to break down those barriers with user-centered technology. This "north star" lives as a highly visible document in our Google Drive, and it has influenced our feature prioritization and future directions.

#### **Step 4: Ideating a Solution**

With our problem defined and our rules of engagement established, we were ready to imagine a solution.

We believe that effective structures can ensure meritocracy and community. Sometimes, certain personalities dominate team decision-making and leave little space for collaborative input. To avoid that pitfall and maximize our equality of voice, we tend to use "offline" individual brainstorms and merge collective ideas online. It's the same process we used to create our rules of engagement and problem statement. In the case of ideating a solution, we started with "offline" brainstorms of three [S.M.A.R.T. goals][13]. Those goals would be ones we could achieve as a software development team (specifically because the CRD and TT teams offer different skill sets) and address our problem statement. Finally, we wrote these goals in a meeting minutes document, clustering common goals and ultimately identifying themes that describe our application features. In the end, we identified three: support, feedback, and open source curricula.

From here, we divided ourselves into sub-teams, repeating the goal-setting process with those teams—but in a way that was specific to our features. And if it's not obvious by now, we realized a web-based platform would be the best and most scalable solution for supporting students, educators, and parents by providing a hub for sharing and adapting proven curricula.

To work efficiently, we needed to be adaptive, reinforcing structures that worked and eliminating those that didn't. For example, we put a lot of effort into crafting meeting agendas. We strive to include only those subjects we must discuss in person and table everything else for offline discussions on Slack or individually organized calls. We practice this in real time, too. During our regular meetings on Google Hangouts, if someone brings up a topic that isn't highly relevant or urgent, the current stand-up lead (a role that rotates weekly) "parking lots" it until the end of the meeting. If we have space at the end, we pull from the parking lot, and if not, we reserve that discussion for a Slack thread.

This prioritization structure has led to massive gains in meeting efficiency and a focus on progress updates, shared technical hurdle discussions, collective decision-making, and assigning actionable tasks (the next steps a person has committed to taking, documented with their name attached for everyone to view).

#### **Step 5: Prototyping**

This is where the fun starts.

Given our requirements—like an interactive user experience, the ability to collaborate on blogs and curricula, and the ability to receive feedback from our users—we began identifying the best technologies. Ultimately, we decided to build our web app with a ReactJS frontend and a Ruby on Rails backend. We chose these due to the extensive documentation and active community for both, and the well-maintained libraries that bridge the relationship between the two (e.g., react-on-rails). Since we chose Rails for our backend, it was obvious from the start that we'd work within a Model-View-Controller framework.

Most of us didn't have previous experience with web development, on either the frontend or the backend. So getting up and running with either technology independently presented a steep learning curve, and gluing the two together only steepened it. To centralize our work, we use an open-access GitHub repository. Given our relatively novice experience in web development, our success hinged on extremely efficient and open collaboration.

And to explain that, we need to revisit the idea of structures. Some of ours include peer code reviews, where we exchange best practices and reusable solutions; up-to-date technical and user documentation, so we can look back and understand design decisions; and (my personal favorite) our questions bot on Slack, which gently reminds us to post and answer questions in a dedicated #questions channel.
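
CSbyUs hasn't published the bot itself, so the following is a minimal, hypothetical Python sketch of the idea using the `slack_sdk` package; the token variable, channel name, and message wording are all assumptions.

```
import os

from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

# Hypothetical "questions bot": post a gentle reminder into a dedicated
# #questions channel. Run from cron or any scheduler of your choice.
client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])  # assumed env var

def post_reminder(channel="#questions"):
    try:
        client.chat_postMessage(
            channel=channel,
            text="Friendly reminder: post anything you're stuck on here, "
                 "and answer a teammate's question if you can!",
        )
    except SlackApiError as err:
        print(f"Failed to post reminder: {err.response['error']}")

if __name__ == "__main__":
    post_reminder()
```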

We've also dabbled with other strategies, like instructional videos for generating basic React components and rendering them in Rails views. I tried this approach: in my first video, [I covered a basic introduction to our repository structure][14] and best practices for generating React components. While this proved useful, our team has since realized the wealth of online resources that robustly document various implementations of these technologies. Also, we simply haven't had enough time (but we might revisit them in the future—stay tuned).

We're also excited about our cloud-based implementation. We use Heroku to host our application and manage data storage. In future iterations, we plan to expand our current features and configure a continuous integration/continuous deployment (CI/CD) pipeline using services like Jenkins integrated with GitHub.

#### **Step 6: Testing**

Since we've [just deployed][1], we are now in a testing stage. Our goals are to collect user feedback across our feature domains and our application experience as a whole, especially as our specific audiences interact with the app. Given our original constraints (namely, time and people power), this iteration is the first of many to come. For example, future iterations will allow individual users to register accounts and post external curricula directly on our site without going through the extra steps of email. We want to scale and maximize our efficiency, and that's part of the recipe we'll deploy in future iterations. As for user testing: we collect user feedback via our contact form, via informal testing within our team, and via structured focus groups. [We welcome your constructive feedback and collaboration][15].

Our team was only able to unite new people with highly varied experience through the power of open principles and methodologies. Luckily enough, each one I described in this post is adaptable to virtually every team.

Regardless of where you work—on a software development team, in a classroom, or, heck, [even in your family][16]—principles like transparency and community are almost always the best foundation for a successful organization.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/2/building-curriculahub

作者:[Tanner Johnson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/johnsontanner3
[b]: https://github.com/lujun9972
[1]: http://csbyus.org
[2]: https://www2.ed.gov/programs/titleiparta/index.html
[3]: https://docs.google.com/document/d/1tqV6B6Uk-QB7Psj1rX9tfCyW3E64_v6xDlhRZ-L2rq0/edit
[4]: https://www.atlassian.com/team-playbook/plays/rules-of-engagement
[5]: https://openpracticelibrary.com/practice/social-contract/
[6]: https://opensource.com/open-organization/resources/open-org-definition
[7]: https://services.google.com/fh/files/misc/images-of-computer-science-report.pdf
[8]: https://drive.google.com/file/d/1_iK0ZRAXVwGX9owtjUUjNz3_2kbyYZ79/view?usp=sharing
[9]: https://www.pmi.org/learning/library/top-five-causes-scope-creep-6675
[10]: https://www.codeproject.com/Articles/42354/The-Art-of-Logging#what
[11]: https://opensource.com/open-organization/16/2/6-steps-running-perfect-30-minute-meeting
[12]: https://docs.google.com/document/d/1wdPRvFhMKPCrwOG2CGp7kP4rKOXrJKI77CgjMfaaXnk/edit?usp=sharing
[13]: https://www.projectmanager.com/blog/how-to-create-smart-goals
[14]: https://www.youtube.com/watch?v=52kvV0plW1E
[15]: http://csbyus.org/
[16]: https://opensource.com/open-organization/15/11/what-our-families-teach-us-about-organizational-life
@ -1,72 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( jiangyaomin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Teaching scientists how to share code)
[#]: via: (https://opensource.com/article/19/2/open-science-git)
[#]: author: (Jon Tennant https://opensource.com/users/jon-tennant)

Teaching scientists how to share code
======
This course teaches scientists how to set up a GitHub project, index their project in Zenodo, and integrate Git into an RStudio workflow.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_051x_0.png?itok=gIzbmxuI)

Would it surprise you to learn that most of the world's scholarly research is not owned by the people who funded it or who created it? Rather it's owned by private corporations and locked up in proprietary systems, leading to [problems][1] around sharing, reuse, and reproducibility.

The open science movement is challenging this system, aiming to give researchers control, ownership, and freedom over their work. The [Open Science MOOC][2] (massively open online community) is a mission-driven project launched in 2018 to kick-start an open scientific revolution and foster more partnerships between open source software and open science.

The Open Science MOOC is a peer-to-peer community of practice, based around sharing knowledge and ideas, learning new skills, and using these things to develop as individuals so research communities can grow as part of a wider cultural shift towards openness.

### The curriculum

The Open Science MOOC is divided into 10 core modules, from the principles of open science to becoming an open science advocate.

The first module, [Open Research Software and Open Source][3], was released in late 2018. It includes three main tasks, all designed to help make research workflows more efficient and more open for collaboration:

#### 1\. Setting up your first GitHub project

GitHub is a powerful project management tool, both for coders and non-coders. This task teaches how to create a community around the platform, select an appropriate license, and write good documentation (including README files, contributing guidelines, and codes of conduct) to foster open collaboration and a welcoming community.

#### 2\. Indexing your project in Zenodo

[Zenodo][4] is an open science platform that seamlessly integrates with GitHub to help make projects more permanent, reusable, and citable. This task explains how webhooks between Zenodo and GitHub allow new versions of projects to become permanently archived as they progress. This is critical for helping researchers get a [DOI][5] for their work so they can receive full credit for all aspects of a project. As citations are still a primary form of "academic capital," this is essential for researchers.
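
To make the GitHub-to-Zenodo link concrete, here is a small, hedged Python sketch (not part of the course materials) that queries Zenodo's public records API for a project's archived versions and prints their DOIs; the query string is an example, and the exact response fields may differ across records.

```
import requests

# Hypothetical sketch: search Zenodo's public records API for archived
# versions of a project and print each record's DOI and title.
ZENODO_API = "https://zenodo.org/api/records"

def find_dois(query):
    resp = requests.get(ZENODO_API, params={"q": query, "size": 5})
    resp.raise_for_status()
    for hit in resp.json().get("hits", {}).get("hits", []):
        doi = hit.get("doi", "<no DOI>")
        title = hit.get("metadata", {}).get("title", "<no title>")
        print(f"{doi}  {title}")

if __name__ == "__main__":
    find_dois("open science mooc")  # example query
```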

#### 3\. Integrating Git into an RStudio workflow

This task is about giving research a mega-boost through greater collaborative efficiency and reproducibility. Git enables version control in all forms of text-based content, including data analysis and writing papers. Each time you commit your work during the development process, Git saves a time-stamped snapshot. This saves the hassle of trying to "roll back" projects if you delete a file or text by mistake, and eliminates horrific file-naming conventions. (For example, does FINAL_Revised_2.2_supervisor_edits_ver1.7_scream.txt look familiar?) Getting Git to interface with RStudio is the painful part, but this task goes through it, step by step, to ease the stress.
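
As a taste of what those snapshots buy you (RStudio wraps these same Git commands), here is a short, illustrative Python sketch that commits two versions of a file and rolls back to the first; the file name and commit messages are invented, and it should be run in an empty scratch directory.

```
import subprocess
from pathlib import Path

# Illustrative sketch of the Git workflow the course teaches, driven from
# Python. Each commit is a recoverable snapshot, so no more
# FINAL_Revised_2.2_supervisor_edits_ver1.7_scream.txt.
def git(*args):
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

paper = Path("analysis.R")  # example file name

git("init", "-q")
paper.write_text("results <- mean(c(1, 2, 3))\n")
git("add", str(paper))
git("-c", "user.name=Demo", "-c", "user.email=demo@example.com",
    "commit", "-q", "-m", "First draft of analysis")

paper.write_text("results <- median(c(1, 2, 3))\n")  # a change we regret
git("add", str(paper))
git("-c", "user.name=Demo", "-c", "user.email=demo@example.com",
    "commit", "-q", "-m", "Switch to median")

first = git("rev-list", "--max-parents=0", "HEAD").strip()  # first commit
git("checkout", first, "--", str(paper))  # roll the file back
print(paper.read_text())  # back to the mean() version
```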

The third task also gives students the ability to interact directly with the MOOC by submitting pull requests to demonstrate their skills. This also adds their name to an online list of open source champions (aka "open sourcerers").

The MOOC's inherently interactive style is much more valuable than listening to someone talk at you, either on or off screen, like with many traditional online courses or educational programs. Each task is backed up by expert-gathered knowledge, so students get a rigorous, dual-learning experience.

### Empowering researchers

The Open Science MOOC strives to be as open as possible—this means we walk the walk and talk the talk. We are built upon a solid values-based foundation of freedom and equitable access to research. We see this route towards widespread adoption of best scientific practices as an essential part of the research process.

Everything we produce is openly developed and openly licensed for maximum engagement, sharing, and reuse. An open source workflow underpins our development. All of this happens openly around channels such as [Slack][6] and [GitHub][7] and helps to make the community much more coherent.

If we can instill the value of open source into modern research, this would empower current and future generations of researchers to think more about fundamental freedoms around knowledge production. We think that is something worth working towards as a community.

The Open Science MOOC combines the best elements of the open education, open science, and open source worlds. If you're ready to join, [sign up for the full course][3], which is, of course, free.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/2/open-science-git

作者:[Jon Tennant][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/jiangyaomin)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jon-tennant
[b]: https://github.com/lujun9972
[1]: https://www.theguardian.com/science/political-science/2018/jun/29/elsevier-are-corrupting-open-science-in-europe
[2]: https://opensciencemooc.eu/
[3]: https://eliademy.com/catalog/oer/module-5-open-research-software-and-open-source.html
[4]: https://zenodo.org/
[5]: https://en.wikipedia.org/wiki/Digital_object_identifier
[6]: https://osmooc.herokuapp.com/
[7]: https://open-science-mooc-invite.herokuapp.com/
@ -1,73 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's happening in the OpenStack community?)
[#]: via: (https://opensource.com/article/19/3/whats-happening-openstack)
[#]: author: (Jonathan Bryce https://opensource.com/users/jonathan-bryce)

What's happening in the OpenStack community?
======

In many ways, 2018 was a transformative year for the OpenStack Foundation.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/travel-mountain-cloud.png?itok=ZKsJD_vb)

Since 2010, the OpenStack community has been building open source software to run cloud computing infrastructure. Initially, the focus was public and private clouds, but open infrastructure has been pulled into many new important use cases like telecoms, 5G, and manufacturing IoT.

As OpenStack software matured and grew in scope to support new technologies like bare metal provisioning and container infrastructure, the community widened its thinking to embrace users who deploy and run the software in addition to the developers who build the software. Questions like, "What problems are users trying to solve?" "Which technologies are users trying to integrate?" and "What are the gaps?" began to drive the community's thinking and decision making.

In response to those questions, the OpenStack Foundation (OSF) reorganized its approach and created a new "open infrastructure" framework focused on use cases, including edge, container infrastructure, CI/CD, and private and hybrid cloud. And, for the first time, the OSF is hosting open source projects outside of the OpenStack project.

Following are three highlights from the [OSF 2018 Annual Report][1]; I encourage you to read the entire report for more detailed information about what's new.

### Pilot projects

On the heels of launching [Kata Containers][2] in December 2017, the OSF launched three pilot projects in 2018—[Zuul][3], [StarlingX][4], and [Airship][5]—that help further our goals of taking our technology into additional relevant markets. Each project follows the tenets we consider key to the success of true open source, [the Four Opens][6]: open design, open collaboration, open development, and open source. While these efforts are still new, they have been extremely valuable in helping us learn how we should expand the scope of the OSF, as well as showing others the kind of approach we will take.

While the OpenStack project remained at the core of the team's focus, pilot projects are helping expand usage of open infrastructure across markets and already benefiting the OpenStack community. This has attracted dozens of new developers to the open infrastructure community, which will ultimately benefit the OpenStack community and users.

There is direct benefit from these contributors working upstream in OpenStack, such as through StarlingX, as well as indirect benefit from the relationships we've built with the Kubernetes community through the Kata Containers project. Airship is similarly bridging the gaps between the Kubernetes and OpenStack ecosystems. This shows users how the technologies work together. A rise in contributions to Zuul has provided the engine for OpenStack CI and keeps our development running smoothly.

### Containers collaboration

In addition to investing in new pilot projects, we continued efforts to work with key adjacent projects in 2018, and we made particularly good progress with Kubernetes. OSF staffer Chris Hoge helps lead the cloud provider special interest group, where he has helped standardize how Kubernetes deployments expect to run on top of various infrastructure. This has clarified OpenStack's place in the Kubernetes ecosystem and led to valuable integration points, like having OpenStack as part of the Kubernetes release testing process.

Additionally, OpenStack Magnum was certified as a Kubernetes installer by the CNCF. Through the Kata Containers community, we have deepened these relationships into additional areas within the container ecosystem, resulting in a number of companies getting involved for the first time.

### Evolving events

We knew heading into 2018 that the environment around our events was changing and we needed to respond. During the year, we held two successful project team gatherings (PTGs) in Dublin and Denver, reaching capacity for both events while also including new projects and OpenStack operators. We held OpenStack Summits in Vancouver and Berlin, both experiencing increases in attendance and project diversity since Sydney in 2017, with each Summit including more than 30 open source projects. Recognizing this broader audience and the OSF's evolving strategy, the OpenStack Summit was renamed the [Open Infrastructure Summit][7], beginning with the Denver event coming up in April.

In 2018, we boosted investment in China, onboarding a China Community Manager based in Shanghai and hosting a strategy day in Beijing with 30+ attendees from Gold and Platinum Members in China. This effort will continue in 2019 as we host our first Summit in China: the [Open Infrastructure Summit Shanghai][8] in November.

We also worked with the community in 2018 to define a new model for events to maximize participation while saving on travel and expenses for the individuals and companies who are increasingly stretched across multiple open source communities. We arrived at a plan that we will implement and iterate on in 2019, in which we will co-locate PTGs as standalone events adjacent to our Open Infrastructure Summits.

### Looking ahead

We've seen impressive progress, but the biggest accomplishment might be in establishing a framework for the future of the foundation itself. In 2018, we advanced the open infrastructure mission by establishing OSF as an effective place to collaborate for CI/CD, container infrastructure, and edge computing, in addition to the traditional public and private cloud use cases. The open infrastructure approach opened a lot of doors in 2018, from the initial release of software from each pilot project, to live 5G demos, to engagement with hyperscale public cloud providers.

Ultimately, our value comes from the effectiveness of our communities and the software they produce. As 2019 unfolds, our community is excited to apply learnings from 2018 to the benefit of developers, users, and the commercial ecosystem across all our projects.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/whats-happening-openstack

作者:[Jonathan Bryce][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jonathan-bryce
[b]: https://github.com/lujun9972
[1]: https://www.openstack.org/foundation/2018-openstack-foundation-annual-report
[2]: https://katacontainers.io/
[3]: https://zuul-ci.org/
[4]: https://www.starlingx.io/
[5]: https://www.airshipit.org/
[6]: https://www.openstack.org/four-opens/
[7]: https://www.openstack.org/summit/denver-2019/
[8]: https://www.openstack.org/summit/shanghai-2019
@ -1,100 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to pack an IT travel kit)
[#]: via: (https://opensource.com/article/19/3/it-toolkit-remote)
[#]: author: (Peter Cheer https://opensource.com/users/petercheer)

How to pack an IT travel kit
======
Before you travel, make sure you're ready for challenges in hardware, infrastructure, and software.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn)

I've had several opportunities to do IT work in less-developed and remote areas, where internet coverage and technology access aren't at the high level we have in our first-world cities. Many people heading off to undeveloped areas ask me for advice on preparing for the local technology landscape. Since conditions vary greatly around this big world, it's impossible to give specific advice for most areas, but I do have some general suggestions based on my experience that may help you.

Also, before you leave home, do as much research as you can about the general IT and telecom environment where you are traveling so you're a little better prepared for what you may encounter there.

### Planning for the local hardware and infrastructure

* Even in many cities, internet connections tend to be expensive, slow, and not reliable for large downloads. Don't forget that internet coverage, speeds, and cost in cities are unlikely to be matched in more remote areas.

* The electricity supply may be unreliable, with inconsistent voltage. If you are taking your computer, bring some surge protection—although in my experience, the electricity voltage is more likely to drop than to spike.

* It is always useful to have a small selection of hand tools, such as screwdrivers and needle-nose pliers, for repairing computer hardware. A lack of spare parts can limit opportunities for much beyond basic troubleshooting, although stripping usable components from dead computers can be worthwhile.

### Planning for the software you'll find

* You can assume that most of the computer systems you'll find will be some incarnation of Microsoft Windows. You can expect that many will not be officially licensed, will not be getting updates or security patches, and will be infected by multiple viruses and other malware.

* You can also expect that most application software will be proprietary and much of it will be unlicensed and lag behind the latest release versions. These conditions are depressing for open source enthusiasts, but this is the world as it is, rather than the world we would like it to be.

* It is wise to view any Windows system you do not control as potentially infected with viruses and malware. It's good practice to reserve a USB thumb drive for files you'll use with these Windows systems; this means that if (or more likely when) that thumb drive becomes infected, you can just reformat it at no cost.

* Bring copies of free antivirus software such as [AVG][1] and [Avast][2], including recent virus definition files for them, as well as virus removal and repair tools such as [Sophos][3] and [Hiren's BootCD][4].

* Trying to keep software current on machines that have no or infrequent access to the internet is a challenge. This is particularly true of web browsers, which tend to go through rapid release cycles. My preferred web browser is Mozilla Firefox, and having a copy of the latest release is useful.

* Bring repair discs for a selection of recent Microsoft operating systems, and make sure they include service packs for Windows and Microsoft Office.

### Planning for the software you'll bring

There's no better way to convey the advantages of open source software than by showing it to people. Here are some recommendations along that line.

* When gathering software to take with you, make sure you get the full offline installation option. Often, the most prominently displayed download links on websites are stubs that require internet access to download the components. They won't work if you're in an area with poor (or no) internet service. (The checksum sketch after this list is a handy way to confirm those installers survive the trip intact.)

* Also, make sure to get the 32-bit and 64-bit versions of the software. While 32-bit machines are becoming less common, you may encounter them, and it's best to be prepared.

* Having a [bootable version of Linux][5] is vital for two reasons. First, it can be used to rescue data from a seriously damaged Windows machine. Second, it's an easy way to show off Linux without installing it on someone's machine. [Linux Mint][6] is my favorite distro for this purpose, because the graphical interface (GUI) is similar enough to Windows to appear non-threatening, and it includes a good range of application software.

* Bring the widest selection of open source applications you can—you can't count on being able to download something from the internet.

* When possible, bring your open source software library as portable applications that will run without being installed. One of the many ways to mess up those Windows machines is to install and uninstall a lot of software, and using portable apps gets around this problem. Many open source applications, including LibreOffice, GIMP, Blender, and Inkscape, have portable app versions for Windows.

* It's smart to bring a supply of blank discs so you can give away copies of your open source software stash on media that is a bit more secure than a USB thumb drive.

* Don't forget to bring programs and resources related to the projects you will be working on. (For example, most of my overseas work involves tuition, mentoring, and skills transfer, so I usually add a selection of open source software tools for creating learning resources.)
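
One habit that pairs well with the list above, though the article's author doesn't mention it himself: record SHA-256 checksums of your offline installers before you leave so you can confirm nothing was corrupted in transit. A minimal Python sketch, with the folder name as an assumption:

```
import hashlib
import sys
from pathlib import Path

# Hedged sketch: before travelling, write a manifest of SHA-256 checksums
# for your offline installers; on arrival, rerun it and compare to confirm
# nothing was corrupted on the USB drive or disc.
def sha256(path):
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    folder = Path(sys.argv[1] if len(sys.argv) > 1 else "installers")
    for installer in sorted(folder.glob("*")):
        if installer.is_file():
            print(f"{sha256(installer)}  {installer.name}")
```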

### Your turn

There are many variables and surprises when doing IT work in undeveloped areas. If you have suggestions—for programs I've missed or tips that I didn't cover—please share them in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/it-toolkit-remote

作者:[Peter Cheer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/petercheer
[b]: https://github.com/lujun9972
[1]: https://www.avg.com/en-gb/free-antivirus-download
[2]: https://www.avast.com/en-gb/free-antivirus-download
[3]: https://www.sophos.com/en-us/products/free-tools/virus-removal-tool.aspx
[4]: https://www.hiren.info/
[5]: https://opensource.com/article/18/7/getting-started-etcherio
[6]: https://linuxmint.com/
@ -1,106 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Small Scale Scrum vs. Large Scale Scrum)
[#]: via: (https://opensource.com/article/19/3/small-scale-scrum-vs-large-scale-scrum)
[#]: author: (Agnieszka Gancarczyk https://opensource.com/users/agagancarczyk)

Small Scale Scrum vs. Large Scale Scrum
======
We surveyed individual members of small and large scrum teams. Here are some key findings.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_crowdvsopen.png?itok=AFjno_8v)

Following the publication of the [Small Scale Scrum framework][1], we wanted to collect feedback on how teams in our target demographic (consultants, open source developers, and students) work and what they value. With this first opportunity to inspect, adapt, and help shape the next stage of Small Scale Scrum, we decided to create a survey to capture some data points and begin to validate some of our assumptions and hypotheses.

**[[Download the Introduction to Small Scale Scrum guide]][2]**

Our reasons for using a survey were manifold, but chief among them were the global distribution of teams, the small local data sample available in our office, access to customers, and the industry’s use of surveys (e.g., the [Stack Overflow Developer Survey 2018][3], [HackerRank 2018 Developer Skills Report][4], and [GitLab 2018 Global Developer Report][5]).

We used scrum’s own iterative process to facilitate the creation of the survey shown below:

![](https://opensource.com/sites/default/files/uploads/survey_process.png)

[The survey][6], which we invite you to complete, consisted of 59 questions and was distributed at a local college ([Waterford Institute of Technology][7]) and to Red Hat's consultancy and engineering teams. Our initial data was gathered from the responses of 54 individuals spread across small and large scrum teams, who were asked about their experiences with agile within their teams.

Here are the main results and initial findings of the survey:

* A full 96% of survey participants practice a form of agile, work in distributed teams, think scrum principles help them reduce development complexity, and believe agile contributes to the success of their projects.

* Only 8% of survey participants belong to small (one- to three-person) teams, and 10 out of 51 describe their typical project as short-lived (three months or less).

* The majority of survey participants were software engineers, but quality engineers (QE), project managers (PM), product owners (PO), and scrum masters were also represented.

* Scrum master, PO, and team member are typical roles in projects.

* Nearly half of survey respondents work on, or are assigned to, more than one project at the same time.

* Almost half of projects are customer/value-generating vs. improvement/not directly value-generating or unclassified.

* Almost half of survey participants said that their work is clarified sometimes or most of the time and estimated before development, with extensions available sometimes or most of the time. They said asking for clarification of work items is the team’s responsibility.

* Almost half of survey respondents said they write tests for their code, and they adhere to best coding practices, document their code, and get their code reviewed before merging.

* Almost all survey participants introduce bugs into the codebase; these are prioritized by the individual, the team, the PM, the PO, the team lead, or the scrum master.

* Participants ask for help and mentoring when a task is complex. They also take on additional roles on their projects when needed, including business analyst, PM, QE, and architect, and they sometimes find changing roles difficult.

* When changing roles on a daily basis, individuals feel they lose one to two hours on average, but they still complete their work on time most of the time.

* Most survey participants use scrum (65%), followed by hybrid (18%) and Kanban (12%). This is consistent with the results of [VersionOne’s State of Agile Report][8].

* The daily standup, sprint, sprint planning and estimating, backlog grooming, and sprint retrospective are among the top scrum ceremonies and principles followed, and team members do preparation work before meetings.

* The majority of sprints (62%) are three weeks long, followed by two-week sprints (26%), one-week sprints (6%), and four-week sprints (4%). Two percent of participants are not using sprints due to strict release and update timings, with all activities organized and planned around those dates.

* Teams use [planning poker][9] to estimate (story point) user stories, and user stories contain acceptance criteria. (A toy sketch of the planning poker mechanic follows this list.)

* Teams create and use a [Definition of Done][10], mainly with respect to features and determining completion of user stories.

* The majority of teams don’t have or use a [Definition of Ready][11] to ensure that user stories are actionable, testable, and clear.

* Unit, integration, functional, automated, performance/load, and acceptance tests are commonly used.

* Overall collaboration between team members is considered high, and team members use various communication channels.

* The majority of survey participants spend more than four hours weekly in meetings, including face-to-face meetings, web conferences, and email communication.

* The majority of customers are considered large, and half of them understand and follow scrum principles.

* Customers respect “no deadlines” most of the time and sometimes help create user stories and participate in sprint planning, sprint review and demonstration, sprint retrospective, and backlog review and refinement.

* Only 27% of survey participants know their customers have a high level of satisfaction with the adoption of agile, while the majority (58%) don’t know this information at all.
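
For readers unfamiliar with planning poker, here is a toy Python sketch of the mechanic: members vote story points in private, close votes yield a consensus, and a wide spread triggers discussion before a re-vote. The names, votes, and "one step apart" threshold are invented for illustration.

```
# Toy illustration of planning poker, the estimation game mentioned above.
# Everyone picks a story-point value from the deck in private; close votes
# become the estimate, while a wide spread means the outliers explain their
# reasoning and the team re-votes. All names and votes are invented.
DECK = [1, 2, 3, 5, 8, 13, 21]

def poker_round(votes):
    assert all(v in DECK for v in votes.values()), "votes must come from the deck"
    low, high = min(votes.values()), max(votes.values())
    if DECK.index(high) - DECK.index(low) <= 1:
        values = list(votes.values())
        return f"consensus: {max(values, key=values.count)} points"
    outliers = [name for name, v in votes.items() if v in (low, high)]
    return f"spread {low}-{high} too wide; {', '.join(outliers)} explain, then re-vote"

print(poker_round({"ana": 5, "ben": 5, "cal": 8}))   # close enough: consensus
print(poker_round({"ana": 2, "ben": 13, "cal": 5}))  # too wide: discuss first
```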

These survey results will inform the next stage of our data-gathering exercise. We will apply Small Scale Scrum to real-world projects, and the guidance obtained from the survey will help us gather key data points as we move toward version 2.0 of Small Scale Scrum. If you want to help, take our [survey][6]. If you have a project to which you'd like to apply Small Scale Scrum, please get in touch.

[Download the Introduction to Small Scale Scrum guide][2]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/small-scale-scrum-vs-large-scale-scrum

作者:[Agnieszka Gancarczyk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/agagancarczyk
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/19/2/small-scale-scrum-framework
[2]: https://opensource.com/downloads/small-scale-scrum
[3]: https://insights.stackoverflow.com/survey/2018/
[4]: https://research.hackerrank.com/developer-skills/2018/
[5]: https://about.gitlab.com/developer-survey/2018/
[6]: https://docs.google.com/forms/d/e/1FAIpQLScAXf52KMEiEzS68OOIsjLtwZJto_XT7A3b9aB0RhasnE_dEw/viewform?c=0&w=1
[7]: https://www.wit.ie/
[8]: https://explore.versionone.com/state-of-agile/versionone-12th-annual-state-of-agile-report
[9]: https://en.wikipedia.org/wiki/Planning_poker
[10]: https://www.scruminc.com/definition-of-done/
[11]: https://www.scruminc.com/definition-of-ready/
@ -1,104 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why is no one signing their emails?)
[#]: via: (https://arp242.net/weblog/signing-emails.html)
[#]: author: (Martin Tournoij https://arp242.net/)

Why is no one signing their emails?
======

I received this email a while ago:

> Queensland University of Technology sent you an Amazon.com Gift Card!
>
> You’ve received an Amazon.com gift card! You’ll need the claim code below to place your order.
>
> Happy shopping!

Queensland University of Technology? Why would they send me anything? Where is that even? Australia, right? That’s the other side of the world! Looks like spam!

It did look pretty good for spam, so I took a second look. And a very close third look, and then I decided it wasn’t spam.

I was still confused about why they sent me this. A week later I remembered: half a year prior I had done an interview regarding my participation on Stack Overflow for someone’s paper; she was studying somewhere in Australia – presumably the University of Queensland. No one had ever mentioned anything about a reward or an Amazon gift card, so I wasn’t expecting it. It’s a nice bonus though.

Here’s the thing: I’ve spent several years professionally developing email systems; I administered email servers; I read all the relevant RFCs. While there are certainly people who are more knowledgeable, I know more about email than the vast majority of the population. And I still had to take a careful look to verify the email wasn’t a phishing attempt.

And I’m not even a target; I’m just this guy, you know? [Ask John Podesta what it is to be targeted][1]:

> SecureWorks concluded Fancy Bear had sent Podesta an email on March 19, 2016, that had the appearance of a Google security alert, but actually contained a misleading link—a strategy known as spear-phishing. [..] The email was initially sent to the IT department as it was suspected of being a fake but was described as “legitimate” in an e-mail sent by a department employee, who later said he meant to write “illegitimate”.

Yikes! If I were even remotely high-profile I’d be crazy paranoid about all the emails I get.

It seems to me that there is a fairly easy solution to verify the author of an email: sign it with a digital signature; PGP is probably the best existing solution right now. I don’t even care about encryption here, just signing to prevent phishing.

PGP has a well-deserved reputation for being hard, but that’s only for certain scenarios. A lot of the problems/difficulties stem from trying to accommodate the “random person A emails random person B” use case, but this isn’t really what I care about here. “Large company with millions of users sends thousands of emails daily” is a very different use case.

Much of the key exchange/web-of-trust dilemma can be bypassed by shipping email clients with keys for large organisations (PayPal, Google, etc.) baked in, like browsers do with some certificates. Even just publishing your key on your website (or, if you’re a bank, in local branches, etc.) is already a lot better than not signing anything at all. Right now there seems to be a catch-22: clients don’t implement better support because very few people use PGP, while very few companies bother signing their emails with PGP because so few people can benefit from it.

On the end-user side, things are also not very hard; we’re just concerned with validating signatures, nothing more. For this purpose PGP isn’t hard. It’s like verifying your Linux distro’s package system: all of them sign their packages (usually with PGP) and they get verified on installation, but as an end user I never see it unless something goes wrong.

There are many aspects of PGP that are hard to set up and manage, but verifying signatures isn’t one of them. The user-visible part of this is very limited. Remember, no one is expected to sign their own emails: just verify that the signature is correct (which the software will do). Conceptually, it’s not that different from verifying a handwritten signature.
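
To underline how little the receiving side needs, here is a minimal Python sketch using the `python-gnupg` wrapper; it assumes the sender's public key has already been imported into the local keyring, which is exactly the distribution problem discussed above, and the file name is an example.

```
import gnupg

# Minimal sketch: verify the signature on a (clearsigned) email body.
# Assumes the sender's public key is already in the local GnuPG keyring.
gpg = gnupg.GPG()

def check_signature(raw_message):
    result = gpg.verify(raw_message)
    if result.valid:
        print(f"Good signature from {result.username} (key {result.key_id})")
    elif result.key_id:
        print(f"Signed by unknown key {result.key_id}: yikes!")
    else:
        print("No valid signature found: treat with suspicion")

with open("message.asc") as fh:  # example file name
    check_signature(fh.read())
```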

DKIM and SPF already exist and are useful, but they’re limited: both verify only that an email which claims to be from `amazon.com` was really sent from `amazon.com`. If I send an email from `mail.amazon-account-security.com` or `amazonn.com`, they just verify that it was sent from that domain, not that it was sent by the organisation Amazon.
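
DKIM checking is just as mechanical. A short sketch using the `dkimpy` package, which confirms only that the message was signed by the domain in its signature header, nothing about who that domain belongs to; the file name is an example.

```
import dkim

# Sketch: DKIM verification proves the mail was signed by the sending
# domain's key (fetched via DNS), but says nothing about whether that
# domain is the organisation you think it is.
with open("message.eml", "rb") as fh:  # example file: a raw RFC 5322 message
    raw = fh.read()

if dkim.verify(raw):
    print("DKIM: signature matches the sending domain")
else:
    print("DKIM: missing or broken signature")
```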

What I am proposing is subtly different. In my (utopian) future every serious organisation will sign their email with PGP (just like every serious organisation uses https). Then every time I get an email which claims to be from Amazon I can see it’s either not signed, or not signed by a key I know. If adoption is broad enough we can start showing warnings such as “this email wasn’t signed, do you want to trust it?” and “this signature isn’t recognized, yikes!”

There’s also S/MIME, which has better client support and which works more or less the same as HTTPS: you get a certificate from the Certificate Authority Mafia, sign your email with it, and presto. The downside of this is that anyone can sign their emails with a valid key, which isn’t necessarily telling you much (just because haxx0r.ru has a certificate doesn’t mean it’s trustworthy).

Is it perfect? No. I understand stuff like key exchange is hard and that baking in keys isn’t perfect. Is it better? Hell yes. It would probably have saved Podesta and the entire Democratic Party a lot of trouble. Here’s a “[sophisticated new phishing campaign][2]” targeted at PayPal users. How “sophisticated”? Well, by not having glaring stupid spelling errors, duplicating the PayPal layout in emails, duplicating the PayPal login screen, a few forms, and getting an SSL certificate. Truly, the pinnacle of Computer Science.

Okay, sure, they spent some effort on it; but any nincompoop can do it; if this passes for “sophisticated phishing” where “it’s easy to see how users could be fooled”, then the bar is pretty low.

I can’t recall receiving a single email from any organisation that is signed (much less encrypted). Banks, financial services, utilities, immigration services, governments, tax services, voting registration, Facebook, Twitter, a zillion websites … all happily sent me emails hoping I wouldn’t consider them spam and hoping I wouldn’t confuse a phishing email for one of theirs.

Interesting experiment: send invoices for, say, a utility bill for a local provider. Just copy the layout from the last utility bill you received. I’ll bet you’ll make more money than freelancing on UpWork if you do it right.

I’ve been intending to write this post for years, but never quite did because “surely not everyone is stupid?” I’m not a crypto expert, so perhaps I’m missing something here, but I wouldn’t know what. Let me know if I am.

In the meanwhile PayPal is attempting to solve the problem by publishing [articles which advise you to check for spelling errors][3]. Okay, it’s good advice, but do we really want this to be the barrier between an attacker and your money? Or between Russian hacking groups and your emails? Anyone can sign any email with any key, but “unknown signature” warnings strike me as a lot better UX than “carefully look for spelling errors or misleading domain names”.

The way forward is to make it straightforward to implement signing in apps and then just do it as a developer, whether asked or not; just as you set up https whether you’re asked or not. I’ll write a follow-up with more technical details later, assuming no one pokes holes in this article :-)

#### Response to some feedback

Some responses to feedback that I couldn’t be bothered to integrate into the article’s prose:

* “You can’t trust webmail with crypto!”

  If you use webmail then you’re already trusting the email provider with everything. What’s so bad about trusting them to verify a signature, too?

  We’re not communicating state secrets over encrypted email here; we’re just verifying the signature on “PayPal sent you a message, click here to view it”-kind of emails.

* “Isn’t this ignoring the massive problem that is key management?”

  Yes, it’s a hard problem; but that doesn’t mean it can’t be done. I already mentioned some possible solutions in the article.

**Footnotes**

1. We could make something better; PGP contains a lot of cruft. But for now PGP is “good enough”.

--------------------------------------------------------------------------------

via: https://arp242.net/weblog/signing-emails.html

作者:[Martin Tournoij][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Podesta_emails#Data_theft
[2]: https://www.eset.com/us/about/newsroom/corporate-blog/paypal-users-targeted-in-sophisticated-new-phishing-campaign/
[3]: https://www.paypal.com/cs/smarthelp/article/how-to-spot-fake,-spoof,-or-phishing-emails-faq2340
@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why feedback, not metrics, is critical to DevOps)
[#]: via: (https://opensource.com/article/19/3/devops-feedback-not-metrics)
[#]: author: (Ranjith Varakantam (Red Hat) https://opensource.com/users/ranjith)

Why feedback, not metrics, is critical to DevOps
======

Metrics can tell you some things, but not the most important things about how your products and teams are doing.

![CICD with gears][1]

Most managers and agile coaches depend on metrics over feedback from their teams, users, and even customers. In fact, quite a few use feedback and metrics synonymously, where they present feedback from teams or customers as a bunch of numbers or a graphical representation of those numbers. This is not only unfortunate, but it can be misleading as it presents only part of the story and not the entire truth.
When it comes to two critical factors—how we manage or guide our teams and how we operate and influence the product that our teams are developing—only a few exceptional leaders and teams get it right. For one thing, it has become very easy to get your hands on data and metrics; real feedback from teams and users, on the other hand, is still hard to get. It requires significant investments and energy, and unless everyone understands the critical need for it, getting and giving feedback tends to be a low priority and keeps getting pushed to the back burner.
### How to manage and guide teams
With the acceptance of agile, a lot of teams have put a ridiculously high value on metrics, such as velocity, burndown charts, cumulative flow diagram (CFD), etc., instead of the value delivered by the team in each iteration or deployment. The focus is on the delivery or output produced without a clear understanding of how this relates to personal performance or implications for the project, product, or service.
A few managers and agile coaches even abuse the true spirit of agile by misusing metrics to chastise or even penalize their teams. Instead of creating an environment of empowerment, they are slipping back into the command-and-control method where metrics are used to bully teams into submission.
In our group, the best managers have weekly one-on-one meetings with every team member. These meetings not only give them a real pulse on team morale but also a profound understanding of the project and the decisions being made to move it forward. This weekly feedback loop also helps the team members communicate technical, functional, and even personal issues better. As a result, the team is much more cohesive in understanding the overall project needs and able to make decisions promptly.
These leaders also skip levels—reaching out to team members two or three levels below them—and have frequent conversations with other group members who interact with their teams on a regular basis. These actions give the managers a holistic picture, which they couldn't get if they relied on feedback from one manager or lead, and help them identify any blind spots the leads and managers may have.
These one-on-one meetings effectively transform a manager into a coach who has a close understanding of every team member. Like a good coach, these managers both give and receive feedback from the team members regarding the product, decision-making transparency, places where the team feels management is lagging, and areas that are being ignored. This empowers the teams by giving them a voice, not once in a while in an annual meeting or an annual survey, but every week. This is the level where DevOps teams should be in order to deliver their commitments successfully.
This demands significant investments of time and energy, but the results more than justify it. The alternative is to rely on metrics and annual reviews and surveys, which has failed miserably. Unless we begin valuing feedback over metrics, we will keep seeing the metrics we want to see, along with failed projects and miserable team morale.
### Influencing projects and product development
We see similar behavior on the project or product side, with too few conversations with the users and developers and too much focus on metrics. Let's take the example of a piece of software that was released to the community or market, and the primary success metric is the number of downloads or installs. This can be deceiving for several reasons:
1. This product was packaged into another piece of software that users installed; even though the users are not even aware of your product's existence or purpose, it is still counted as a win and something the user needs.
2. The marketing team spent a huge budget promoting the product—and even offered an incentive to developers to download it. The _incentive_ drives the downloads, not user need or desire, but the metric is still considered a measure of success.
3. Software updates are counted as downloads, even when they are involuntary updates pushed rather than initiated by the user. This keeps bumping up the number, even though the user might have used it once, a year ago, for a specific task.
In these cases, the user automatically becomes a metric that's used to report how well the product is doing, just based on the fact it was downloaded and it's accepting updates, regardless of whether the user likes or uses the software. Instead, we should be focusing on actual usage of the product and the feedback these users have to offer us, rather than stopping short at the download numbers.
The same holds true for SaaS products—instead of counting the number of signups, we should look at how often users use the product or service. Signups by themselves have little meaning, especially to the DevOps team where the focus is on getting constant feedback and striving for continuous improvements.
### Gathering feedback
So, why do we rely on metrics so much? My guess is they are easy to collect, and the marketing team is more interested in getting the product into the users' hands than in evaluating how it is faring. Unless the engineering team invests quite a bit of time in tracing, which captures how often the program is executed and which components are used most often, feedback can be difficult to collect.
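
To make the tracing idea concrete, here is a toy sketch that counts how often each component of a program runs. The component names and the in-memory counter are illustrative stand-ins for a real telemetry backend:

```python
# A toy usage tracer: a decorator bumps a counter each time a component runs,
# so the team can see real usage instead of download numbers.
import functools
from collections import Counter

usage = Counter()

def traced(component):
    """Wrap a function so each call is counted against its component name."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            usage[component] += 1
            return fn(*args, **kwargs)
        return inner
    return wrap

@traced("export-report")  # hypothetical component name
def export_report():
    return "done"

export_report()
print(usage.most_common())  # [('export-report', 1)]
```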
A big advantage of working in an open source community is that we first release the piece of software into a community where we can get feedback. Most open source enthusiasts take the time to log issues and bugs based on their experience with the product. If we can supplement this data with tracing, the team has an accurate record of how the product is used.
Open as many channels of communication as possible (chat, email, Twitter, etc.) and allow users to choose their feedback channel.
A few DevOps teams have integrated blue-green deployments, A/B testing, and canary releases to shorten the feedback loop. Setting up these frameworks is not a trivial matter; it calls for a huge upfront investment and constant updates to make them work seamlessly. But once everything is set up and data begins to flow, the team can act upon real feedback based on real user interactions with every new bit of software released.
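
One small, self-contained piece of that machinery is deterministic bucketing, so every user consistently lands in the same variant. This sketch assumes stable string user IDs and an illustrative 10% canary share:

```python
# Hash-based bucketing: the same user ID always maps to the same bucket,
# which keeps A/B and canary assignments stable across requests.
import hashlib

def bucket(user_id: str, canary_share: float = 0.10) -> str:
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "canary" if digest % 100 < canary_share * 100 else "stable"

print(bucket("user-42"))  # always the same answer for user-42
```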
Most agile practitioners and lean movement activists push for a build-deploy-measure-learn cycle, and for this to happen, we need to collect feedback in addition to metrics. It might seem expensive and time consuming in the short term, but in the long run, it is a foolproof way of learning.
### Proof that feedback pays off
Whether it pertains to people or projects, it pays to rely on first-hand feedback rather than metrics, which are seldom interpreted in impartial ways. We have ample proof of this in other industries, where companies such as Zappos and the Virgin Group have done wonders for their business simply by listening to their customers. There is no reason we cannot follow suit, especially those of us working in open source communities.
Feedback is the only effective way we can uncover our blind spots. Metrics are not of much help in this regard, as we can't find out what's wrong when we are dealing with unknowns. Blind spots can create serious gaps between reality and what we think we know. Feedback not only encourages continuous improvement, whether it's on a personal or a product level, but the simple act of listening and acting on it increases trust and loyalty.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/devops-feedback-not-metrics

作者:[Ranjith Varakantam (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ranjith
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
@ -1,148 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 steps to stop ethical debt in AI product development)
[#]: via: (https://opensource.com/article/19/3/ethical-debt-ai-product-development)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)

6 steps to stop ethical debt in AI product development
======

Machine bias in artificial intelligence is a known and unavoidable problem—but it is not unmanageable.

![old school calculator][1]

It's official: artificial intelligence (AI) isn't the unbiased genius we want it to be.
Alphabet (Google's parent company) used its latest annual report [to warn][2] that ethical concerns about its products might hurt future revenue. Entrepreneur Joy Buolamwini established the [Safe Face Pledge][3] to prevent abuse of facial analysis technology.
And years after St. George's Hospital Medical School in London was found to have used AI that inadvertently [screened out qualified female candidates][4], Amazon scrapped a recruiting tool last fall after machine learning (ML) specialists found it [doing the same thing][5].
We've learned the hard way that technologies built with AI are biased like people. Left unchecked, the datasets used to train such products can pose [life-or-death consequences][6] for their end users.
For example, imagine a self-driving car that can't recognize commands from people with certain accents. If the dataset used to train the technology powering that car isn't exposed to enough voice variations and inflections, it risks not recognizing all its users as fully human.
Here's the good news: Machine bias in AI is unavoidable—but it is _not_ unmanageable. Just like product and development teams work to reduce technical debt, you can [reduce the risk of ethical debt][7] as well.
Here are six steps that your technical team can start taking today:
### 1\. Document your priorities upfront
Reducing ethical debt within your product will require you to answer two key questions in the product specification phase:
* Which methods of fairness will you use?
* How will you prioritize them?
If your team is building a product based on ML, it's not enough to reactively fix bugs or pull products from shelves. Instead, answer these questions [in your tech spec][8] so that they're included from the start of your product lifecycle.
### 2\. Train your data under fairness constraints
This step is tough because when you try to control or eliminate both direct and indirect bias, you'll find yourself in a Catch-22.
If you train exclusively on non-sensitive attributes, you eliminate direct discrimination but introduce or reinforce indirect bias.
However, if you train separate classifiers for each sensitive feature, you reintroduce direct discrimination.
Another challenge is that detection can occur only after you've trained your model. When this occurs, the only recourse is to scrap the model and retrain it from scratch.
To reduce these risks, don't just measure average strengths of acceptance and rejection across sensitive groups. Instead, use limits to determine what is or isn't included in the model you're training. When you do this, discrimination tests are expressed as restrictions and limitations on the learning process.
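
As a toy illustration of expressing such a restriction on the learning process, here is a sketch of logistic regression trained with a demographic-parity penalty. The synthetic data, the penalty form, and its weight are illustrative assumptions, not a prescribed method:

```python
# A toy fairness-constrained trainer: ordinary logistic regression plus a
# penalty that pushes the mean scores of two sensitive groups together.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # non-sensitive features
s = rng.random(200) < 0.5                     # sensitive attribute, held out of X
y = (X[:, 0] + rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
lr, lam = 0.1, 1.0                            # learning rate, fairness weight

for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)             # ordinary log-loss gradient
    # Demographic-parity penalty: squared gap between group mean scores.
    gap = p[s].mean() - p[~s].mean()
    dgap = X[s].T @ (p[s] * (1 - p[s])) / s.sum() \
         - X[~s].T @ (p[~s] * (1 - p[~s])) / (~s).sum()
    w -= lr * (grad + lam * 2 * gap * dgap)

final = sigmoid(X @ w)
print("score gap between groups:", final[s].mean() - final[~s].mean())
```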
### 3\. Monitor your datasets throughout the product lifecycle
Developers build training sets based on data they hope the model will encounter. But many don't monitor the data their creations receive from the real world.
ML products are unique in that they're continuously taking in data. New data allows the algorithms powering these products to keep refining their results.
But such products often encounter data in deployment that differs from the data they were trained on. It's also not uncommon for the algorithm to be updated without the model itself being revalidated.
This risk will decrease if you appoint someone to monitor the source, history, and context of the data in your algorithm. This person should conduct continuous audits to find unacceptable behavior.
Bias should be reduced as much as possible while maintaining an acceptable level of accuracy, as defined in the product specification. If unacceptable biases or behaviors are detected, the model should be rolled back to an earlier state prior to the first time you saw bias.
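
A continuous audit can start very simply. The crude sketch below flags a feature whose live mean has drifted well away from the training mean; the threshold and the synthetic data are illustrative, and a real audit would track many statistics per feature:

```python
# A crude drift check: is the live mean far from the training mean?
import numpy as np

def drifted(train_col, live_col, z_threshold=3.0):
    """True when the live mean sits > z_threshold standard errors away."""
    se = train_col.std(ddof=1) / np.sqrt(len(live_col))
    return abs(live_col.mean() - train_col.mean()) > z_threshold * se

train = np.random.default_rng(1).normal(0.0, 1.0, 10_000)
live = np.random.default_rng(2).normal(0.5, 1.0, 1_000)  # shifted input
print(drifted(train, live))  # True: time to investigate or roll back
```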
### 4\. Use tagged training data
We live in a world with trillions of images and videos at our fingertips, but most neural networks can't use this data for one reason: Most of it isn't tagged.
Tagging records which classes are present in an image and where each of them is located.
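
In practice a tag is just a class label plus a location. Here is a tiny sketch; the field names and pixel coordinates are illustrative rather than tied to any particular annotation format:

```python
# A minimal tagged-image record: each annotation pairs a class label
# with a bounding box, loosely echoing common annotation formats.
from dataclasses import dataclass

@dataclass
class Tag:
    label: str    # which class is present
    box: tuple    # (x, y, width, height) in pixels

image_tags = [
    Tag("person", (34, 50, 60, 120)),
    Tag("car", (200, 80, 90, 60)),
]
print(len(image_tags), "objects tagged")
```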
This sounds simple—until you realize how much work it would take to draw shapes around every single person in a photo of a crowd or a box around every single car on a highway.
Even if you succeeded, you might rush your tagging and draw your shapes sloppily, leading to a poorly trained neural network.
The good news is that more products are coming to market to decrease the time and cost of tagging.
As one example, [Brain Builder][9] is a data annotation product from Neurala that uses open source frameworks like TensorFlow and Caffe. Its goal is to help users [manage and annotate their training data][10]. It also aims to bring diverse class examples to datasets—another key step in data training.
### 5\. Use diverse class examples
Training data needs positive and negative examples of classes. If you want specific classes of objects, you need negative examples as well. This (hopefully) mimics the data that the algorithm will encounter in the wild.
Consider the example of “homes” within a dataset. If the algorithm contains only images of homes in North America, it won't know to recognize homes in Japan, Morocco, or other international locations. Its concept of a “home” is thus limited.
Neurala warns, "Most AI applications require thousands of images to be tagged, and since data tagging cost is proportional to the time spent tagging, this step alone often costs tens to hundreds of thousands of dollars per project."
Luckily, 2018 saw a strong increase in the number of open source AI datasets. Synced has a helpful [roundup of 10 datasets][11]—from multi-label images to semantic parsing—that were open sourced last year. If you're looking for datasets by industry, GitHub [has a longer list][12].
### 6\. Focus on the subject, not the context
Tech leaders in monitoring ML datasets should aim to understand how the algorithm classifies data. That's because AI sometimes focuses on irrelevant attributes that are shared by several targets in the training set.
Let's start by looking at the biased training set below. Wolves were tagged standing in snow, but the model wasn't shown images of dogs. So, when dogs were introduced, the model started tagging them as wolves because both animals were standing in snow. In this case, the AI put too much emphasis on the context (a snowy backdrop).
![Wolves in snow][13]
Source: [Gartner][14] (full research available for clients)
By contrast, here is a training set from Brain Builder that is focused on the subject dogs. When monitoring your own training set, make sure the AI is giving more weight to the subject of each image. If you saw an image classifier state that one of the dogs below is a wolf, you would need to know which aspects of the input led to this misclassification. This is a sign to check your training set and confirm that the data is accurate.
![Dogs training set][15]
Source: [Brain Builder][16]
Reducing ethical debt isn't just the “right thing to do”—it also reduces _technical_ debt. Since programmatic bias is so tough to detect, working to reduce it from the start of your lifecycle will save you the need to retrain models from scratch.
This isn't an easy or perfect job; tech teams will have to make tradeoffs between fairness and accuracy. But this is the essence of product management: compromises based on what's best for the product and its end users.
Strategy is the soul of all strong products. If your team includes measures of fairness and algorithmic priorities from the start, you'll sail ahead of your competition.
* * *
_Lauren Maffeo will present_ [_Erase Unconscious Bias From Your AI Datasets_][17] _at[DrupalCon][18] in Seattle, April 8-12, 2019._
* * *
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/ethical-debt-ai-product-development

作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/math_money_financial_calculator_colors.jpg?itok=_yEVTST1 (old school calculator)
[2]: https://www.yahoo.com/news/google-warns-rise-ai-may-181710642.html?soc_src=mail&soc_trk=ma
[3]: https://www.safefacepledge.org/
[4]: https://futurism.com/ai-bias-black-box
[5]: https://uk.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUKKCN1MK08G
[6]: https://opensource.com/article/18/10/open-source-classifiers-ai-algorithms
[7]: https://thenewstack.io/tech-ethics-new-years-resolution-dont-build-software-you-will-regret/
[8]: https://eng.lyft.com/awesome-tech-specs-86eea8e45bb9
[9]: https://www.neurala.com/tech
[10]: https://www.roboticsbusinessreview.com/ai/brain-builder-neurala-video-annotation/
[11]: https://medium.com/syncedreview/2018-in-review-10-open-sourced-ai-datasets-696b3b49801f
[12]: https://github.com/awesomedata/awesome-public-datasets
[13]: https://opensource.com/sites/default/files/uploads/wolves_in_snow.png (Wolves in snow)
[14]: https://www.gartner.com/doc/3889586/control-bias-eliminate-blind-spots
[15]: https://opensource.com/sites/default/files/uploads/ai_ml_canine_recognition.png (Dogs training set)
[16]: https://www.neurala.com/
[17]: https://events.drupal.org/seattle2019/sessions/erase-unconscious-bias-your-ai-datasets
[18]: https://events.drupal.org/seattle2019
@ -1,100 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hello World Marketing (or, How I Find Good, Boring Software))
[#]: via: (https://theartofmachinery.com/2019/03/19/hello_world_marketing.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)

Hello World Marketing (or, How I Find Good, Boring Software)
======

Back in 2001 Joel Spolsky wrote his classic essay [“Good Software Takes Ten Years. Get Used To it”][1]. Nothing much has changed since then: software is still taking around a decade of development to get good, and the industry is still getting used to that fact. Unfortunately, the industry has investors who want to see hockey stick growth rates on software that’s a year old or less. The result is an antipattern I like to call “Hello World Marketing”. Once you start to notice it, you see it everywhere, and it’s a huge red flag when choosing software tools.
Of course, by “Hello World”, I’m referring to the programmer’s traditional first program: the one that just displays the message “Hello World”. The aim isn’t to make a useful program; it’s to make a minimal starting point.
Hello World Marketing is about doing the same thing, but pretending that it’s useful. You’re supposed to be distracted into admiring how neatly a tool solves trivial problems, and forget about features you’ll need in real applications. HWM emphasises what can be done in the first five minutes, and downplays what you might need after several months. HWMed software is optimised for looking good in demos, and sounding exciting in blog posts and presentations.
For a good example, see Nemil Dalal’s [great series of articles about the early marketing for MongoDB][2]. Notice the heavy use of hackathons, and that a lot of the marketing was about how “SQL looks like COBOL”. Now, I can criticise SQL, too, but if `SELECT` and `WHERE` are serious problems for an application, there are already hundreds of solutions like [SQLAlchemy][3] and [LINQ][4] — solutions that don’t compromise on more advanced features of traditional databases. On the other hand, if you were wondering about those advanced features, you could read vomity-worthy, hand-wavey pieces like “[Living in the post-transactional database future][5]”.
### How I Find Good, Boring Software
Obviously, one way to avoid HWM is to stick to software that’s much more than ten years old, and has a good reputation. But sometimes that’s not possible because the tools for a problem only came out during the last decade. Also, sometimes newer tools really do bring new benefits.
However, it’s much harder to rely on reputation for newer software because “good reputation” often just means “popular”, which often just means “current fad”. Thankfully, there’s a simple and effective trick to avoid being dazzled by hype: just look elsewhere. Instead of looking at the marketing for the core features, look at the things that are usually forgotten. Here are the kinds of things I look at:
#### Backups and Disaster Recovery
Backup support is both super important and regularly an afterthought.
The minimum viable product is full data dump/import functionality, but longer term it’s nice to have things like incremental backups. Some vendors will try to tell you to just copy the data files from disk, but this isn’t guaranteed to give you a consistent snapshot if the software is running live.
There’s no point backing up data if you can’t restore it, and restoration is the difficult part. Yet many people never test the restoration (until they actually need it). About five years ago I was working with a team that had started using a new, cutting-edge, big-data database. The database looked pretty good, but I suggested we do an end-to-end test of the backup support. We loaded a cluster with one of the multi-terabyte datasets we had, did a backup, wiped the data in the cluster and then tried to restore it. Turns out we were the first people to actually try to restore a dataset of that size — the backup “worked”, but the restoration caused the cluster to crash and burn. We filed a bug report with the original database developers and they fixed it.
A recurring theme is backup processes that work on small test datasets but fail on large production ones. I always recommend testing on production-sized datasets, and testing again as production data grows.
For batch jobs, a related concept is restartability. If you’re copying large amounts of data from one place to another, and the job gets interrupted in the middle, what happens? Can you keep going from the middle? Alternatively, can you safely retry by starting from the beginning?
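
Here is a minimal sketch of what restartability can look like for a batch copy job: progress is checkpointed so an interrupted run resumes from the middle instead of starting over. The checkpoint file name and the list standing in for real storage are assumptions:

```python
# A restartable batch copy: progress is persisted after every row, so an
# interrupted run picks up where it stopped.
import json
import os

CHECKPOINT = "copy.ckpt"

def copy_rows(rows, dest):
    start = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            start = json.load(f)["next"]      # resume mid-job
    for i in range(start, len(rows)):
        dest.append(rows[i])                  # stands in for real I/O
        with open(CHECKPOINT, "w") as f:
            json.dump({"next": i + 1}, f)     # record progress
    os.remove(CHECKPOINT)                     # clean finish

out = []
copy_rows(list(range(10)), out)
print(out)
```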
#### Configuration
A lot of HWMed software can only be configured using a GUI or web UI because that’s what’s obvious and looks good in demos and docs. For one thing, this usually means there’s no good way to back up or restore the configuration. So if a team of people use a shared instance over a year or so, forget about trying to restore it if (or when) it breaks. It’s also much more work to keep multiple deployments consistent (e.g., for dev, testing and prod environments) using separate GUIs. In practice, it just doesn’t happen.
I prefer a well-commented config file for software I deploy, if nothing else because it can be checked into source control, and I know I can reproduce the deployment using nothing but what’s checked into source control. If something is configured using a UI, I look for a config export/import function. Even then, that feature is often an afterthought, and often incomplete, so it’s worth testing whether it’s possible to deploy the software without ever needing to manually tweak something in the UI.
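
As a small sketch of that preference, here is a commented config that can live in source control and be loaded without touching a UI. The keys are illustrative, and tomllib ships with Python 3.11 and later:

```python
# Loading a well-commented config file; no GUI involved, fully reproducible.
import tomllib

CONFIG = """
# Port the service listens on.
port = 8080

# Where backups are written; keep this on a separate volume.
backup_dir = "/var/backups/app"
"""

cfg = tomllib.loads(CONFIG)
print(cfg["port"], cfg["backup_dir"])
```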
There seems to be a recent trend for software to be configured using a REST API instead. Honestly, this is the worst of both config files and GUI-based config, and most of the time people end up using [hacky ways to put the config into a file instead][6].
#### Upgrades
Life would be much easier if everything were static; software upgrade support makes everything more complicated. It’s also not usually shown in demos, so the first upgrade often ends the honeymoon with shiny, new software.
For HA distributed systems, you’ll need support for graceful shutdown and a certain amount of forward _and_ backwards compatibility (because you’ll have multiple versions running during upgrades). It’s a common mistake to forget about downgrade support.
Distributed systems are simpler when components have independent replicas that don’t communicate with each other. Anything with clustering (or, worse, consensus algorithms) is often extra tricky to upgrade, and worth testing.
Things that support horizontal scaling don’t necessarily support rescaling without downtime. This is especially true whenever sharding is involved because live resharding isn’t trivial.
Here’s a story from a certain popular container app platform. Demos showed how easy it was to launch an app on the platform, and then showed how easy it was to scale it to multiple replicas. What they didn’t show was the upgrade process: When you pushed a new version of your app, the first thing the platform did was _shut down all running instances of it_. Then it would upload the code to a build server and start building it — meaning downtime for however long the build took, plus the time needed to roll out the new version (if it worked). This problem has been fixed in newer releases of the platform.
#### Security
Even if software has no built-in access control, all-or-nothing access control is easy to implement (e.g., using a reverse proxy with HTTP basic auth). The harder problem is fine-grained access control. Sometimes you don’t care, but in some environments it makes a big difference to what features you can even use.
Some immature software has a quick-and-dirty implementation of user-based access control, typically with a GUI for user management. For everything except the core business tool, this isn’t very useful. For human users, every project I’ve worked on has either been with a small team that just shared a single username/password, or with a large team that wanted integration with OpenID Connect, or LDAP, or whatever centralised single-sign-on (SSO) system was used by the organisation. No one wants to manually manage credentials for every tool, every time someone joins or leaves. Similarly, credentials for applications or other non-human users are better generated using an automatable approach — like a config file or API.
Immature implementations of access control are often missing anything like user groups, but managing permissions at the user level is a time waster. Some SSO integrations only integrate users, not groups, which is a “so close yet so far” when it comes to avoiding permissions busywork.
#### Others
I talked about ignoring the hype, but there’s one good signal you can get from the marketing: whether the software is branded as “enterprise” software. Enterprise software is normally bought by someone other than the end user, so it’s usually pleasant to buy but horrible to use. The only exceptions I know of are enterprise versions of normal consumer software, and enterprise software that the buyer will also have to use. Be warned: even if a company sells enterprise software alongside consumer software, there’s no guarantee that they’re just different versions of the same product. Often they’ll be developed by separate teams with different priorities.
A lot of the stuff in this post can be checked just by skimming through the documentation. If a tool stores data, but the documentation doesn’t mention backups, there probably isn’t any backup support. Even if there is and it’s just not documented, that’s not exactly a good sign either. So, sure, documentation quality is worth evaluating by itself. On the other hand, sometimes the documentation is better than the product, so I never trust a tool until I’ve actually tried it out.
When I first saw Python, I knew that it was a terrible programming language because of the way it used whitespace indentation. Yeah, that was stupid. Later on I learned that 1) the syntax wasn’t a big deal, especially when I’m already indenting C-like languages in the same way, and 2) a lot of practical problems can be solved just by gluing libraries together with a few dozen lines of Python, and that was really useful. We often have strong opinions about syntax that are just prejudice. Syntax can matter, but it’s less important than how the tool integrates with the rest of the system.
### Weighing Pros and Cons
You never need to do deep analysis to detect the most overhyped products. Just check a few of these things and they’ll fail spectacularly.
Even with software that looks solid, I still like to do more tests before entrusting a serious project with it. That’s not because I’m looking for excuses to nitpick and use my favourite tool instead. New tools often really do bring new benefits. But it’s much better to understand the pros and cons of new software, and to use it because the pros outweigh the cons, not because of how slick the Hello World demo is.
--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2019/03/19/hello_world_marketing.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://www.joelonsoftware.com/2001/07/21/good-software-takes-ten-years-get-used-to-it/
[2]: https://www.nemil.com/mongo/
[3]: https://www.sqlalchemy.org/
[4]: https://msdn.microsoft.com/en-us/library/bb308959.aspx
[5]: https://www.mongodb.com/post/36151042528/post-transactional-future
[6]: /2017/07/15/use_terraform_with_vault.html
@ -1,97 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing changes in open source projects)
[#]: via: (https://opensource.com/article/19/3/managing-changes-open-source-projects)
[#]: author: (Ben Cotton (Red Hat, Community Moderator) https://opensource.com/users/bcotton)

Managing changes in open source projects
======

Here's how to create a visible change process to support the community around an open source project.

![scrabble letters: "time for change"][1]

Why bother having a process for proposing changes to your open source project? Why not just let people do what they're doing and merge the features when they're ready? Well, you can certainly do that if you're the only person on the project. Or maybe if it's just you and a few friends.
But if the project is large, you might need to coordinate how some of the changes land. Or, at the very least, let people know a change is coming so they can adjust if it affects the parts they work on. A visible change process is also helpful to the community. It allows them to give feedback that can improve your idea. And if nothing else, it lets people know what's coming so that they can get excited, and maybe get you a little bit of coverage on Opensource.com or the like. Basically, it's "here's what I'm going to do" instead of "here's what I did," and it might save you some headaches as you scramble to QA right before your release.
So let's say I've convinced you that having a change process is a good idea. How do you build one?
**[Watch my talk on this topic]**
<https://www.youtube.com/embed/cVV1K3Junkc>
### Right-size your change process
Before we start talking about what a change process looks like, I want to make it very clear that this is not a one-size-fits-all situation. The smaller your project is—mainly in the number of contributors—the less process you'll probably need. As [Richard Hackman says][2], the number of communication channels in a team goes up exponentially with the number of people on the team. In community-driven projects, this becomes even more complicated as people come and go, and even your long-time contributors might not be checking in every day. So the change process consolidates those communication channels into a single area where people can quickly check to see if they care and then get back to whatever it is they do.
At one end of the scale, there's the command-line Twitter client I maintain. The change process there is, "I pick something I want to work on, probably make a Git branch for it but I might forget that, merge it, and tag a release when I'm out of stuff that I can/want to do." At the other end is Fedora. Fedora isn't really a single project; it's a program of related projects that mostly move in the same direction. More than 200 people a week touch Fedora in a technical sense: spec file maintenance, build submission, etc. This doesn't even include the untold number of people who are working on the upstreams. And these upstreams all have their own release schedules and their own processes for how features land and when. Nobody can keep up with everything on their own, so the change process brings important changes to light.
### Decide who needs to review changes
One of the first things you need to consider when putting together a change process for your community is: "who needs to review changes?" This isn't necessarily approving the changes; we'll come to that shortly. But are there people who should take a look early in the process? Maybe your release engineering or infrastructure teams need to review them to make sure they don't require changes to build infrastructure. Maybe you have a legal review process to make sure licenses are in order. Or maybe you just have a change wrangler who looks to make sure all the required information is included. Or you may choose to do none of these and have change proposals go directly to the community.
But this brings up the next step. Do you want full community feedback or only a select group to provide feedback? My preference, and what we do in Fedora, is to publish changes to the community before they're approved. But the structure of your community may fit a model where some approval body signs off on the change before it is sent to the community as an announcement.
### Determine who approves changes
Even if you lack any sort of organizational structure, someone ends up approving changes. This should reflect the norms and values of your community. The simplest form of approval is the person who proposed the change implements it. Easy peasy! In loosely organized communities, that might work. Fully democratic communities might put it to a community-wide vote. If a certain number or proportion of members votes in favor, the change is approved. Other communities may give that power to an individual or group. They could be responsible for the entire project or certain subsections.
In Fedora, change approval is the role of the Fedora Engineering Steering Committee (FESCo). This is a nine-person body elected by community members. This gives the community the ability to remove members who are not acting in the best interests of the project but also enables relatively quick decisions without large overhead.
In much of this article, I am simply presenting information, but I'm going to take a moment to be opinionated. For any project with a significant contributor base, a model where a small body makes approval decisions is the right approach. A pure democracy can be pretty messy. People who may have no familiarity with the technical ramifications of a change will be able to cast a binding vote. And that process is subject to "brigading," where someone brings along a large group of otherwise-uninterested people to support their position. Think about what it might look like if someone proposed changing the default text editor. Would the decision process be rational?
### Plan how to enforce changes
The other advantage of having a defined approval body is it can mediate conflicts between changes. What happens if two proposed changes conflict? Or if a change turns out to have a negative impact? Someone needs to have the authority to say "this isn't going in after all" or make sure conflicting changes are brought into agreement. Your QA team and processes will be a part of this, and maybe they're the ones who will make the final call.
It's relatively straightforward to come up with a plan if a change doesn't work as expected or is incomplete by the deadline. If you require a contingency plan as part of the change process, then you implement that plan. The harder part is: what happens if someone makes a change that doesn't go through your change process? Here's a secret your friendly project manager doesn't want you to know: you can't force people to go through your process, particularly in community projects.
So if something sneaks in and you don't discover it until you have a release candidate, you have a couple of options: you can let it in, or you can get someone to forcibly remove it. In either case, you'll have someone who is very unhappy. Either the person who made the change, because you kicked their work out, or the people who had to deal with the breakage it caused. (If it snuck in without anyone noticing, then it's probably not that big of a deal.)
The answer, in either case, is going to be social pressure to follow the process. Processes are sometimes painful to follow, but a well-designed and well-maintained process will give more benefit than it costs. In this case, the benefit may be identifying breakages sooner or giving other developers a chance to take advantage of new features that are offered. And it can help prevent slips in the release schedule or hero effort from your QA team.
### Implement your change process
So we've thought about the life of a change proposal in your project. Throw in some deadlines that make sense for your release cadence, and you can now come up with the policy—but how do you implement it?
First, you'll want to identify the required information for a change proposal. At a minimum, I'd suggest the following; a sketch of a matching template follows the list. You may have more requirements depending on the specifics of what your community is making and how it operates.
* Name and summary
* Benefit to the project
* Scope
* Owner
* Test plan
* Dependencies and impacts
* Contingency plan
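
Here is a minimal sketch of that template as a data structure, so a wrangler can check completeness mechanically. The field names follow the list above; everything else, including the completeness rule, is an illustrative assumption:

```python
# A change-proposal template with a trivial completeness check.
from dataclasses import dataclass, fields

@dataclass
class ChangeProposal:
    name_and_summary: str
    benefit: str
    scope: str
    owner: str
    test_plan: str
    dependencies_and_impacts: str
    contingency_plan: str

    def is_complete(self):
        """A change wrangler's first check: no required field left blank."""
        return all(getattr(self, f.name).strip() for f in fields(self))

p = ChangeProposal("Switch default editor", "Less RAM", "All spins",
                   "bcotton", "Boot and edit a file", "Docs update", "")
print(p.is_complete())  # False: the contingency plan is missing
```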
You'll also want one or several change wranglers. These aren't gatekeepers so much as shepherds. They may not have the ability to approve or reject change proposals, but they are responsible for moving the proposals through the process. They check the proposal for completeness, submit it to the appropriate bodies, make appropriate announcements, etc. You can have people wrangle their own changes, but this can be a specialized task and will generally benefit from a dedicated person who does this regularly, instead of making community members do it less frequently.
And you'll need some tooling to manage these changes. This could be a wiki page, a kanban board, a ticket tracker, something else, or a combination of these. But basically, you want to be able to track their state and provide some easy reporting on the status of changes. This makes it easier to know what is complete, what is at risk, and what needs to be deferred to a later release. You can use whatever works best for you, but in general, you'll want to minimize copy-and-pasting and maximize scriptability.
### Remember to iterate
Your change process may seem perfect. Then people will start using it. You'll discover edge cases you didn't consider. You'll find that the community hates a certain part of it. Decisions that were once valid will become invalid over time as technology and society change. In Fedora, our Features process revealed itself to be ambiguous and burdensome, so it was refined into the [Changes][3] process we use today. Even though the Changes process is much better than its predecessor, we still adjust it here and there to make sure it's best meeting the needs of the community.
When designing your process, make sure it fits the size and values of your community. Consider who gets a voice and who gets a vote in approving changes. Come up with a plan for how you'll handle incomplete changes and other exceptions. Decide who will guide the changes through the process and how they'll be tracked. And once you design your change policy, write it down in a place that's easy for your community to find so that they can follow it. But most of all, remember that the process is here to serve the community; the community is not here to serve the process.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/managing-changes-open-source-projects

作者:[Ben Cotton (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/change_words_scrabble_letters.jpg?itok=mbRFmPJ1 (scrabble letters: "time for change")
[2]: https://hbr.org/2009/05/why-teams-dont-work
[3]: https://fedoraproject.org/wiki/Changes/Policy
@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to transition into a Developer Relations career)
[#]: via: (https://opensource.com/article/19/3/developer-relations-career)
[#]: author: (Mary Thengvall https://opensource.com/users/marygrace-0)

How to transition into a Developer Relations career
======

Combine your love for open source software with your love for the community in a way that allows you to invest your time in both.

![][1]

Let's say you've found an open source project you really love and you want to do more than just contribute. Or you love coding, but you don't want to spend the rest of your life interacting more with your computer than you do with people. How do you combine your love for open source software with your love for the community in a way that allows you to invest your time in both?
### Developer Relations: A symbiotic relationship
Enter community management, or as it's more commonly called in the tech industry, Developer Relations (DevRel for short). The goal of DevRel is, at its core, to empower developers. From writing content and creating documentation to supporting local meetups and bubbling up developer feedback internally, everything that a Developer Relations professional does on a day-to-day basis is for the benefit of the community. That's not to say that it doesn't benefit the company as well! After all, as Developer Relations professionals understand, if the community succeeds, so will the company. It's the best kind of symbiotic relationship!
These hybrid roles have been around since shortly after the open source and free software movements started, but the Developer Relations industry—and the Developer Advocate role, in particular—have exploded over the past few years. So what is Developer Relations exactly? Let's start by defining "community" so that we're all on the same page:
> **Community:** A group of people who not only share common principles, but also develop and share practices that help individuals in the group thrive.
This could be a group of people who have gathered around an open source project, a particular topic such as email, or who are all in a similar job function—the DevOps community, for instance.
As I mentioned, the role of a DevRel team is to empower the community by building up, encouraging, and amplifying the voice of the community members. While this will look slightly different at every company, depending on its goals, priorities, and direction, there are a few themes that are consistent throughout the industry.
1. **Listen:** Before making any plans or goals, take the time to listen.
* _Listen to your company stakeholders:_ What do they expect of your team? What do they think you should be responsible for? What metrics are they accustomed to? And what business needs do they care most about?
* _Listen to your customer community:_ What are customers' biggest pain points with your product? Where do they struggle with onboarding? Where does the documentation fail them?
* _Listen to your product's technical audience:_ What problems are they trying to solve? What could be done to make their work life easier? Where do they get their content? What technological advances are they most excited about?
2. **Gather information**
Based on these answers, you can start making your plan. Find the overlapping areas where you can make your product a better fit for the larger technical audience and also make it easier for your customers to use. Figure out what content you can provide that not only answers your community's questions but also solves problems for your company's stakeholders. Learn about the areas where your co-workers struggle and see where your strengths can supplement those needs.
3. **Make connections**
Above all, community managers are responsible for making connections within the community as well as between community members and coworkers. These connections, or "DevRel qualified leads," are what ultimately shows the business value of a community manager's work. By making connections between community members Marie and Bob, who are both interested in the latest developments in Python, or between Marie and your coworker Phil, who's responsible for developer-focused content on your website, you're making your community a valuable source of information for everyone around you.
By getting to know your technical community, you become an expert on what customer needs your product can meet. With great power comes great responsibility. As the expert, you are now responsible for advocating internally for those needs, and you have the potential to make a big difference for your community.
### Getting started
So now what? If you're still with me, congratulations! You might just be a good fit for a Community Manager or Developer Advocate role. I'd encourage you to take community work for a test drive and see if you like the pace and the work. There's a lot of context switching and moving around between tasks, which can be a bit of an adjustment for some folks.
Volunteer to write a blog post for your marketing team (or for [Opensource.com][2]) or help out at an upcoming conference. Apply to speak at a local meetup or offer to advise on a few technical support cases. Get to know your community members on a deeper level.
Above all, Community Managers are 100% driven by a passion for building technical communities and bringing people together. If that resonates with you, it may be time for a career change!
I love talking to professionals that help others grow through community and Developer Relations practices. Don't hesitate to [reach out to me][3] if you have any questions or send me a [DM on Twitter][4].
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/developer-relations-career

作者:[Mary Thengvall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/marygrace-0
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI
[2]: https://opensource.com/how-submit-article
[3]: https://www.marythengvall.com/about
[4]: http://twitter.com/mary_grace
@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Continuous response: The essential process we're ignoring in DevOps)
[#]: via: (https://opensource.com/article/19/3/continuous-response-devops)
[#]: author: (Randy Bias https://opensource.com/users/randybias)

Continuous response: The essential process we're ignoring in DevOps
======

You probably practice CI and CD, but if you aren't thinking about continuous response, you aren't really doing DevOps.

![CICD with gears][1]
Continuous response (CR) is an overlooked link in the DevOps process chain. The two other major links—[continuous integration (CI) and continuous delivery (CD)][2]—are well understood, but CR is not. Yet, CR is the essential element of follow-through required to make customers happy and fulfill the promise of greater speed and agility. At the heart of the DevOps movement is the need for greater velocity and agility to bring businesses into our new digital age. CR plays a pivotal role in enabling this.
### Defining CR
We need a crisp definition of CR to move forward with breaking it down. To put it into context, let's revisit the definitions of continuous integration (CI) and continuous delivery (CD). Here are Gartner's definitions as I wrote them down in 2017:
> [Continuous integration][3] is the practice of integrating, building, testing, and delivering functional software on a scheduled, repeatable, and automated basis.
>
> Continuous delivery is a software engineering approach where teams keep producing valuable software in short cycles while ensuring that the software can be reliably released at any time.
I propose the following definition for CR:
> Continuous response is a practice where developers and operators instrument, measure, observe, and manage their deployed software looking for changes in performance, resiliency, end-user behavior, and security posture and take corrective actions as necessary.
We can argue about whether these definitions are 100% correct. They are good enough for our purposes: framing CR in rough context so we can understand that it is really just the last link in the chain of a holistic cycle.
![The holistic DevOps cycle][4]
What is this multi-colored ring, you ask? It's the famous [OODA Loop][5]. Before continuing, let's touch on what the OODA Loop is and why it's relevant to DevOps. We'll keep it brief though, as there is already a long history between the OODA Loop and DevOps.
#### A brief aside: The OODA Loop
At the heart of core DevOps thinking is using the OODA Loop to create a proactive process for evolving and responding to changing environments. A quick [web search][6] makes it easy to learn the long history between the OODA Loop and DevOps, but if you want the deep dive, I highly recommend [The Tao of Boyd: How to Master the OODA Loop][7].
Here is the "evolved OODA Loop" presented by John Boyd:
![OODA Loop][8]
The most important thing to understand about the OODA Loop is that it's a cognitive process for adapting to and handling changing circumstances.
The second most important thing to understand about the OODA Loop is, since it is a thought process that is meant to evolve, it depends on driving feedback back into the earlier parts of the cycle as you iterate.
|
||||
|
||||
As you can see in the diagram above, CI, CD, and CR are all their own isolated OODA Loops within the overall DevOps OODA Loop. The key here is that each OODA Loop is an evolving thought process for how test, release, and success are measured. Simply put, those who can execute on the OODA Loop fastest will win.
|
||||
|
||||
Put differently, DevOps wants to drive speed (executing the OODA Loop faster) combined with agility (taking feedback and using it to constantly adjust the OODA Loop). This is why CR is a vital piece of the DevOps process. We must drive production feedback into the DevOps maturation process. The DevOps notion of Culture, Automation, Measurement, and Sharing ([CAMS][9]) partially but inadequately captures this, whereas CR provides a much cleaner continuation of CI/CD in my mind.
|
||||
|
||||
### Breaking CR down
|
||||
|
||||
CR has more depth and breadth than CI or CD. This is natural, given that what we're categorizing is the post-deployment process by which our software is taking a variety of actions from autonomic responses to analytics of customer experience. I think, when it's broken down, there are three key buckets that CR components fall into. Each of these three areas forms a complete OODA Loop; however, the level of automation throughout the OODA Loop varies significantly.
|
||||
|
||||
The following table will help clarify the three areas of CR:
|
||||
|
||||
CR Type | Purpose | Examples
|
||||
---|---|---
|
||||
Real-time | Autonomics for availability and resiliency | Auto-scaling, auto-healing, developer-in-the-loop automated responses to real-time failures, automated root-cause analysis
|
||||
Analytic | Feature/fix pipeline | A/B testing, service response times, customer interaction models
|
||||
Predictive | History-based planning | Capacity planning, hardware failure prediction models, cost-basis analysis
|
||||
|
||||
_Real-time CR_ is probably the best understood of the three. This kind of CR is where our software has been instrumented for known issues and can take an immediate, automated response (autonomics). Examples of known issues include responding to high or low demand (e.g., elastic auto-scaling), responding to expected infrastructure resource failures (e.g., auto-healing), and responding to expected distributed application failures (e.g., circuit breaker pattern). In the future, we will see machine learning (ML) and similar technologies applied to automated root-cause analysis and event correlation, which will then provide a path towards "no ops" or "zero ops" operational models.
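
To make the real-time bucket concrete, here is a minimal Python sketch of the circuit breaker pattern mentioned above. The class, thresholds, and timings are invented for illustration; production-grade breakers (such as those in resilience4j or a service mesh) handle half-open states, metrics, and concurrency far more carefully.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: fail fast after repeated errors, retry after a cooldown."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures   # consecutive failures before tripping
        self.reset_after = reset_after     # seconds to stay open before retrying
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed; allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the count
        return result
```

The point of the sketch is its OODA shape: the breaker observes failures, orients against a threshold, decides to open or close, and acts by short-circuiting calls, all without a human in the loop.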

_Analytic CR_ is still the most manual of the CR processes. This kind of CR is focused primarily on observing end-user experience and providing feedback to the product development cycle to add features or fix existing functionality. Examples of this include traditional A/B website testing, measuring page-load times or service-response times, post-mortems of service failures, and so on.
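
Even though analytic CR is mostly manual, the arithmetic behind a staple like A/B testing is small enough to show inline. The following is an illustrative sketch only: the visitor counts are fabricated, and a real experiment program would also handle sample-size planning and repeated peeking.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Fabricated numbers: variant A converted 120 of 2,400 visitors, B 156 of 2,380.
z = two_proportion_z(120, 2400, 156, 2380)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```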

_Predictive CR_, due to the resurgence of AI and ML, is one of the innovation areas in CR. It uses historical data to predict future needs. ML techniques are allowing this area to become more fully automated. Examples include automated and predictive capacity planning (primarily for the infrastructure layer), automated cost-basis analysis of service delivery, and real-time reallocation of infrastructure resources to resolve capacity and hardware failure issues before they impact the end-user experience.
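
At its simplest, predictive CR means fitting a model over historical telemetry and extrapolating. Here is a deliberately tiny sketch (the weekly utilization figures are fabricated, and real capacity planning would use richer models and seasonality) showing a least-squares trend projected forward to a threshold:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Fabricated weekly peak CPU utilization (%) for the last eight weeks.
weeks = list(range(8))
peaks = [52, 55, 54, 58, 61, 63, 66, 68]

slope, intercept = fit_line(weeks, peaks)
threshold = 85.0
weeks_left = (threshold - intercept) / slope - weeks[-1]
print(f"Trend: +{slope:.1f} points/week; ~{weeks_left:.0f} weeks until {threshold:.0f}% peak")
```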

### Diving deeper on CR

CR, like CI or CD, is a DevOps process supported by a set of underlying tools. CI and CD are not Jenkins, unit tests, or automated deployments alone. They are a process flow. Similarly, CR is a process flow that begins with the delivery of new code via CD, which open source tools like [Spinnaker][10] give us. CR is not monitoring, machine learning, or auto-scaling, but a diverse set of processes that occur after code deployment, supported by a variety of tools. CR is also different in two specific ways.

First, it is different because, by its nature, it is broader. The general software development lifecycle (SDLC) process means that most [CI/CD processes][11] are similar. However, code running in production differs from app to app or service to service. This means that CR differs as well.

Second, CR is different because it is nascent. Like CI and CD before it, the process and tools existed before they had a name. Over time, CI/CD became more normalized and easier to scope. CR is new, so there is plenty of room to discuss what's in or out. I welcome your comments in this regard and hope you will run with these ideas.

### CR: Closing the loop on DevOps

DevOps arose because of the need for greater service delivery velocity and agility. Essentially, DevOps is an extension of agile software development practices to an operational mindset. It's a direct response to the flexibility and automation possibilities that cloud computing affords. However, much of the thinking on DevOps to date has focused on deploying the code to production and ends there. But our jobs don't end there. As professionals, we must also make certain our code is behaving as expected, we are learning as it runs in production, and we are taking that learning back into the product development process.

This is where CR lives and breathes. DevOps without CR is the same as saying there is no OODA Loop around the DevOps process itself. It's like saying that operators' and developers' jobs end with the code being deployed. We all know this isn't true. Customer experience is the ultimate measurement of our success. Can people use the software or service without hiccups or undue friction? If not, we need to fix it. CR is the final link in the DevOps chain that enables delivering the truest customer experience.

If you aren't thinking about continuous response, you aren't doing DevOps. Share your thoughts on CR, and tell me what you think about the concept and the definition.

* * *

_This article is based on [The Essential DevOps Process We're Ignoring: Continuous Response][12], which originally appeared on the Cloudscaling blog under a [CC BY 4.0][13] license and is republished with permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/continuous-response-devops

Author: [Randy Bias][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/randybias
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: https://opensource.com/article/18/8/what-cicd
[3]: https://www.gartner.com/doc/3187420/guidance-framework-continuous-integration-continuous
[4]: https://opensource.com/sites/default/files/uploads/holistic-devops-cycle-smaller.jpeg (The holistic DevOps cycle)
[5]: https://en.wikipedia.org/wiki/OODA_loop
[6]: https://www.google.com/search?q=site%3Ablog.b3k.us+ooda+loop&rlz=1C5CHFA_enUS730US730&oq=site%3Ablog.b3k.us+ooda+loop&aqs=chrome..69i57j69i58.8660j0j4&sourceid=chrome&ie=UTF-8#q=devops+ooda+loop&*
[7]: http://www.artofmanliness.com/2014/09/15/ooda-loop/
[8]: https://opensource.com/sites/default/files/uploads/ooda-loop-2-1.jpg (OODA Loop)
[9]: https://itrevolution.com/devops-culture-part-1/
[10]: https://www.spinnaker.io
[11]: https://opensource.com/article/18/12/cicd-tools-sysadmins
[12]: http://cloudscaling.com/blog/devops/the-essential-devops-process-were-ignoring-continuous-response/
[13]: https://creativecommons.org/licenses/by/4.0/
@ -1,77 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why do organizations have open secrets?)
[#]: via: (https://opensource.com/open-organization/19/3/open-secrets-bystander-effect)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger/users/maryjo)

Why do organizations have open secrets?
======

Everyone sees something, but no one says anything—that's the bystander effect. And it's damaging your organizational culture.

![][1]

[The five characteristics of an open organization][2] must work together to ensure healthy and happy communities inside our organizations. Even the most transparent teams, departments, and organizations require equal doses of additional open principles—like inclusivity and collaboration—to avoid dysfunction.

The "open secrets" phenomenon illustrates the limitations of transparency when unaccompanied by additional open values. [A recent article in Harvard Business Review][3] explored the way certain organizational issues—widely apparent but seemingly impossible to solve—lead to discomfort in the workforce. Authors Insiya Hussain and Subra Tangirala performed a number of studies and found that the more people in an organization who knew about a particular "secret," be it a software bug or a personnel issue, the less likely any one person would be to report the issue or otherwise _do_ something about it.

Hussain and Tangirala explain that so-called "open secrets" are the result of a [bystander effect][4], which comes into play when people think, "Well, if _everyone_ knows, surely _I_ don't need to be the one to point it out." The authors mention several causes of this behavior, but let's take a closer look at why open secrets might be circulating in your organization—with an eye on what an open leader might do to [create a safe space for whistleblowing][5].

### 1. Fear

People don't want to complain about a known problem only to have their complaint be the one that initiates the quality assurance, integrity, or redress process. What if new information emerges that makes their report irrelevant? What if they are simply _wrong_?

At the root of all bystander behavior is fear—fear of repercussions, fear of losing reputation or face, or fear that the very thing you've stood up against turns out to be a non-issue for everyone else. Going on record as "the one who reported" carries with it a reputational risk that is very intimidating.

The first step to ensuring that your colleagues report malicious behavior, code, or _whatever_ needs reporting is to create a fear-free workplace. We're inundated with the idea that making a mistake is bad or wrong. We're taught that we have to "protect" our reputations. However, the qualities of a good and moral character are _always_ subjective.

_Tip for leaders_: Reward courage and strength every time you see it, regardless of whether you deem it "necessary." For example, if everyone in a meeting except one person agrees on something, spend time on that person's concerns. Be patient and kind in helping that person change their mind, and be open-minded about that person being able to change yours. Brains work in different ways; never forget that one person might have a perspective that changes the lay of the land.

### 2. Policies

Usually, complaint procedures and policies are designed to ensure fairness towards all parties involved in the complaint. Discouraging false reporting and ensuring such fairness in situations like these is certainly a good idea. But policies might actually deter people from standing up—because a victim might be discouraged from reporting an experience if the formal policy for reporting doesn't make them feel protected. Standing up to someone in a position of power and saying "Your behavior is horrid, and I'm not going to take it" isn't easy for anyone, but it's particularly difficult for marginalized groups.

The "open secrets" phenomenon illustrates the limitations of transparency when unaccompanied by additional open values.

To ensure fairness to all parties, we need to adjust for victims. As part of making the decision to file a report, a victim will be dealing with a variety of internal fears. They'll wonder what might happen to their self-worth if they're put in a situation where they have to talk to someone about their experience. They'll wonder if they'll be treated differently if they're the one who stands up, and how that will affect their future working environments and relationships. Especially in a situation involving an open secret, asking a victim to be strong is asking them to trust that numerous other people will back them up. This fear shouldn't be part of their workplace experience; it's just not fair.

Remember that if someone feels responsible for a problem (e.g., "Crap, that's _my code_ that's bringing down the whole server!"), then that person might be afraid to point out the mistake. _The important thing is dealing with the situation, not finding someone to blame._ Policies that make people feel personally protected—no matter what the situation—are absolutely integral to ensuring the organization deals with open secrets.

_Tip for leaders_: Make sure your team's or organization's policy regarding complaints makes anonymous reporting possible. Asking a victim to "go on record" puts them in the position of having to defend their perspective. If they feel they're the victim of harassment, they're feeling as if they are harassed _and_ being asked to defend their experience. This means they're doing double the work of the perpetrator, who only has to defend themselves.

### 3. Marginalization

Women, LGBTQ people, racial minorities, people with physical disabilities, people who are neuro-atypical, and other marginalized groups often find themselves in positions that make them feel routinely dismissed, disempowered, disrespected—and generally dissed. These feelings are valid (and shouldn't be too surprising to anyone who has spent some time looking at issues of diversity and inclusion). Our emotional safety matters, and we tend to be quite protective of it—even if it means letting open secrets go unaddressed.

Marginalized groups have enough worries weighing on them, even when they're _not_ running the risk of damaging their relationships with others at work. Being seen and respected in both an organization and society more broadly is difficult enough _without_ drawing potentially negative attention.

Policies that make people feel personally protected—no matter what the situation—are absolutely integral to ensuring the organization deals with open secrets.

Luckily, in recent years attitudes towards marginalized groups have become more visible, and we as a society have begun to talk about our experiences as "outliers." We've also come to realize that marginalized groups aren't actually "outliers" at all; we can thank the colorful, beautiful internet for that.

_Tip for leaders_: Diversity and inclusion play a role in dispelling open secrets. Make sure your diversity and inclusion practices and policies truly encourage a diverse workplace.

### Model the behavior

The best way to create a safe workplace and give people the ability to call attention to pervasive problems found within it is to _model the behaviors that you want other people to display_. Dysfunction occurs in cultures that don't pay attention to and value the principles upon which they are built. In order to discourage bystander behavior, transparent, inclusive, adaptable, and collaborative communities must create policies that support calling attention to open secrets and then dealing empathetically with whatever the issue may be.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/3/open-secrets-bystander-effect

Author: [Laura Hilliger][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/laurahilliger/users/maryjo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_secret_ingredient_520x292.png?itok=QbKzJq-N
[2]: https://opensource.com/open-organization/resources/open-org-definition
[3]: https://hbr.org/2019/01/why-open-secrets-exist-in-organizations
[4]: https://www.psychologytoday.com/us/basics/bystander-effect
[5]: https://opensource.com/open-organization/19/2/open-leaders-whistleblowers
@ -1,145 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Codecademy vs. The BBC Micro)
[#]: via: (https://twobithistory.org/2019/03/31/bbc-micro.html)
[#]: author: (Two-Bit History https://twobithistory.org)

Codecademy vs. The BBC Micro
======

In the late 1970s, the computer, which for decades had been a mysterious, hulking machine that only did the bidding of corporate overlords, suddenly became something the average person could buy and take home. An enthusiastic minority saw how great this was and rushed to get a computer of their own. For many more people, the arrival of the microcomputer triggered helpless anxiety about the future. An ad from a magazine at the time promised that a home computer would “give your child an unfair advantage in school.” It showed a boy in a smart blazer and tie eagerly raising his hand to answer a question, while behind him his dim-witted classmates look on sullenly. The ad and others like it implied that the world was changing quickly and, if you did not immediately learn how to use one of these intimidating new devices, you and your family would be left behind.

In the UK, this anxiety metastasized into concern at the highest levels of government about the competitiveness of the nation. The 1970s had been, on the whole, an underwhelming decade for Great Britain. Both inflation and unemployment had been high. Meanwhile, a series of strikes put London through blackout after blackout. A government report from 1979 fretted that a failure to keep up with trends in computing technology would “add another factor to our poor industrial performance.”[1][1] The country already seemed to be behind in the computing arena—all the great computer companies were American, while integrated circuits were being assembled in Japan and Taiwan.

In an audacious move, the BBC, a public service broadcaster funded by the government, decided that it would solve Britain’s national competitiveness problems by helping Britons everywhere overcome their aversion to computers. It launched the _Computer Literacy Project_, a multi-pronged educational effort that involved several TV series, a few books, a network of support groups, and a specially built microcomputer known as the BBC Micro. The project was so successful that, by 1983, an editor for BYTE Magazine wrote, “compared to the US, proportionally more of Britain’s population is interested in microcomputers.”[2][2] The editor marveled that there were more people at the Fifth Personal Computer World Show in the UK than had been to that year’s West Coast Computer Faire. Over a sixth of Great Britain watched an episode in the first series produced for the _Computer Literacy Project_ and 1.5 million BBC Micros were ultimately sold.[3][3]

[An archive][4] containing every TV series produced and all the materials published for the _Computer Literacy Project_ was put on the web last year. I’ve had a huge amount of fun watching the TV series and trying to imagine what it would have been like to learn about computing in the early 1980s. But what’s turned out to be more interesting is how computing was _taught_. Today, we still worry about technology leaving people behind. Wealthy tech entrepreneurs and governments spend lots of money trying to teach kids “to code.” We have websites like Codecademy that make use of new technologies to teach coding interactively. One would assume that this approach is more effective than a goofy ’80s TV series. But is it?

### The Computer Literacy Project

The microcomputer revolution began in 1975 with the release of [the Altair 8800][5]. Only two years later, the Apple II, TRS-80, and Commodore PET had all been released. Sales of the new computers exploded. In 1978, the BBC explored the dramatic societal changes these new machines were sure to bring in a documentary called “Now the Chips Are Down.”

The documentary was alarming. Within the first five minutes, the narrator explains that microelectronics will “totally revolutionize our way of life.” As eerie synthesizer music plays, and green pulses of electricity dance around a magnified microprocessor on screen, the narrator argues that the new chips are why “Japan is abandoning its ship building, and why our children will grow up without jobs to go to.” The documentary goes on to explore how robots are being used to automate car assembly and how the European watch industry has lost out to digital watch manufacturers in the United States. It castigates the British government for not doing more to prepare the country for a future of mass unemployment.

The documentary was supposedly shown to the British Cabinet.[4][6] Several government agencies, including the Department of Industry and the Manpower Services Commission, became interested in trying to raise awareness about computers among the British public. The Manpower Services Commission provided funds for a team from the BBC’s education division to travel to Japan, the United States, and other countries on a fact-finding trip. This research team produced a report that cataloged the ways in which microelectronics would indeed mean major changes for industrial manufacturing, labor relations, and office work. In late 1979, it was decided that the BBC should make a ten-part TV series that would help regular Britons “learn how to use and control computers and not feel dominated by them.”[5][7] The project eventually became a multimedia endeavor similar to the _Adult Literacy Project_, an earlier BBC undertaking involving both a TV series and supplemental courses that helped two million people improve their reading.

The producers behind the _Computer Literacy Project_ were keen for the TV series to feature “hands-on” examples that viewers could try on their own if they had a microcomputer at home. These examples would have to be in BASIC, since that was the language (really the entire shell) used on almost all microcomputers. But the producers faced a thorny problem: Microcomputer manufacturers all had their own dialects of BASIC, so no matter which dialect they picked, they would inevitably alienate some large fraction of their audience. The only real solution was to create a new BASIC—BBC BASIC—and a microcomputer to go along with it. Members of the British public would be able to buy the new microcomputer and follow along without worrying about differences in software or hardware.

The TV producers and presenters at the BBC were not capable of building a microcomputer on their own. So they put together a specification for the computer they had in mind and invited British microcomputer companies to propose a new machine that met the requirements. The specification called for a relatively powerful computer because the BBC producers felt that the machine should be able to run real, useful applications. Technical consultants for the _Computer Literacy Project_ also suggested that, if it had to be a BASIC dialect that was going to be taught to the entire nation, then it had better be a good one. (They may not have phrased it exactly that way, but I bet that’s what they were thinking.) BBC BASIC would make up for some of BASIC’s usual shortcomings by allowing for recursion and local variables.[6][8]

The BBC eventually decided that a Cambridge-based company called Acorn Computers would make the BBC Micro. In choosing Acorn, the BBC passed over a proposal from Clive Sinclair, who ran a company called Sinclair Research. Sinclair Research had brought mass-market microcomputing to the UK in 1980 with the Sinclair ZX80. Sinclair’s new computer, the ZX81, was cheap but not powerful enough for the BBC’s purposes. Acorn’s new prototype computer, known internally as the Proton, would be more expensive but more powerful and expandable. The BBC was impressed. The Proton was never marketed or sold as the Proton because it was instead released in December 1981 as the BBC Micro, also affectionately called “The Beeb.” You could get a 16k version for £235 and a 32k version for £335.

In 1980, Acorn was an underdog in the British computing industry. But the BBC Micro helped establish the company’s legacy. Today, the world’s most popular microprocessor instruction set is the ARM architecture. “ARM” now stands for “Advanced RISC Machine,” but originally it stood for “Acorn RISC Machine.” ARM Holdings, the company behind the architecture, was spun out from Acorn in 1990.

![Picture of the BBC Micro.][9] _A bad picture of a BBC Micro, taken by me at the Computer History Museum in Mountain View, California._

### The Computer Programme

A dozen different TV series were eventually produced as part of the _Computer Literacy Project_, but the first of them was a ten-part series known as _The Computer Programme_. The series was broadcast over ten weeks at the beginning of 1982. A million people watched each week-night broadcast of the show; a quarter million watched the reruns on Sunday and Monday afternoon.

The show was hosted by two presenters, Chris Serle and Ian McNaught-Davis. Serle plays the neophyte while McNaught-Davis, who had professional experience programming mainframe computers, plays the expert. This was an inspired setup. It made for [awkward transitions][10]—Serle often goes directly from a conversation with McNaught-Davis to a bit of walk-and-talk narration delivered to the camera, and you can’t help but wonder whether McNaught-Davis is still standing there out of frame or what. But it meant that Serle could voice the concerns that the audience would surely have. He can look intimidated by a screenful of BASIC and can ask questions like, “What do all these dollar signs mean?” At several points during the show, Serle and McNaught-Davis sit down in front of a computer and essentially pair program, with McNaught-Davis providing hints here and there while Serle tries to figure it out. It would have been much less relatable if the show had been presented by a single, all-knowing narrator.

The show also made an effort to demonstrate the many practical applications of computing in the lives of regular people. By the early 1980s, the home computer had already begun to be associated with young boys and video games. The producers behind _The Computer Programme_ sought to avoid interviewing “impressively competent youngsters,” as that was likely “to increase the anxieties of older viewers,” a demographic that the show was trying to attract to computing.[7][11] In the first episode of the series, Gill Nevill, the show’s “on location” reporter, interviews a woman who has bought a Commodore PET to help manage her sweet shop. The woman (her name is Phyllis) looks to be 60-something years old, yet she has no trouble using the computer to do her accounting and has even started using her PET to do computer work for other businesses, which sounds like the beginning of a promising freelance career. Phyllis says that she wouldn’t mind if the computer work grew to replace her sweet shop business since she enjoys the computer work more. This interview could instead have been an interview with a teenager about how he had modified _Breakout_ to be faster and more challenging. But that would have been encouraging to almost nobody. On the other hand, if Phyllis, of all people, can use a computer, then surely you can too.

While the show features lots of BASIC programming, what it really wants to teach its audience is how computing works in general. The show explains these general principles with analogies. In the second episode, there is an extended discussion of the Jacquard loom, which accomplishes two things. First, it illustrates that computers are not based only on magical technology invented yesterday—some of the foundational principles of computing go back two hundred years and are about as simple as the idea that you can punch holes in card to control a weaving machine. Second, the interlacing of warp and weft threads is used to demonstrate how a binary choice (does the weft thread go above or below the warp thread?) is enough, when repeated over and over, to produce enormous variation. This segues, of course, into a discussion of how information can be stored using binary digits.

Later in the show there is a section about a steam organ that plays music encoded in a long, segmented roll of punched card. This time the analogy is used to explain subroutines in BASIC. Serle and McNaught-Davis lay out the whole roll of punched card on the floor in the studio, then point out the segments where it looks like a refrain is being repeated. McNaught-Davis explains that a subroutine is what you would get if you cut out those repeated segments of card and somehow added an instruction to go back to the original segment that played the refrain for the first time. This is a brilliant explanation and probably one that stuck around in people’s minds for a long time afterward.
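
That cut-out-the-refrain explanation maps directly onto what a subroutine still is today. Purely as an illustration (the “notes” below are invented), the same idea in a few lines of Python:

```python
def refrain():
    # The repeated segment of punched card, cut out and kept in one place.
    print("play: C E G C")

def song():
    print("play: verse one")
    refrain()  # "go back" to the segment that plays the refrain
    print("play: verse two")
    refrain()

song()
```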

I’ve picked out only a few examples, but I think in general the show excels at demystifying computers by explaining the principles that computers rely on to function. The show could instead have focused on teaching BASIC, but it did not. This, it turns out, was very much a conscious choice. In a retrospective written in 1983, John Radcliffe, the executive producer of the _Computer Literacy Project_, wrote the following:

> If computers were going to be as important as we believed, some genuine understanding of this new subject would be important for everyone, almost as important perhaps as the capacity to read and write. Early ideas, both here and in America, had concentrated on programming as the main route to computer literacy. However, as our thinking progressed, although we recognized the value of “hands-on” experience on personal micros, we began to place less emphasis on programming and more on wider understanding, on relating micros to larger machines, encouraging people to gain experience with a range of applications programs and high-level languages, and relating these to experience in the real world of industry and commerce…. Our belief was that once people had grasped these principles, at their simplest, they would be able to move further forward into the subject.

Later, Radcliffe writes, in a similar vein:

> There had been much debate about the main explanatory thrust of the series. One school of thought had argued that it was particularly important for the programmes to give advice on the practical details of learning to use a micro. But we had concluded that if the series was to have any sustained educational value, it had to be a way into the real world of computing, through an explanation of computing principles. This would need to be achieved by a combination of studio demonstration on micros, explanation of principles by analogy, and illustration on film of real-life examples of practical applications. Not only micros, but mini computers and mainframes would be shown.

I love this, particularly the part about mini-computers and mainframes. The producers behind _The Computer Programme_ aimed to help Britons get situated: Where had computing been, and where was it going? What can computers do now, and what might they do in the future? Learning some BASIC was part of answering those questions, but knowing BASIC alone was not seen as enough to make someone computer literate.

### Computer Literacy Today

If you google “learn to code,” the first result you see is a link to Codecademy’s website. If there is a modern equivalent to the _Computer Literacy Project_, something with the same reach and similar aims, then it is Codecademy.

“Learn to code” is Codecademy’s tagline. I don’t think I’m the first person to point this out—in fact, I probably read this somewhere and I’m now ripping it off—but there’s something revealing about using the word “code” instead of “program.” It suggests that the important thing you are learning is how to decode the code, how to look at a screen’s worth of Python and not have your eyes glaze over. I can understand why to the average person this seems like the main hurdle to becoming a professional programmer. Professional programmers spend all day looking at computer monitors covered in gobbledygook, so, if I want to become a professional programmer, I better make sure I can decipher the gobbledygook. But dealing with syntax is not the most challenging part of being a programmer, and it quickly becomes almost irrelevant in the face of much bigger obstacles. Also, armed only with knowledge of a programming language’s syntax, you may be able to _read_ code but you won’t be able to _write_ code to solve a novel problem.

I recently went through Codecademy’s “Code Foundations” course, which is the course that the site recommends you take if you are interested in programming (as opposed to web development or data science) and have never done any programming before. There are a few lessons in there about the history of computer science, but they are perfunctory and poorly researched. (Thank heavens for [this noble internet vigilante][12], who pointed out a particularly egregious error.) The main focus of the course is teaching you about the common structural elements of programming languages: variables, functions, control flow, loops. In other words, the course focuses on what you would need to know to start seeing patterns in the gobbledygook.

To be fair to Codecademy, they offer other courses that look meatier. But even courses such as their “Computer Science Path” focus almost exclusively on programming and concepts that can be represented in programs. One might argue that this is the whole point—Codecademy’s main feature is that it gives you little interactive programming lessons with automated feedback. There also just isn’t enough room to cover more, because there is only so much you can stuff into somebody’s brain in a little automated lesson. But the producers at the BBC tasked with kicking off the _Computer Literacy Project_ also had this problem; they recognized that they were limited by their medium and that “the amount of learning that would take place as a result of the television programmes themselves would be limited.”[8][13] With similar constraints on the volume of information they could convey, they chose to emphasize general principles over learning BASIC. Couldn’t Codecademy replace a lesson or two with an interactive visualization of a Jacquard loom weaving together warp and weft threads?

I’m banging the drum for “general principles” loudly now, so let me just explain what I think they are and why they are important. There’s a book by J. Clark Scott about computers called _But How Do It Know?_ The title comes from the anecdote that opens the book. A salesman is explaining to a group of people that a thermos can keep hot food hot and cold food cold. A member of the audience, astounded by this new invention, asks, “But how do it know?” The joke, of course, is that the thermos is not perceiving the temperature of the food and then making a decision—the thermos is just constructed so that cold food inevitably stays cold and hot food inevitably stays hot. People anthropomorphize computers in the same way, believing that computers are digital brains that somehow “choose” to do one thing or another based on the code they are fed. But learning a few things about how computers work, even at a rudimentary level, takes the homunculus out of the machine. That’s why the Jacquard loom is such a good go-to illustration. It may at first seem like an incredible device. It reads punch cards and somehow “knows” to weave the right pattern! The reality is mundane: Each row of holes corresponds to a thread, and where there is a hole in that row the corresponding thread gets lifted. Understanding this may not help you do anything new with computers, but it will give you the confidence that you are not dealing with something magical. We should impart this sense of confidence to beginners as soon as we can.
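
The loom explanation even survives being restated as code, which underlines the no-homunculus point: nothing below “knows” anything, holes simply lift threads. A toy sketch (the row pattern is made up):

```python
# One row of holes on a Jacquard card: 1 = hole punched, 0 = no hole.
row = [1, 0, 0, 1, 1, 0, 1, 0]

# Each position corresponds to a thread; a hole lifts that thread.
lifted = [thread for thread, hole in enumerate(row) if hole]
print(f"Threads lifted on this pass: {lifted}")
```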

Alas, it’s possible that the real problem is that nobody wants to learn about the Jacquard loom. Judging by how Codecademy emphasizes the professional applications of what it teaches, many people probably start using Codecademy because they believe it will help them “level up” their careers. They believe, not unreasonably, that the primary challenge will be understanding the gobbledygook, so they want to “learn to code.” And they want to do it as quickly as possible, in the hour or two they have each night between dinner and collapsing into bed. Codecademy, which after all is a business, gives these people what they are looking for—not some roundabout explanation involving a machine invented in the 18th century.

The _Computer Literacy Project_, on the other hand, is what a bunch of producers and civil servants at the BBC thought would be the best way to educate the nation about computing. I admit that it is a bit elitist to suggest we should laud this group of people for teaching the masses what they were incapable of seeking out on their own. But I can’t help but think they got it right. Lots of people first learned about computing using a BBC Micro, and many of these people went on to become successful software developers or game designers. [As I’ve written before][14], I suspect learning about computing at a time when computers were relatively simple was a huge advantage. But perhaps another advantage these people had is shows like _The Computer Programme_, which strove to teach not just programming but also how and why computers can run programs at all. After watching _The Computer Programme_, you may not understand all the gobbledygook on a computer screen, but you don’t really need to, because you know that, whatever the “code” looks like, the computer is always doing the same basic thing. After a course or two on Codecademy, you understand some flavors of gobbledygook, but to you a computer is just a magical machine that somehow turns gobbledygook into running software. That isn’t computer literacy.

_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][15] on Twitter or subscribe to the [RSS feed][16] to make sure you know when a new post is out._

_Previously on TwoBitHistory…_

> FINALLY some new damn content, amirite?
>
> Wanted to write an article about how Simula bought us object-oriented programming. It did that, but early Simula also flirted with a different vision for how OOP would work. Wrote about that instead!<https://t.co/AYIWRRceI6>
>
> — TwoBitHistory (@TwoBitHistory) [February 1, 2019][17]

1. Robert Albury and David Allen, Microelectronics, report (1979). [↩︎][18]
2. Gregg Williams, “Microcomputing, British Style”, Byte Magazine, 40, January 1983, accessed on March 31, 2019, <https://archive.org/stream/byte-magazine-1983-01/1983_01_BYTE_08-01_Looking_Ahead#page/n41/mode/2up>. [↩︎][19]
3. John Radcliffe, “Toward Computer Literacy,” Computer Literacy Project Archive, 42, accessed March 31, 2019, [https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/media/Towards Computer Literacy.pdf][20]. [↩︎][21]
4. David Allen, “About the Computer Literacy Project,” Computer Literacy Project Archive, accessed March 31, 2019, <https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/history>. [↩︎][22]
5. ibid. [↩︎][23]
6. Williams, 51. [↩︎][24]
7. Radcliffe, 11. [↩︎][25]
8. Radcliffe, 5. [↩︎][26]

--------------------------------------------------------------------------------

via: https://twobithistory.org/2019/03/31/bbc-micro.html

Author: [Two-Bit History][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: tmp.05mfBL4kP8#fn:1
[2]: tmp.05mfBL4kP8#fn:2
[3]: tmp.05mfBL4kP8#fn:3
[4]: https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/
[5]: https://twobithistory.org/2018/07/22/dawn-of-the-microcomputer.html
[6]: tmp.05mfBL4kP8#fn:4
[7]: tmp.05mfBL4kP8#fn:5
[8]: tmp.05mfBL4kP8#fn:6
[9]: https://twobithistory.org/images/beeb.jpg
[10]: https://twitter.com/TwoBitHistory/status/1112372000742404098
[11]: tmp.05mfBL4kP8#fn:7
[12]: https://twitter.com/TwoBitHistory/status/1111305774939234304
[13]: tmp.05mfBL4kP8#fn:8
[14]: https://twobithistory.org/2018/09/02/learning-basic.html
[15]: https://twitter.com/TwoBitHistory
[16]: https://twobithistory.org/feed.xml
[17]: https://twitter.com/TwoBitHistory/status/1091148050221944832?ref_src=twsrc%5Etfw
[18]: tmp.05mfBL4kP8#fnref:1
[19]: tmp.05mfBL4kP8#fnref:2
[20]: https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/media/Towards%20Computer%20Literacy.pdf
[21]: tmp.05mfBL4kP8#fnref:3
[22]: tmp.05mfBL4kP8#fnref:4
[23]: tmp.05mfBL4kP8#fnref:5
[24]: tmp.05mfBL4kP8#fnref:6
[25]: tmp.05mfBL4kP8#fnref:7
[26]: tmp.05mfBL4kP8#fnref:8
@ -1,62 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Kubeflow is evolving without ksonnet)
[#]: via: (https://opensource.com/article/19/4/kubeflow-evolution)
[#]: author: (Jonathan Gershater (Red Hat) https://opensource.com/users/jgershat/users/jgershat)

How Kubeflow is evolving without ksonnet
======

There are big differences in how open source communities handle change compared to closed source vendors.

![Chat bubbles][1]

Many software projects depend on modules that are run as separate open source projects. When one of those modules loses support (as is inevitable), the community around the main project must determine how to proceed.

This situation is happening right now in the [Kubeflow][2] community. Kubeflow is an evolving open source platform for developing, orchestrating, deploying, and running scalable and portable machine learning workloads on [Kubernetes][3]. Recently, the primary supporter of the Kubeflow component [ksonnet][4] announced that it would [no longer support][5] the software.

When a piece of software loses support, the decision-making process (and the outcome) differs greatly depending on whether the software is open source or closed source.

### A cellphone analogy

To illustrate the differences in how an open source community and a closed source/single software vendor proceed when a component loses support, let's use an example from hardware design.

Suppose you buy cellphone Model A and it stops working. When you try to get it repaired, you discover the manufacturer is out of business and no longer offering support. Since the cellphone's design is proprietary and closed, no other manufacturer can support it.

Now, suppose you buy cellphone Model B, it stops working, and its manufacturer is also out of business and no longer offering support. However, Model B's design is open, and another company is in business manufacturing, repairing, and upgrading Model B cellphones.

This illustrates one difference between software written using closed and open source principles. If the vendor of a closed source software solution goes out of business, support disappears with the vendor, unless the vendor sells the software's design and intellectual property. But if the vendor of an open source solution goes out of business, there is no intellectual property to sell. By the principles of open source, the source code is available for anyone to use and modify, under license, so another vendor can continue to maintain the software.

### How Kubeflow is evolving without ksonnet

The ramifications of ksonnet's backers' decision to cease development illustrate Kubeflow's open and collaborative design process. Kubeflow's designers have several options, such as replacing ksonnet, adopting and developing ksonnet, etc. Because Kubeflow is an open source project, all options are discussed in the open on the Kubeflow mailing list. Some of the community's suggestions include:

> * Should we look at projects that are CNCF/Apache projects e.g. [helm][6]
> * I would opt for back to the basics. KISS. How about plain old jsonnet + kubectl + makefile/scripts? That's how e.g. the coreos [prometheus operator][7] does it. It would also lower the entry barrier (no new tooling) and let vendors of k8s (gke, openshift, etc) easily build on top of that.
> * I vote for using a simple, _programmatic_ context, be it manual jsonnet + kubectl, or simple Python scripts + Python K8s client, or any tool we can build on top of these.

The members of the mailing list are discussing and debating alternatives to ksonnet and will arrive at a decision about how to continue development. What I love about the open source way of adapting is that it's done communally. Unlike closed source software, which is often designed by one vendor, the organizations that are members of an open source project can collaboratively steer the project in the direction they best see fit. As Kubeflow evolves, it will benefit from an open, collaborative decision-making framework.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/kubeflow-evolution

Author: [Jonathan Gershater (Red Hat)][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/jgershat/users/jgershat
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: https://www.kubeflow.org/
[3]: https://github.com/kubernetes
[4]: https://ksonnet.io/
[5]: https://blogs.vmware.com/cloudnative/2019/02/05/welcoming-heptio-open-source-projects-to-vmware/
[6]: https://landscape.cncf.io
[7]: https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus
@ -1,72 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Making computer science curricula as adaptable as our code)
[#]: via: (https://opensource.com/open-organization/19/4/adaptable-curricula-computer-science)
[#]: author: (Amarachi Achonu https://opensource.com/users/amarach1/users/johnsontanner3)

Making computer science curricula as adaptable as our code
======

No two computer science students are alike—so teachers need curricula that are open and adaptable.

![][1]

Educators in elementary computer science face a lack of adaptable curricula. Calls for more modifiable, non-rigid curricula are therefore enticing—assuming that such curricula could benefit teachers by increasing their ability to mold resources for individual classrooms and, ultimately, produce better teaching experiences and learning outcomes.

Our team at [CSbyUs][2] noticed this scarcity, and we've created an open source web platform to facilitate more flexible, adaptable, and tested curricula for computer science educators. The mission of the CSbyUs team has always been to use open source technology to improve pedagogy in computer science, which includes increasing support for teachers. Therefore, this project primarily seeks to use open source principles—and the benefits inherent in them—to expand the possibilities of modern curriculum-making and support teachers by increasing access to more adaptable curricula.

### Rigid, monotonous, mundane

Why is the lack of adaptable curricula a problem for computer science education? Rigid curricula dominate most classrooms today, primarily through monotonous and routinely distributed lesson plans. Many of these plans are developed without the capacity for dynamic use and application to different classroom atmospheres. In contrast, an _adaptable_ curriculum is one that would _account_ for dynamic and changing classroom environments.

An adaptable curriculum means freedom and more options for educators. This is especially important in elementary-level classrooms, where instructors are introducing students to computer science for the first time, and in classrooms with higher populations of groups typically underrepresented in the field of computer science. Here especially, it's advantageous for instructors to have access to curricula that explicitly consider diverse classroom landscapes and grant the freedom necessary to adapt to specific student populations.

### Making it adaptable

This kind of adaptability is certainly at work at CSbyUs. Hayley Barton—a member of both the organization's curriculum-making team and its teaching team, and a senior at Duke University majoring in Economics and minoring in Computer Science and Spanish—recently demonstrated the benefits of adaptable curricula during an engagement in the field. Reflecting on her teaching experiences, Barton describes a major reason why curriculum adaptation is necessary in computer science classrooms. "We are seeing the range of students that we work with," she says, "and trying to make the curriculum something that can be tailored to different students."

An adaptable curriculum means freedom and more options for educators.

A more adaptable curriculum is necessary for truly challenging students, Barton continues.

The need for change became most evident to Barton when working with students to make their own preliminary apps. Barton collaborated with students who appeared to be at different levels of focus and attention. On the one hand, a group of more advanced students took well to the style of a demonstrative curriculum and remained attentive and engaged with the task. On the other hand, another group of students seemed to have more trouble focusing in the classroom or even being motivated to engage with topics of computer science skills. Witnessing this difference among students made it clear that the curriculum would need to be adaptable in multiple ways to engage more students at their level.

"We want to challenge every student without making it too challenging for any individual student," Barton says. "Thinking about those things definitely feeds into how I'm thinking about the curriculum in terms of making it accessible for all the students."

As a curriculum-maker, she subsequently uses experiences like this to make changes to the original curriculum.

"If those other students have one-on-one time themselves, they could be doing even more amazing things with their apps," says Barton.

Taking this advice, Barton would potentially incorporate into the curriculum more emphasis on cultivating students' sense of ownership in computer science, since this is important to their focus and productivity. For this, students may be afforded that sense of one-on-one time. The result will affect the next round of teachers who use the curriculum.

For these changes to be effective, the onus is on teachers to notice the dynamics of the classroom. In the future, curriculum adaptation may depend on paying particular attention to, and identifying, these subtle differences in curriculum style. Identifying and commenting on these subtleties opens the possibility of applying a different strategy, and those are the changes that get applied to the curriculum.

Curriculum adaptation should be iterative, as it involves learning from experience, returning to the drawing board, making changes, and finally, utilizing the curriculum again.

"We've gone through a lot of stages of development," Barton says. "The goal is to have this kind of back and forth, where the curriculum is something that's been tested, where we've used our feedback, and also used other research that we've done, to make it something that's actually impactful."

Hayley's "back and forth" process is an iterative process of curriculum-making. Between utilizing curricula and modifying curricula, instructors like Hayley can take a once-rigid curriculum and mold it to any degree that the user sees fit—again and again. This iterative process depends on tests performed first in the classroom, and it depends on the teacher's rationale and reflection on how curricula uniquely pan out for them.

Adaptability of curriculum is the most important principle on which the CSbyUs platform is built. Much like Hayley's process of curriculum-making, curriculum adaptation should be _iterative_, as it involves learning from experience, returning to the drawing board, making changes, and finally, utilizing the curriculum again. Once launched, the CSbyUs website will document this iterative process.

The open-focused pedagogy behind the CSbyUs platform, then, brings to life the flexibility inherent in the process of curriculum adaptation. First, it invites and collects the valuable first-hand perspectives of real educators working with real curricula to produce real learning. Next, it capitalizes on an iterative process of development—one familiar to open source programmers—to enable modifications to curriculum (and the documentation of those modifications). Finally, it transforms the way teachers encounter curricula by helping them make selections from different versions of both modified curriculum and "the original." Our platform's open source strategy is crucial to cultivating a hub of flexible curricula for educators.

Open source practices can be a key difference in making rigid curricula more moldable for educators. Furthermore, since this approach effectively melds open source technologies with open-focused pedagogy, open pedagogy can potentially provide flexibility for educators teaching various curricula across disciplines.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/4/adaptable-curricula-computer-science

Author: [Amarachi Achonu][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/amarach1/users/johnsontanner3
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_051x_0.png?itok=gIzbmxuI
[2]: https://csbyus.herokuapp.com/
@ -1,247 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (D as a C Replacement)
[#]: via: (https://theartofmachinery.com/2019/04/05/d_as_c_replacement.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)

D as a C Replacement
======

Sircmpwn (the main developer behind the [Sway Wayland compositor][1]) recently wrote a blog post about how he thinks [Rust is not a good C replacement][2]. I don’t know if he’d like the [D programming language][3] either, but it’s become a C replacement for me.

### My C to D Story

My story is like a lot of systems programmers’ stories. At one time, C was my go-to language for most programming. One day I realised that most of my C programs kept reimplementing things from C++: dynamic arrays, better strings, polymorphic classes, etc. So I tried using C++ instead, and at first I loved it. RAII and classes and generics made programming fun again. Even better was the promise that if I read all these books on C++, and learned to master things like template metaprogramming, I’d become an almighty god of systems programming and my code would be amazing. But learning more eventually had the opposite effect: (in hindsight) my code actually got worse, and I fell out of love. I remember reading Scott Meyers’ Effective C++ and realising it was really more about _ineffective_ C++ — and that most of my C++ code until then was broken. Let’s face it: C might be fiddly to use, but it has a kind of elegance, and “elegant” is rarely a word you hear when C++ is involved.

Apparently, a lot of ex-C C++ programmers end up going back to C. In my case, I discovered D. It’s also not perfect, but I use it because it feels to me a lot more like the `C += 1` that C++ was meant to be. Here’s an example that’s very superficial, but I think is representative. Take this simple C program:

```
#include <stdio.h>

int main()
{
    printf("1 + 1 = %d!\n", 1 + 1);
    return 0;
}
```

Here’s a version using the C++ standard library:

```
#include <iostream>

int main()
{
    std::cout << "1 + 1 = " << 1 + 1 << "!" << std::endl;
    return 0;
}
```

Here’s an idiomatic D version:

```
import std.stdio;

void main()
{
    writef("1 + 1 = %d!\n", 1 + 1);
}
```

As I said, it’s a superficial example, but I think it shows a general difference in philosophy between C++ and D. (If I wanted to make the difference even clearer, I’d use an example that needed `iomanip` in C++.)

Update: Unlike in C, [D’s format strings can work with custom types][4]. Stefan Rohe has also pointed out that [D supports compile-time checking of format strings][5] using its metaprogramming features — unlike C, which does it through built-in compiler special casing that can’t be used with custom code.

This [article about C++ member function pointers][6] happens to also be a good explanation of the origins of D. It’s a good read if you’re a programming language nerd like me, but here’s my TL;DR for everyone else: C++ member function pointers are supposed to feel like a low-level feature (like normal function pointers are), but the complexity and diversity of implementations mean they’re really high level. The complexity of the implementations is because of the subtleties of the rules about what you can do with them. The author explains the implementations from several C++ compilers, including what’s “easily [his] favorite implementation”: the elegantly simple Digital Mars C++ implementation. (“Why doesn’t everyone else do it this way?”) The DMC compiler was written by Walter Bright, who invented D.

D has classes and templates and other core features of C++, but designed by someone who has spent a heck of a lot of time thinking about the C++ spec and how things could be simpler. Walter once said that his experiences implementing C++ templates made him consider not including them in D at all, until he realised they didn’t need to be so complex.

Here’s a quick tour of D from the point of view of incrementally improving C.

### `-betterC`

D compilers support a `-betterC` switch that disables [the D runtime][7] and all high-level features that depend on it. The example C code above can be translated directly into betterC:

```
import core.stdc.stdio;

extern(C):

int main()
{
    printf("1 + 1 = %d!\n", 1 + 1);
    return 0;
}
```

```
$ dmd -betterC example.d
$ ./example
1 + 1 = 2!
```

The resulting binary looks a lot like the equivalent C binary. In fact, if you rewrote a C library in betterC, it could still link to code that had been compiled against the C version, and work without changes. Walter Bright wrote a good article walking through all [the changes needed to convert a real C program to betterC][8].

You don’t actually need the `-betterC` switch just to write C-like code in D. It’s only needed in special cases where you simply can’t have the D runtime. But let me point out some of my favourite D features that still work with `-betterC`.

#### `static assert()`

This allows verifying assumptions at compile time.

```
static assert(kNumInducers < 16);
```

Systems code often makes assumptions about alignment or structure size or other things. With `static assert`, it’s possible to not only document these assumptions, but trigger a compilation error if someone breaks them by adding a struct member or something.
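
For example, here’s a minimal sketch of guarding a layout assumption (the `PacketHeader` struct and its expected size are invented for illustration):

```
// Hypothetical wire-format header, assumed for illustration
struct PacketHeader
{
    ushort type;
    ushort length;
    uint checksum;
}

// Compilation fails if someone adds a member or changes the layout
static assert(PacketHeader.sizeof == 8);
static assert(PacketHeader.alignof == 4);
```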

#### Slices

Typical C code is full of pointer/length pairs, and it’s a common bug for them to go out of sync. Slices are a simple and super-useful abstraction for a range of memory defined by a pointer and length. Instead of code like this:

```
buffer_p += offset;
buffer_len -= offset; // Got to update both
```

You can use this much-less-bug-prone alternative:

```
buffer = buffer[offset..$];
```

A slice is nothing but a pointer/length pair with first-class syntactic support.
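
Here’s a minimal sketch of how that behaves in practice (the variable names are made up):

```
void main()
{
    int[] buffer = [10, 20, 30, 40, 50];
    int[] tail = buffer[2 .. $];  // $ is buffer's length; no copy is made
    assert(tail.length == 3 && tail[0] == 30);

    tail[0] = 99;  // a slice is a view into the same memory
    assert(buffer[2] == 99);
}
```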

Update: [Walter Bright has written more about the pointer/length pair problem in C][9].

#### Compile Time Function Evaluation (CTFE)

[Many functions can be evaluated at compile time.][10]

```
long factorial(int n) pure
{
    assert (n >= 0 && n <= 20);
    long ret = 1;
    foreach (j; 2..n+1) ret *= j;
    return ret;
}

// kNumThings and Permutation are placeholders, defined here only so the
// example compiles
enum kNumThings = 5;
struct Permutation { ubyte[kNumThings] order; }

// Statically allocated array
// Size is calculated at compile time
Permutation[factorial(kNumThings)] permutation_table;
```

#### `scope` Guards

Code in one part of a function is often coupled to cleanup code in a later part. Failing to match this code up correctly is another common source of bugs (especially when multiple control flow paths are involved). D’s scope guards make it simple to get this stuff right:

```
import core.stdc.stdlib : malloc, free;

void* p = malloc(128);
// free() will be called when the current scope exits
scope (exit) free(p);
// Put whatever if statements, or loops, or early returns you like here
```

You can even have multiple scope guards in a scope, or have nested scopes. The cleanup routines will be called when needed, in the right order.

D also supports RAII using struct destructors.

#### `const` and `immutable`

It’s a popular myth that `const` in C and C++ is useful for compiler optimisations. Walter Bright has complained that every time he thought of a new `const`-based optimisation for C++, he eventually discovered it didn’t work in real code. So he made some changes to `const` semantics for D, and added `immutable`. You can read more in the [D `const` FAQ][11].
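
Here’s a minimal sketch (not from the FAQ) of how the two qualifiers differ in practice:

```
// immutable data can never change, through any reference
immutable int[] table = [1, 2, 3];

// const is a read-only view: it accepts mutable or immutable arguments
int sum(const int[] data) pure
{
    int total = 0;
    foreach (x; data) total += x;  // reading is fine; writing would not compile
    return total;
}

void main()
{
    int[] scratch = [4, 5, 6];
    assert(sum(table) == 6);    // immutable converts to const
    assert(sum(scratch) == 15); // so does mutable data
}
```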

#### `pure`

Functional purity can be enforced. I’ve written about [some of the benefits of the `pure` keyword before][12].
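
As a small sketch of what the compiler enforces:

```
int counter;  // mutable global state

int addCounter(int x)
{
    return x + counter;  // impure: reads a mutable global
}

int twice(int x) pure
{
    // return addCounter(x);  // compile error: pure function cannot call impure function
    return x * 2;
}
```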

#### `@safe`

SafeD is a subset of D that forbids risky language features like pointer typecasts and inline assembly. The compiler enforces that code marked `@safe` doesn’t use these features, so that risky code can be limited to the small percentage of the application that needs it. You can [read more about SafeD in this article][13].
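
Here’s a minimal sketch of the kind of thing `@safe` allows and rejects:

```
@safe int total(int[] values)
{
    int sum;
    foreach (v; values) sum += v;  // ordinary slice access is fine in @safe code
    return sum;
}

@safe void demo(int* p)
{
    // Each of these would be rejected by the compiler inside @safe code:
    // auto q = cast(long*) p;  // reinterpreting pointer cast
    // p += 1;                  // pointer arithmetic
    // asm { nop; }             // inline assembly
}
```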

#### Metaprogramming

Like I hinted earlier, metaprogramming has got a bad reputation among some C++ programmers. But [D has the advantage of making metaprogramming less interesting][14], so D programmers tend to just do it when it’s useful, and not as a fun puzzle.

D has great support for [compile-time reflection][15]. In most cases, compile-time reflection can solve the same problems as run-time reflection, but with compile-time guarantees. Compile-time reflection can also be used to implement run-time reflection where it’s truly needed.

Need the names of an enumerated type as an array?

```
enum State
{
    stopped,
    starting,
    running,
    stopping,
}

string[] state_names = [__traits(allMembers, State)];
```

Thanks to D’s metaprogramming, the standard library has many nice, type-safe tools, like this [compile-time checked bit flag enum][16].

I’ve written more about [using metaprogramming in `-betterC` code][17].

#### No Preprocessor

Okay, this is a non-feature as a feature, but D has no equivalent to C’s preprocessor. All its sane use-cases are replaced with native language features, like [manifest constants][18] and [templates][19]. That includes proper [modules][20] support, which means D can break free of the limitations of that old `#include` hack.
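
As a small sketch of the kind of replacement meant here (the names are invented for illustration):

```
// C: #define kBufferSize 4096
enum kBufferSize = 4096;  // manifest constant: no storage, known at compile time

// C: #define SQUARE(x) ((x) * (x))
auto square(T)(T x) { return x * x; }  // a template function, type-checked by the compiler

static assert(square(64) == 4096);  // evaluated via CTFE
```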

### Normal D

C-like D code can be written and compiled as normal D code without the `-betterC` switch. The difference is that normal D code is linked to the D runtime, which supports higher-level features, the most obvious ones being garbage collection and object-oriented classes. Some people have confused the D runtime with something like the Java virtual machine, so I once wrote [an article explaining exactly what it is][7] (spoiler: it’s like the C and C++ runtimes, but with more features).

Even with the runtime, compiled D is not much different from compiled C++. Sometimes I like to write throwaway code to, say, experiment with a new Linux system call or something. I used to think the best language for that was plain old C, but now I always use D.

D doesn’t natively support `#include`ing C code, but for nice APIs that don’t have a lot of preprocessor craziness (like most of Linux) I usually just write [ad-hoc bindings][21]. Many popular C libraries have maintained D bindings, which can be found in the [Dub registry][22], or in [the Derelict project][23], or in the newer [BindBC project][24]. There are also tools for automated bindings, including the awesome [dpp tool][25] that brings `#include` support directly to D code.
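
As a minimal sketch of what an ad-hoc binding can look like (assuming Linux, where `pid_t` is an `int`):

```
// Hand-written declaration for a C function from libc
extern(C) int getpid() nothrow @nogc;

void main()
{
    import std.stdio : writeln;
    writeln("pid: ", getpid());  // calls straight into libc
}
```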

Update: This post has got a lot of attention from people who’ve never heard of D before. If you’re interested in learning D, I recommend

* [The DLang Tour][26] for a quick dive into the language
* [Ali Çehreli’s Programming in D book][27] if you prefer something in-depth
* [The D forum Learn group][28] or [IRC channel][29] to get answers to your questions

--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2019/04/05/d_as_c_replacement.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://swaywm.org/
[2]: https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-replacement.html
[3]: https://dlang.org
[4]: https://wiki.dlang.org/Defining_custom_print_format_specifiers
[5]: https://dlang.org/phobos/std_format.html#format
[6]: https://www.codeproject.com/Articles/7150/Member-Function-Pointers-and-the-Fastest-Possible
[7]: /2017/06/04/what_is_the_d_runtime.html
[8]: https://dlang.org/blog/2018/06/11/dasbetterc-converting-make-c-to-d/
[9]: https://www.digitalmars.com/articles/b44.html
[10]: https://dlang.org/spec/function.html#interpretation
[11]: https://dlang.org/articles/const-faq.html
[12]: /2016/03/28/dirtying_pure_functions_can_be_useful.html
[13]: https://dlang.org/blog/2016/09/28/how-to-write-trusted-code-in-d/
[14]: https://epi.github.io/2017/03/18/less_fun.html
[15]: https://dlang.org/spec/traits.html
[16]: https://dlang.org/phobos/std_typecons.html#BitFlags
[17]: /2018/08/13/inheritance_and_polymorphism_2.html
[18]: https://dlang.org/spec/enum.html#manifest_constants
[19]: https://tour.dlang.org/tour/en/basics/templates
[20]: https://ddili.org/ders/d.en/modules.html
[21]: https://wiki.dlang.org/Bind_D_to_C
[22]: https://code.dlang.org/
[23]: https://github.com/DerelictOrg
[24]: https://github.com/BindBC
[25]: https://github.com/atilaneves/dpp
[26]: https://tour.dlang.org/
[27]: https://ddili.org/ders/d.en/index.html
[28]: https://forum.dlang.org/group/learn
[29]: irc://irc.freenode.net/d
@ -1,69 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Anti-lasers could give us perfect antennas, greater data capacity)
[#]: via: (https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Anti-lasers could give us perfect antennas, greater data capacity
======
Anti-lasers get close to providing a 100% efficient signal channel for data, say engineers.
![Guirong Hao / Valery Brozhinsky / Getty Images][1]

Playing laser light backwards could adjust data transmission signals so that they perfectly match receiving antennas. The fine-tuning of signals like this, not achieved with such detail before, could create more capacity for ever-increasing data demand.

"Imagine, for example, that you could adjust a cell phone signal exactly the right way, so that it is perfectly absorbed by the antenna in your phone," says Stefan Rotter of the Institute for Theoretical Physics of Technische Universität Wien (TU Wien) in a [press release][2].

Rotter is talking about “Random Anti-Laser,” a project he has been a part of. The idea behind it is that if one could time-reverse a laser, then the laser (right now considered the best light source ever built) becomes the best available light absorber. Perfect absorption of a signal wave would mean that all of the data-carrying energy is absorbed by the receiving device, making it 100% efficient.

**[ Related: [What is 5G wireless? How it will change networking as we know it?][3] ]**

“The easiest way to think about this process is in terms of a movie showing a conventional laser sending out laser light, which is played backwards,” the TU Wien article says. The anti-laser is the exact opposite of the laser — instead of sending specific colors perfectly when energy is applied, it receives specific colors perfectly.

Counter-intuitively, it’s the random scattering of light in all directions that’s behind the engineering. However, the Vienna, Austria-based university group performs precise calculations on those scattering, splitting signals, and that lets the researchers harness the light.

### How the anti-laser technology works

The microwave-based, experimental device the researchers have built in the lab to prove the idea doesn’t just potentially apply to cell phones; wireless internet of things (IoT) devices would also get more data throughput. How it works: The device consists of an antenna-containing chamber encompassed by cylinders, all arranged haphazardly, the researchers explain. The cylinders distribute an elaborate, arbitrary wave pattern “similar to [throwing] stones in a puddle of water, at which water waves are deflected.”

Measurements then take place to identify exactly how the signals return. The team involved, which also includes collaborators from the University of Nice, France, then “characterize the random structure and calculate the wave front that is completely swallowed by the central antenna at the right absorption strength.” Some 99.8% of the signal is absorbed, making it virtually perfect. Data throughput, range, and other variables thus improve.

**[ [Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][4] ]**

Achieving perfect antennas has been pretty much only theoretically possible for engineers to date. Reflected energy (RF back into the transmitter from antenna inefficiencies) has always been an issue in general. Reflections from surfaces, too, have always been a problem.

“Think about a mobile phone signal that is reflected several times before it reaches your cell phone,” Rotter says. It’s not easy to get the tuning right — as the antennas’ physical locations move, the reflecting surfaces change.

### Scattering lasers

Scattering, similar to that used in this project, is becoming more important in communications overall. “Waves that are being scattered in a complex way are really all around us,” the group says.

An example is random lasers, on which the group’s anti-laser is based. Unlike traditional lasers, they do not use reflective surfaces but trap scattered light and then “emit a very complicated, system-specific laser field when supplied with energy.” The anti-random-laser developed by Rotter and his group simply reverses that in time: “Instead of a light source that emits a specific wave depending on its random inner structure, it is also possible to build the perfect absorber.” That perfect absorber is the anti-random-laser.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html#tk.rss_all

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/data_cubes_transformation_conversion_by_guirong_hao_gettyimages-1062387214_plus_abstract_binary_by_valerybrozhinsky_gettyimages-865457032_3x2_2400x1600-100790211-large.jpg
[2]: https://www.tuwien.ac.at/en/news/news_detail/article/126574/
[3]: https://www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,75 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nyansa’s Voyance expands to the IoT)
[#]: via: (https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Nyansa’s Voyance expands to the IoT
======

![Brandon Mowinkel \(CC0\)][1]

Nyansa announced today that their flagship Voyance product can now apply its AI-based secret sauce to [IoT][2] devices, over and above the networking equipment and IT endpoints it could already manage.

Voyance – a network management product that leverages AI to automate the discovery of devices on the network and identify unusual behavior – has been around for two years now, and Nyansa says that it’s being used to observe a total of 25 million client devices operating across roughly 200 customer networks.

**More on IoT:**

* [Most powerful Internet of Things companies][3]
* [10 Hot IoT startups to watch][4]
* [The 6 ways to make money in IoT][5]
* [Blockchain, service-centric networking key to IoT success][7]
* [Getting grounded in IoT networking and security][8]
* [Building IoT-ready networks must become a priority][9]
* [What is the Industrial IoT? [And why the stakes are so high]][10]

It’s a software-only product (available either via public SaaS or private cloud) that works by scanning a customer’s network and identifying every device attached to it, then establishing a behavioral baseline that will let it flag suspicious actions (e.g., sending a lot more data than other devices of its kind, connecting to unusual servers) and even perform automated root-cause analysis of network issues.

The process doesn’t happen instantaneously, particularly the creation of the baseline, but it’s designed to be minimally invasive to existing network management frameworks and easy to implement.

Nyansa said that the medical field has been one of the key targets for the newly IoT-enabled iteration of Voyance, and one early customer – Baptist Health, a Florida-based healthcare company that runs four hospitals and several other clinics and practices – said that Voyance IoT has offered a new level of visibility into the business’ complex array of connected diagnostic and treatment machines.

“In the past we didn’t have the ability to identify security concerns in this way, related to rogue devices on the enterprise network, and now we’re able to do that,” said CISO Thad Phillips.

While spiraling network complexity isn’t an issue confined to the IoT, there’s a strong argument that the number and variety of devices connected to an IoT-enabled network represent a new challenge to network management, particularly in light of the fact that many such devices aren’t particularly secure.

“They’re not manufactured by networking vendors or security vendors, so from a performance standpoint, they have a lot of quirks … and on the security side, that’s sort of a big problem there as well,” said Anand Srinivas, Nyansa’s co-founder and CTO.

Enabling the Voyance platform to identify and manage IoT devices along with traditional endpoints seems to be mostly a matter of adding new device signatures to the system, but Enterprise Management Associates research director Shamus McGillicuddy said that, while the system’s designed for automation and ease of use, AIOps products like Voyance do need to be managed to make sure that they’re functioning correctly.

“Anything based on machine learning is going to take a while to make sure it understands your environment and you might have to retrain it,” he said. “There’s always going to be more and more things connecting to IP networks, and it’s just going to be a question of building up a database.”

Voyance IoT is available now. Pricing starts at $16,000 per year, and goes up with the number of total devices managed. (Current Voyance users can manage up to 100 IoT devices at no additional cost.)
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/geometric_architecture_ceiling_structure_lines_connections_networks_perspective_by_brandon_mowinkel_cc0_via_unsplash_2400x1600-100788530-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world
@ -1,59 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Two tools to help visualize and simplify your data-driven operations)
[#]: via: (https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all)
[#]: author: (Kent McNeil, Vice President of Software, Ciena Blue Planet )

Two tools to help visualize and simplify your data-driven operations
======
Amid the rising complexity of networks and the influx of data, service providers are striving to keep operational complexity under control. Blue Planet’s Kent McNeil explains how they can turn this challenge into a huge opportunity, and in fact reduce operational effort, by exploiting state-of-the-art graph database visualization and delta-based federation technologies.
![danleap][1]

**Build the picture: Visualize your data**

The Internet of Things (IoT), 5G, smart technology, virtual reality – all these applications guarantee one thing for communications service providers (CSPs): more data. As networks become increasingly overwhelmed by mounds of data, CSPs are on the hunt for ways to make the most of the intelligence collected and are looking for ways to monetize their services, provide more customizable offerings, and enhance their network performance.

Customer analytics has gone some way towards fulfilling this need for greater insights, but with the rise in the volume and variety of consumer and IoT applications, the influx of data will increase at a phenomenal rate. The data includes not only customer-related data, but also device and network data, adding complexity to the picture. CSPs must harness this information to understand the relationships between any two things, to understand the connections within their data, and, ultimately, to leverage it for a better customer experience.

**See the upward graphical trend with graph databases**

Traditional relational databases certainly have their use, but graph databases offer a novel perspective. The visual representation of the component parts enables CSPs to understand and analyze the characteristics, as well as to act in a timely manner when confronted with any discrepancies.

Graph databases can help CSPs tackle this new challenge, ensuring the data is not just stored, but also processed and analyzed. They enable complex network questions to be asked and answered, ensuring that CSPs are not sidelined as “dumb pipes” in the IoT movement.

The use of graph databases has started to become more mainstream, as businesses see the benefits. IBM conducted a generic industry study, entitled “The State of Graph Databases Worldwide”, which found that people are moving to graph databases for speed, performance enhancement of applications, and streamlined operations. Graph technology adoption, current or planned, is highest for network and IT operations, followed by master data management. Performance is a key factor for CSPs, as is personalization, which enables support for more tailored service offerings.

Another advantage of graph databases for CSPs is that of unravelling the complexity of network inventory in a clear, visualized picture – this capability gives CSPs a competitive advantage as speed and performance become increasingly paramount. This need for speed and reliability will increase tenfold as IoT continues its impressive global ramp-up, and operational complexity will grow as the influx of data generated by IoT further challenges the scalability of existing operational environments.

**Change the tide of data with delta-based federation**

New data, updated data, corrected data, deleted data – all of it needs to be managed, in line with regulations, and instantaneously. But this capability does not exist in the reality of many CSPs’ Operational Support Systems (OSS). Many still battle with updating data and rely on full uploads of network inventory in order to perform key service fulfillment and assurance tasks. This method is time-intensive and risky due to potential conflicts and inaccuracies. With data being accessed from a variety of systems, CSPs must have a way to effectively hone in on only what is required.

Integrating network data into one simplified system limits the impact on the legacy OSS systems. This allows each OSS to continue its specific role, yet to feed data into a single interface, hence enabling teams to see the complete picture and gain efficiencies while launching new services or pinpointing and resolving service and network issues.

A delta-based federation model ensures that an accurate picture is presented, and only essential changes are conducted reliably and quickly. This simplified method filters the delta changes, reducing the time involved in updating and minimizing the system load and risks. A validation process takes place to catch any errors or issues with the data, so CSPs can apply checks and retain control over modifications.

**Ride the wave**

Gartner predicts 25 billion connected things globally by 2021, and CSPs are already struggling with the current numbers, which Gartner estimates at 14.2 billion in 2019. Over the last decade, CSPs have faced significant rises in the levels of data consumed as demand for new services and higher-bandwidth applications has taken off. This data wave is set to continue, and CSPs have two important tools at their disposal to help them ride it. First, CSPs have specialist legacy OSS already in place, which they can leverage as a basis for integrating data and implementing optimized systems. Second, they can utilize new technologies in database inventory management: graph databases and delta-based federation. The advantages of effectively integrating network data, visualizing it, and creating a clear map of the interconnections enable CSPs to make critical decisions more quickly and accurately, resulting in more optimized and informed service operations.

[Watch this video to learn more about Blue Planet][2]

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all

作者:[Kent McNeil, Vice President of Software, Ciena Blue Planet][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-165721901-100793858-large.jpg
[2]: https://www.blueplanet.com/resources/IT-plus-network-now-a-powerhouse-combination.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVideo&utm_medium=sponsoredpost4
@ -1,146 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What SDN is and where it’s going)
[#]: via: (https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

What SDN is and where it’s going
======
Software-defined networking (SDN) established a foothold in cloud computing, intent-based networking, and network security, with Cisco, VMware, Juniper and others leading the charge.
![seedkin / Getty Images][1]

Hardware reigned supreme in the networking world until the emergence of software-defined networking (SDN), a category of technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources.

SDN's origins can be traced to a research collaboration between Stanford University and the University of California at Berkeley that ultimately yielded the [OpenFlow][2] protocol in the 2008 timeframe.

**[ Learn more about the [difference between SDN and NFV][3]. Get regularly scheduled insights by [signing up for Network World newsletters][4] ]**

OpenFlow is only one of the first SDN canons, but it's a key component because it started the networking software revolution. OpenFlow defined a programmable network protocol that could help manage and direct traffic among routers and switches no matter which vendor made the underlying router or switch.

In the years since its inception, SDN has evolved into a reputable networking technology offered by key vendors including Cisco, VMware, Juniper, Pluribus and Big Switch. The Open Networking Foundation develops myriad open-source SDN technologies as well.

"Datacenter SDN no longer attracts breathless hype and fevered expectations, but the market is growing healthily, and its prospects remain robust," wrote Brad Casemore, IDC research vice president, data center networks, in a recent report, [_Worldwide Datacenter Software-Defined Networking Forecast, 2018–2022_][5]. "Datacenter modernization, driven by the relentless pursuit of digital transformation and characterized by the adoption of cloudlike infrastructure, will help to maintain growth, as will opportunities to extend datacenter SDN overlays and fabrics to multicloud application environments."

SDN will be increasingly perceived as a form of established, conventional networking, Casemore said.

IDC estimates that the worldwide data center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017–2022 period. The market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from 2016.

In 2017, the physical network represented the largest segment of the worldwide datacenter SDN market, accounting for revenue of nearly $2.2 billion, or about 42% of the overall total revenue. In 2022, however, the physical network is expected to claim about $3.65 billion in revenue, slightly less than the $3.68 billion attributable to network virtualization overlays/SDN controller software but more than the $3.18 billion for SDN applications.

“We're now at a point where SDN is better understood, where its use cases and value propositions are familiar to most datacenter network buyers and where a growing number of enterprises are finding that SDN offerings offer practical benefits,” Casemore said. “With SDN growth and the shift toward software-based network automation, the network is regaining lost ground and moving into better alignment with a wave of new application workloads that are driving meaningful business outcomes.”

### **What is SDN?**

The idea of programmability is the basis for the most precise definition of what SDN is: technology that separates the control plane management of network devices from the underlying data plane that forwards network traffic.

IDC broadens that definition of SDN by stating: “Datacenter SDN architectures feature software-defined overlays or controllers that are abstracted from the underlying network hardware, offering intent- or policy-based management of the network as a whole. This results in a datacenter network that is better aligned with the needs of application workloads through automated (thereby faster) provisioning, programmatic network management, pervasive application-oriented visibility, and where needed, direct integration with cloud orchestration platforms.”

The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network.

Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for [Pluribus][6].

“At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.”

### How does SDN support edge computing, IoT and remote access?

A variety of networking trends have played into the central idea of SDN. Distributing computing power to remote sites, moving data center functions to the [edge][7], adopting cloud computing, and supporting [Internet of Things][8] environments – each of these efforts can be made easier and more cost efficient via a properly configured SDN environment.

Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said. So users can more easily segment an IoT application from the production world if they want, for example.

Some SDN controllers have the smarts to see that the network is getting congested and, in response, pump up bandwidth or processing to make sure remote and edge components don’t suffer latency.

SDN technologies also help in distributed locations that have few IT personnel on site, such as an enterprise branch office or service provider central office, said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks.

“Naturally these places require remote and centralized delivery of connectivity, visibility and security. SDN solutions that centralize and abstract control and automate workflows across many places in the network, and their devices, improve operational reliability, speed and experience,” Bushong said.

### **How does SDN support intent-based networking?**

Intent-based networking ([IBN][9]) has a variety of components, but basically is about giving network administrators the ability to define what they want the network to do, and having an automated network management platform create the desired state and enforce policies to ensure what the business wants happens.

“If a key tenet of SDN is abstracted control over a fleet of infrastructure, then the provisioning paradigm and dynamic control to regulate infrastructure state is necessarily higher level,” Bushong said. “Policy is closer to declarative intent, moving away from the minutia of individual device details and imperative and reactive commands.”

IDC says that intent-based networking “represents an evolution of SDN to achieve even greater degrees of operational simplicity, automated intelligence, and closed-loop functionality.”

For that reason, IBN represents a notable milestone on the journey toward autonomous infrastructure that includes a self-driving network, which will function much like the self-driving car, producing desired outcomes based on what network operators and their organizations wish to accomplish, Casemore stated.

“While the self-driving car has been designed to deliver passengers safely to their destination with minimal human intervention, the self-driving network, as part of autonomous datacenter infrastructure, eventually will achieve similar outcomes in areas such as network provisioning, management, and troubleshooting — delivering applications and data, dynamically creating and altering network paths, and providing security enforcement with minimal need for operator intervention,” Casemore stated.

While IBN technologies are relatively young, Gartner says by 2020, more than 1,000 large enterprises will use intent-based networking systems in production, up from less than 15 in the second quarter of 2018.

### **How does SDN help customers with security?**

SDN enables a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low-security segment that does not touch any sensitive information. Another segment could have much more fine-grained remote access control, with software-based [firewall][10] and encryption policies that allow sensitive data to traverse it.

“For example, if a customer has an IoT group it doesn’t feel is all that mature with regards to security, via the SDN controller you can segment that group off away from the critical high-value corporate traffic,” Capuano stated. “SDN users can roll out security policies across the network from the data center to the edge and if you do all of this on top of white boxes, deployments can be 30 – 60 percent cheaper than traditional gear.”

The ability to look at a set of workloads and see if they match a given security policy is a key benefit of SDN, especially as data is distributed, said Thomas Scheibe, vice president of product management for Cisco’s Nexus and ACI product lines.

"The ability to deploy a whitelist security model like we do with ACI [Application Centric Infrastructure] that lets only specific entities access explicit resources across your network fabric is another key security element SDN enables," Scheibe said.

A growing number of SDN platforms now support [microsegmentation][11], according to Casemore.

“In fact, micro-segmentation has developed as a notable use case for SDN. As SDN platforms are extended to support multicloud environments, they will be used to mitigate the inherent complexity of establishing and maintaining consistent network and security policies across hybrid IT landscapes,” Casemore said.

### **What is SDN’s role in cloud computing?**

SDN’s role in the move toward [private cloud][12] and [hybrid cloud][13] adoption seems a natural fit. In fact, big SDN players such as Cisco, Juniper and VMware have all made moves to tie together enterprise data center and cloud worlds.

Cisco's ACI Anywhere package would, for example, let policies configured through Cisco's SDN APIC (Application Policy Infrastructure Controller) use native APIs offered by a public-cloud provider to orchestrate changes within both the private and public cloud environments, Cisco said.

“As organizations look to scale their hybrid cloud environments, it will be critical to leverage solutions that help improve productivity and processes,” said [Bob Laliberte][14], a senior analyst with Enterprise Strategy Group, in a recent [Network World article][15]. “The ability to leverage the same solution, like Cisco’s ACI, in your own private-cloud environment as well as across multiple public clouds will enable organizations to successfully scale their cloud environments.”

Growth of public and private clouds and enterprises' embrace of distributed multicloud application environments will have an ongoing and significant impact on data center SDN, representing both a challenge and an opportunity for vendors, said IDC’s Casemore.

“Agility is a key attribute of digital transformation, and enterprises will adopt architectures, infrastructures, and technologies that provide for agile deployment, provisioning, and ongoing operational management. In a datacenter networking context, the imperative of digital transformation drives adoption of extensive network automation, including SDN,” Casemore said.

### Where does SD-WAN fit in?

The software-defined wide area network ([SD-WAN][16]) is a natural application of SDN that extends the technology over a WAN. While the SDN architecture is typically the underpinning in a data center or campus, SD-WAN takes it a step further.

At its most basic, SD-WAN lets companies aggregate a variety of network connections – including MPLS, 4G LTE and DSL – into a branch or network edge location and have a software management platform that can turn up new sites, prioritize traffic and set security policies.

SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.

[SD-WAN][17] lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.

"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Anand Oswal, senior vice president of engineering in Cisco’s Enterprise Networking Business, said in a Network World [article][18] earlier this year.

It's a profoundly hot market with tons of players including [Cisco][19], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa.

IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/what-is-sdn_2_where-is-it-going_arrows_fork-in-the-road-100793314-large.jpg
[2]: https://www.networkworld.com/article/2202144/data-center-faq-what-is-openflow-and-why-is-it-needed.html
[3]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.idc.com/getdoc.jsp?containerId=US43862418
[6]: https://www.networkworld.com/article/3192318/pluribus-recharges-expands-software-defined-network-platform.html
[7]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[8]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[9]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html
[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[11]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[13]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
[14]: https://www.linkedin.com/in/boblaliberte90/
[15]: https://www.networkworld.com/article/3336075/cisco-serves-up-flexible-data-center-options.html
[16]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[17]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[18]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
[19]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html
@ -1,69 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Clearing up confusion between edge and cloud)
[#]: via: (https://www.networkworld.com/article/3389364/clearing-up-confusion-between-edge-and-cloud.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

Clearing up confusion between edge and cloud
======
The benefits of edge computing are not just hype; however, that doesn’t mean you should throw cloud computing initiatives to the wind.
![iStock][1]

Edge computing and cloud computing are sometimes discussed as if they’re mutually exclusive approaches to network infrastructure. While they may function in different ways, utilizing one does not preclude the use of the other.
|
||||
|
||||
Indeed, [Futurum Research][2] found that, among companies that have deployed edge projects, only 15% intend to separate these efforts from their cloud computing initiatives — largely for security or compartmentalization reasons.
|
||||
|
||||
So then, what’s the difference, and how do edge and cloud work together?
|
||||
|
||||
**Location, location, location**
|
||||
|
||||
Moving data and processing to the cloud, as opposed to on-premises data centers, has enabled the business to move faster, more efficiently, less expensively — and in many cases, more securely.
|
||||
|
||||
Yet cloud computing is not without challenges, particularly:
|
||||
|
||||
* Users will abandon a graphics-heavy website if it doesn’t load quickly. So, imagine the lag for compute-heavy processing associated artificial intelligence or machine learning functions.
|
||||
|
||||
* The strength of network connectivity is crucial for large data sets. As enterprises increasingly generate data, particularly with the adoption of Internet of Things (IoT), traditional cloud connections will be insufficient.
|
||||
|
||||
|
||||
|
||||
|
||||
To make up for the lack of speed and connectivity with cloud, processing for mission-critical applications will need to occur closer to the data source. Maybe that’s a robot on the factory floor, digital signage at a retail store, or an MRI machine in a hospital. That’s edge computing, which reduces the distance the data must travel and thereby boosts the performance and reliability of applications and services.
|
||||
|
||||
**One doesn’t supersede the other**
|
||||
|
||||
That said, the benefits gained by edge computing don’t negate the need for cloud. In many cases, IT will now become a decision-maker in terms of best usage for each. For example, edge might make sense for devices running processing-power-hungry apps such as IoT, artificial intelligence, and machine learning. And cloud will work for apps where time isn’t necessarily of the essence, like inventory or big-data projects.
|
||||
|
||||
> “By being able to triage the types of data processing on the edge versus that heading to the cloud, we can keep both systems running smoothly – keeping our customers and employees safe and happy,” [writes Daniel Newman][3], principal analyst for Futurum Research.
|
||||
|
||||
And in reality, edge will require cloud. “To enable digital transformation, you have to build out the edge computing side and connect it with the cloud,” [Tony Antoun][4], senior vice president of edge and digital at GE Digital, told _Automation World_. “It’s a journey from the edge to the cloud and back, and the cycle keeps continuing. You need both to enrich and boost the business and take advantage of different points within this virtual lifecycle.”
|
||||
|
||||
**Ensuring resiliency of cloud and edge**
|
||||
|
||||
Both edge and cloud computing require careful consideration of the underlying processing power. Connectivity and availability, no matter the application, are always critical measures.
|
||||
|
||||
But especially for the edge, it will be important to have a resilient architecture. Companies should focus on ensuring security, redundancy, connectivity, and remote management capabilities.
|
||||
|
||||
Discover how your edge and cloud computing environments can coexist at [APC.com][5].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3389364/clearing-up-confusion-between-edge-and-cloud.html#tk.rss_all
|
||||
|
||||
作者:[Anne Taylor][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Anne-Taylor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/04/istock-612507606-100793995-large.jpg
|
||||
[2]: https://futurumresearch.com/edge-computing-from-edge-to-enterprise/
|
||||
[3]: https://futurumresearch.com/edge-computing-data-centers/
|
||||
[4]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
|
||||
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
|
@ -1,58 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Startup MemVerge combines DRAM and Optane into massive memory pool)
|
||||
[#]: via: (https://www.networkworld.com/article/3389358/startup-memverge-combines-dram-and-optane-into-massive-memory-pool.html#tk.rss_all)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
Startup MemVerge combines DRAM and Optane into massive memory pool
|
||||
======
|
||||
MemVerge bridges two technologies that are already a bridge.
|
||||
![monsitj / Getty Images][1]
|
||||
|
||||
A startup called MemVerge has announced software to combine regular DRAM with Intel’s Optane DIMM persistent memory into a single clustered storage pool, without requiring any changes to applications.
|
||||
|
||||
MemVerge has been working with Intel on developing this new hardware platform for close to two years. It offers what it calls a Memory-Converged Infrastructure (MCI) to allow existing apps to use Optane DC persistent memory, and it is architected to integrate seamlessly with existing applications.
|
||||
|
||||
**[ Read also:[Mass data fragmentation requires a storage rethink][2] ]**
|
||||
|
||||
Optane memory is designed to sit between high-speed memory and [solid-state drives][3] (SSDs) and acts as a cache for the SSD, since it has speed comparable to DRAM but SSD persistence. With Intel’s new Xeon Scalable processors, this can make up to 4.5TB of memory available to a processor.
|
||||
|
||||
Optane runs in one of two modes: Memory Mode and App Direct Mode. In Memory Mode, the Optane memory functions like regular memory and is not persistent. In App Direct Mode, it functions as the SSD cache but apps don’t natively support it. They need to be tweaked to function properly in Optane memory.
|
||||
|
||||
As it was explained to me, apps aren’t designed for persistent memory, where the data is already in memory at power-up rather than having to be loaded from storage. So the app has to know that memory doesn’t go away and that it doesn’t need to shuffle data back and forth between storage and memory. That’s why apps don’t natively work with persistent memory.
|
||||
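As a rough analogy in ordinary Python (not MemVerge's or Intel's actual API), persistence-aware code treats a memory-mapped region as its live data structure and has to decide when writes become durable, rather than loading and saving files:

```
import mmap, os

# Toy persistence-aware code: the counter lives in a memory-mapped file and
# survives restarts, so there is no separate "load from storage" step.
path = "counter.bin"
if not os.path.exists(path):
    with open(path, "wb") as f:
        f.write(b"\x00" * 8)

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 8)
    count = int.from_bytes(mem[:8], "little")
    mem[:8] = (count + 1).to_bytes(8, "little")
    mem.flush()   # the app must know when to make its writes durable
    mem.close()
print("run number:", count + 1)
```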
|
||||
### Why didn't Intel think of this?
|
||||
|
||||
All of which raises a question I can’t get answered, at least not immediately: Why didn’t Intel think of this when it created Optane in the first place?
|
||||
|
||||
MemVerge has what it calls Distributed Memory Objects (DMO) hypervisor technology to provide a logical convergence layer to run data-intensive workloads at memory speed with guaranteed data consistency across multiple systems. This allows Optane memory to process and derive insights from the enormous amounts of data in real time.
|
||||
|
||||
That’s because MemVerge’s technology makes random access as fast as sequential access. Normally, random access is slower than sequential access because of all the jumping around it involves, compared with reading one sequential file. But MemVerge can handle many small files as fast as it handles one large file.
|
||||
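The usual gap is easy to see with a toy benchmark like the one below, which times sequential versus random 4KB reads of the same file on an ordinary disk; it illustrates the general effect only, not MemVerge's numbers:

```
import os, random, time

# Toy benchmark: time sequential vs. random 4KB reads of the same 64MB file.
path = "bench.dat"
size, chunk = 64 * 1024 * 1024, 4096
with open(path, "wb") as f:
    f.write(os.urandom(size))

def timed_reads(offsets):
    with open(path, "rb") as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(chunk)
    return time.perf_counter() - start

sequential = [i * chunk for i in range(size // chunk)]
shuffled = sequential[:]
random.shuffle(shuffled)
print("sequential:", round(timed_reads(sequential), 3), "s")
print("random:    ", round(timed_reads(shuffled), 3), "s")
os.remove(path)
```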
|
||||
MemVerge itself is actually software, with a single API for both DRAM and Optane. It’s also available via a hyperconverged server appliance that comes with 2 Cascade Lake processors, up to 512 GB DRAM, 6TB of Optane memory, and 360TB of NVMe physical storage capacity.
|
||||
|
||||
However, all of this is still vapor. MemVerge doesn’t expect to ship a beta product until at least June.
|
||||
|
||||
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3389358/startup-memverge-combines-dram-and-optane-into-massive-memory-pool.html#tk.rss_all
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-951389152_3x2-100787358-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3323580/mass-data-fragmentation-requires-a-storage-rethink.html
|
||||
[3]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
|
||||
[4]: https://www.facebook.com/NetworkWorld/
|
||||
[5]: https://www.linkedin.com/company/network-world
|
@ -1,119 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Want to know the future of IoT? Ask the developers!)
|
||||
[#]: via: (https://www.networkworld.com/article/3389877/want-to-the-know-future-of-iot-ask-the-developers.html#tk.rss_all)
|
||||
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
|
||||
|
||||
Want to know the future of IoT? Ask the developers!
|
||||
======
|
||||
A new survey of IoT developers reveals that connectivity, performance, and standards are growing areas of concern as IoT projects hit production.
|
||||
![Avgust01 / Getty Images][1]
|
||||
|
||||
It may be a cliché that software developers rule the world, but if you want to know the future of an important technology, it pays to look at what the developers are doing. With that in mind, there are some real, on-the-ground insights for the entire internet of things (IoT) community to be gained in a new [survey of more than 1,700 IoT developers][2] (pdf) conducted by the [Eclipse Foundation][3].
|
||||
|
||||
### IoT connectivity concerns
|
||||
|
||||
Perhaps not surprisingly, security topped the list of concerns, easily outpacing other IoT worries. But that's where things begin to get interesting. More than a fifth (21%) of IoT developers cited connectivity as a challenge, followed by data collection and analysis (19%), performance (18%), privacy (18%), and standards (16%).
|
||||
|
||||
Connectivity rose to second place after being the number three IoT concern for developers last year. Worries over security and data collection and analysis, meanwhile, actually declined slightly year over year. (Concerns over performance, privacy, and standards also increased significantly from last year.)
|
||||
|
||||
**[ Learn more:[Download a PDF bundle of five essential articles about IoT in the enterprise][4] ]**
|
||||
|
||||
“If you look at the list of developers’ top concerns with IoT in the survey,” said [Mike Milinkovich][5], executive director of the Eclipse Foundation via email, “I think connectivity, performance, and standards stand out — those are speaking to the fact that the IoT projects are getting real, that they’re getting out of sandboxes and into production.”
|
||||
|
||||
“With connectivity in IoT,” Milinkovich continued, “everything seems straightforward until you have a sensor in a corner somewhere — narrowband or broadband — and physical constraints make it hard to connect."
|
||||
|
||||
He also cited a proliferation of incompatible technologies that is driving developer concerns over connectivity.
|
||||
|
||||
![][6]
|
||||
|
||||
### IoT standards and interoperability
|
||||
|
||||
Milinkovich also addressed one of [my personal IoT bugaboos: interoperability][7]. “Standards is a proxy for interoperability” among products from different vendors, he explained, which is an “elusive goal” in industrial IoT (IIoT).
|
||||
|
||||
**[[Learn Java from beginning concepts to advanced design patterns in this comprehensive 12-part course!][8] ]**
|
||||
|
||||
“IIoT is about breaking down the proprietary silos and re-tooling the infrastructure that’s been in our factories and logistics for many years using OSS standards and implementations — standard sets of protocols as opposed to vendor-specific protocols,” he said.
|
||||
|
||||
That becomes a big issue when you’re deploying applications in the field and different manufacturers are using different protocols or non-standard extensions to existing protocols and the machines can’t talk to each other.
|
||||
|
||||
**[ Also read:[Interoperability is the key to IoT success][7] ]**
|
||||
|
||||
“This ties back to the requirement of not just having open standards, but more robust implementations of those standards in open source stacks,” Milinkovich said. “To keep maturing, the market needs not just standards, but out-of-the-box interoperability between devices.”
|
||||
|
||||
“Performance is another production-grade concern,” he said. “When you’re in development, you think you know the bottlenecks, but then you discover the real-world issues when you push to production.”
|
||||
|
||||
### Cloudy developments for IoT
|
||||
|
||||
The survey also revealed that in some ways, IoT is very much aligned with the larger technology community. For example, IoT use of public and hybrid cloud architectures continues to grow. Amazon Web Services (AWS) (34%), Microsoft Azure (23%), and Google Cloud Platform (20%) are the leading IoT cloud providers, just as they are throughout the industry. If anything, AWS’ lead may be smaller in the IoT space than it is in other areas, though reliable cloud-provider market share figures are notoriously hard to come by.
|
||||
|
||||
But Milinkovich sees industrial IoT as “a massive opportunity for hybrid cloud” because many industrial IoT users are very concerned about minimizing latency with their factory data, what he calls “their gold.” He sees factories moving towards hybrid cloud environments, leveraging “modern infrastructure technology like Kubernetes, and building around open protocols like HTTP and MQTT while getting rid of the older proprietary protocols.”
|
||||
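As a hedged illustration of those open protocols, publishing a sensor reading over MQTT takes only a few lines with the Eclipse Paho client; the broker and topic here are placeholders, and the API shown is paho-mqtt 1.x:

```
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # placeholder broker address
# Any MQTT-speaking device or service can consume this, regardless of vendor.
client.publish("factory/line1/temperature", payload="21.5", qos=1)
client.disconnect()
```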
|
||||
### How IoT development is different
|
||||
|
||||
In some ways, the IoT development world doesn’t seem much different from wider software development. For example, the top IoT programming languages mirror [the popularity of those languages][9] overall, with C and Java ruling the roost. (C led the way on constrained devices, while Java was the top choice for gateway and edge nodes, as well as the IoT cloud.)
|
||||
|
||||
![][10]
|
||||
|
||||
But Milinkovich noted that when developing for embedded or constrained devices, the programmer’s interface to a device could be through any number of esoteric hardware connectors.
|
||||
|
||||
“You’re doing development using emulators and simulators, and it’s an inherently different and more complex interaction between your dev environment and the target for your application,” he said. “Sometimes hardware and software are developed in tandem, which makes it even more complicated.”
|
||||
|
||||
For example, he explained, building an IoT solution may bring in web developers working on front ends using JavaScript and Angular, while backend cloud developers control cloud infrastructure and embedded developers focus on building software to run on constrained devices.
|
||||
|
||||
No wonder IoT developers have so many things to worry about.
|
||||
|
||||
**More about IoT:**
|
||||
|
||||
* [What is the IoT? How the internet of things works][11]
|
||||
* [What is edge computing and how it’s changing the network][12]
|
||||
* [Most powerful Internet of Things companies][13]
|
||||
* [10 Hot IoT startups to watch][14]
|
||||
* [The 6 ways to make money in IoT][15]
|
||||
* [What is digital twin technology? [and why it matters]][16]
|
||||
* [Blockchain, service-centric networking key to IoT success][17]
|
||||
* [Getting grounded in IoT networking and security][4]
|
||||
* [Building IoT-ready networks must become a priority][18]
|
||||
* [What is the Industrial IoT? [And why the stakes are so high]][19]
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3389877/want-to-the-know-future-of-iot-ask-the-developers.html#tk.rss_all
|
||||
|
||||
作者:[Fredric Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Fredric-Paul/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_mobile_connections_by_avgust01_gettyimages-1055659210_2400x1600-100788447-large.jpg
|
||||
[2]: https://drive.google.com/file/d/17WEobD5Etfw5JnoKC1g4IME_XCtPNGGc/view
|
||||
[3]: https://www.eclipse.org/
|
||||
[4]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
|
||||
[5]: https://blogs.eclipse.org/post/mike-milinkovich/measuring-industrial-iot%E2%80%99s-evolution
|
||||
[6]: https://images.idgesg.net/images/article/2019/04/top-developer-concerns-2019-eclipse-foundation-100793974-large.jpg
|
||||
[7]: https://www.networkworld.com/article/3204529/interoperability-is-the-key-to-iot-success.html
|
||||
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fjava
|
||||
[9]: https://blog.newrelic.com/technology/popular-programming-languages-2018/
|
||||
[10]: https://images.idgesg.net/images/article/2019/04/top-iot-programming-languages-eclipse-foundation-100793973-large.jpg
|
||||
[11]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
|
||||
[12]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
|
||||
[13]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
|
||||
[14]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
|
||||
[15]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
|
||||
[16]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
|
||||
[17]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
|
||||
[18]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
|
||||
[19]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
|
||||
[20]: https://www.facebook.com/NetworkWorld/
|
||||
[21]: https://www.linkedin.com/company/network-world
|
@ -1,64 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: ('Fiber-in-air' 5G network research gets funding)
|
||||
[#]: via: (https://www.networkworld.com/article/3389881/extreme-5g-network-research-gets-funding.html#tk.rss_all)
|
||||
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
|
||||
|
||||
'Fiber-in-air' 5G network research gets funding
|
||||
======
|
||||
A consortium of tech companies and universities plan to aggressively investigate the exploitation of D-Band to develop a new variant of 5G infrastructure.
|
||||
![Peshkova / Getty Images][1]
|
||||
|
||||
Wireless transmission at data rates of around 45Gbps could one day be commonplace, some engineers say. “Fiber-in-air” is how the latest variant of 5G infrastructure is being described. To get there, a U.K.-funded consortium of chip makers, universities, and others intends to aggressively investigate the exploitation of D-Band. That part of the radio spectrum sits at 151-174.8 GHz, in millimeter wavelengths (mm-wave), and hasn’t been used before.
|
||||
|
||||
The researchers intend to do it by riffing on a now roughly 70-year-old gun-like electron-sending device that can trace its roots back through the annals of radio history: the traveling wave tube, or TWT, an electron-gun-and-magnet combo that was used in the development of television and still brings space images back to Earth.
|
||||
|
||||
**[ Also read:[The time of 5G is almost here][2] ]**
|
||||
|
||||
D-Band, the spectrum the researchers want to use, has the advantage that it’s wide, so theoretically it should be good for fast, copious data rates. The problem with it though, and the reason it hasn’t thus far been used, is that it’s subject to monkey-wrenching from atmospheric conditions such as rain, explains IQE, a semiconductor wafer and materials producer involved in the project, in a [press release][3]. The team says attenuation is fixable, though. Their solution is the now-aging TWTs.
|
||||
|
||||
The group, which includes BT, Filtronic, Glasgow University, Intel, Nokia Bell Labs, Optocap, and Teledyne e2v, has secured funding of the equivalent of $1.12 million USD from the U.K.’s [Engineering and Physical Sciences Research Council (EPSRC)][4]. That’s the principal public funding body for engineering science research there.
|
||||
|
||||
### Tapping the power of TWTs
|
||||
|
||||
The DLINK system, as the team calls it, will use a high-power vacuum TWT with a special, newly developed tunneling diode and a modulator. Two bands of 10 GHz each will deliver the throughput, [explains Lancaster University on its website][5]. The tubes are, in fact, special amplifiers that produce 10 watts, which is 10 times what an equivalent solid-state solution would likely produce at the same spot in the band, they say. Energy is basically transferred from the electron beam to an electric field generated by the input signal.
|
||||
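As a rough sanity check on the 45Gbps figure (my own arithmetic, not the consortium's published link budget), two 10 GHz bands need only a modest spectral efficiency:

```
R \approx B \cdot \eta = 20\,\mathrm{GHz} \times 2.25\,\mathrm{bit/s/Hz} = 45\,\mathrm{Gbit/s}
```

Higher-order modulation over clean D-Band links would clear that bar comfortably; rain-induced attenuation is what threatens it.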
|
||||
Despite TWTs being around for eons, “no D-band TWTs are available in the market.” The development of one is key to these fiber-in-air speeds, the researchers say.
|
||||
|
||||
The links will offer “unprecedented data rate and transmission distance,” IQE writes.
|
||||
|
||||
The TWT device, although used extensively in space wireless communications since its invention in the 1950s, is overlooked as a significant contributor to global communications systems, says a group of French researchers working separately from this project, who argue that TWTs should be given more recognition.
|
||||
|
||||
TWTs are “the unsung heroes of space exploration,” the Aix-Marseille Université researchers say in [an article on publisher Springer’s website][6]. Springer is promoting the group's 2019-published [paper][7] in the European Physical Journal H, in which they delve into the history of the simple electron gun and magnet device.
|
||||
|
||||
“Its role in the history of wireless communications and in the space conquest is significant, but largely ignored,” they write in their paper.
|
||||
|
||||
They will be pleased to hear that it may not be going away anytime soon.
|
||||
|
||||
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3389881/extreme-5g-network-research-gets-funding.html#tk.rss_all
|
||||
|
||||
作者:[Patrick Nelson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Patrick-Nelson/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/abstract_data_coding_matrix_structure_network_connections_by_peshkova_gettyimages-897683944_2400x1600-100788487-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
|
||||
[3]: https://www.iqep.com/media/2019/03/iqe-partners-in-key-wireless-communications-project-for-5g-infrastructure-(1)/
|
||||
[4]: https://epsrc.ukri.org/
|
||||
[5]: http://wp.lancs.ac.uk/dlink/
|
||||
[6]: https://www.springer.com/gp/about-springer/media/research-news/all-english-research-news/traveling-wave-tubes--the-unsung-heroes-of-space-exploration/16578434
|
||||
[7]: https://link.springer.com/article/10.1140%2Fepjh%2Fe2018-90023-1
|
||||
[8]: https://www.facebook.com/NetworkWorld/
|
||||
[9]: https://www.linkedin.com/company/network-world
|
@ -1,186 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Open architecture and open source – The new wave for SD-WAN?)
|
||||
[#]: via: (https://www.networkworld.com/article/3390151/open-architecture-and-open-source-the-new-wave-for-sd-wan.html#tk.rss_all)
|
||||
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
|
||||
|
||||
Open architecture and open source – The new wave for SD-WAN?
|
||||
======
|
||||
As networking continues to evolve, you certainly don't want to break out a forklift every time new technologies are introduced. Open architecture would allow you to replace the components of a system, and give you more flexibility to control your own networking destiny.
|
||||
![opensource.com \(CC BY-SA 2.0\)][1]
|
||||
|
||||
I recently shared my thoughts about the [role of open source in networking][2]. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.
|
||||
|
||||
The first wave signifies that networking is moving to the software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially supported in the SD-WAN market rush.
|
||||
|
||||
Seemingly, we are beginning to see less investment in hardware unless there is a specific segment that needs to be resolved. But generally, software-based platforms are preferred as they bring many advantages. It is evident that there has been a technology shift. We have moved networking from hardware to software and this shift has positive effects for users, enterprises and service providers.
|
||||
|
||||
**[ Don’t miss[customer reviews of top remote access tools][3] and see [the most powerful IoT companies][4] . | Get daily insights by [signing up for Network World newsletters][5]. ]**
|
||||
|
||||
### Performance (hardware vs software)
|
||||
|
||||
There has always been a misconception that hardware-based platforms are faster due to the hardware acceleration that exists in the network interface controller (NIC). However, this is a mistaken belief; nowadays, software platforms can reach performance levels similar to those of hardware-based platforms.
|
||||
|
||||
Initially, people viewed hardware as a performance-based vehicle but today this does not hold true anymore. Even the bigger vendors are switching to software-based platforms. We are beginning to see this almost everywhere in networking.
|
||||
|
||||
### SD-WAN and open source
|
||||
|
||||
SD-WAN really took off quite rapidly due to the availability of open source. It enabled the vendors to leverage all the available open source components and then create their solution on top. By and large, SD-WAN vendors used the open source as the foundation of their solution and then added additional proprietary code over the baseline.
|
||||
|
||||
However, even when using various open source components, there is still a lot of work left for these vendors to reach a complete SD-WAN solution, or even a baseline of centrally managed routers with flexible control of the network architecture, not to mention the complete SD-WAN feature set.
|
||||
|
||||
The result of the work done by these vendors is still closed products, so the fact that they are using open source components is merely a time-to-market advantage, not a big benefit to the end users (enterprises) or to service providers launching hosted services with these products. Those customers are still limited in flexibility, and vendor diversity is only achieved through a multi-vendor strategy, which in practice means launching multiple siloed services, each based on a different SD-WAN vendor, without real selection of the technologies that make up each of the SD-WAN services they launch.
|
||||
|
||||
I recently came across a company called [Flexiwan][6], whose goal is to fundamentally change this limitation of SD-WAN by offering a fully open source solution that, as they say, “includes integration points in the core of the system that allow for 3rd party logic to be integrated in an efficient way.” They call this an open architecture, which, in practical terms, means a service provider or enterprise can integrate its own application logic into the core of the SD-WAN router, or select best-of-breed sub-technologies or applications instead of having these dictated by the vendor. I believe this opens the possibility of another wave of SD-WAN that is fully open source, with an open architecture.
|
||||
|
||||
This type of architecture brings many benefits to users, enterprises and service providers, especially when compared to the typical lock-in of bigger vendors, such as Cisco and VMware.
|
||||
|
||||
With an open source open architecture, it’s easier to control the versions and extend more flexibility by using the software offered by different providers. It offers the ability to switch providers, not to mention being easier to install and upgrade the versions.
|
||||
|
||||
### SD-WAN, open source and open architecture
|
||||
|
||||
An SD-WAN solution that is an open source with open architecture provides a modular and decomposed type of SD-WAN. This enables the selection of elements to provide a solution.
|
||||
|
||||
For example, enterprises and service providers can select the best-of-breed technologies from independent vendors, such as deep packet inspection (DPI), security, wide area network (WAN) optimization, session border controller (SBC), VoIP and other traffic specific optimization logic.
|
||||
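To make the modular idea concrete, here is a deliberately simplified Python sketch of such a pluggable packet path; no real SD-WAN exposes exactly this interface, and the DPI stand-in at the end is invented for illustration:

```
# Conceptual sketch of an open SD-WAN packet path: each best-of-breed
# component (DPI, security, WAN optimization, ...) plugs into the core.
from typing import Callable, Dict, List

Packet = Dict[str, object]           # stand-in for a parsed packet
Plugin = Callable[[Packet], Packet]  # third-party logic, one hop in the path

PIPELINE: List[Plugin] = []

def register(plugin: Plugin) -> None:
    PIPELINE.append(plugin)

def forward(packet: Packet) -> Packet:
    for plugin in PIPELINE:          # the core runs every registered plugin
        packet = plugin(packet)
    return packet

# A provider's own DPI stand-in, chosen rather than vendor-dictated:
register(lambda p: {**p, "app": "voip" if p.get("port") == 5060 else "other"})
print(forward({"port": 5060, "payload": b"INVITE"}))
```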
|
||||
Some SD-WAN vendors define an open architecture in such a way that they just have a set of APIs, for example, northbound APIs, to enable one to build management or do service chaining. This is one approach to an open architecture but in fact, it’s pretty limited since it does not bring the full benefits that an open architecture should offer.
|
||||
|
||||
### Open source and the fear of hacking
|
||||
|
||||
However, when I think about open source and an open architecture for SD-WAN, the first thing that comes to mind is bad actors. What about the code? If it’s open source, bad actors can find vulnerabilities, right?
|
||||
|
||||
The community is a powerful force and will fix any vulnerability. Also, with open source, the vendor responsible for the open source component will fix the vulnerability much faster than with a closed solution, where you are unaware of the vulnerability until a fix is released.
|
||||
|
||||
### The SD-WAN evolution
|
||||
|
||||
Before we go any further, let’s examine the history of SD-WAN and its origins, how we used to connect from the wide area network (WAN) to other branches via private or public links.
|
||||
|
||||
SD-WAN offers the ability to connect your organization to a WAN. This could be connecting to the Internet or to other branches, to optimally deliver applications with a good user experience. Essentially, SD-WAN allows organizations to design their network architecture dynamically by means of software.
|
||||
|
||||
### In the beginning, there was IPSec
|
||||
|
||||
It started with IPSec. Around two decades back, in 1995, the popular design was a mesh architecture. As a result, we had a lot of point-to-point connections. Mesh architectures with IPSec VPNs are tiresome to manage, as there is a potential for hundreds of virtual private network (VPN) configurations.
|
||||
|
||||
Originally, IPSec started with two fundamental goals. The first was a tunneling protocol that would allow organizations to connect users or other networks to their private network. This enabled enterprises to connect to networks that they did not have a direct route to.
|
||||
|
||||
The second goal of IPSec was to encrypt packets at the network layer and therefore securing the data in motion. Let’s face it: at that time, IPSec was terrible for complicated multi-site interconnectivity and high availability designs. If left to its defaults, IPSec is best suited for static designs.
|
||||
|
||||
This was the reason we had to step into the next era, where additional functionality was added to IPSec. For example, IPSec had issues supporting routing protocols that use multicast. To overcome this, IPSec over generic routing encapsulation (GRE) was introduced.
|
||||
|
||||
### The next era of SD-WAN
|
||||
|
||||
During the journey to 2008, one could argue that the next era of WAN connectivity was when additional features were added to IPSec. At this time IPSec became known as a “Swiss army knife.” It could do many things but not anything really well.
|
||||
|
||||
Back then, you could create multiple links, but IPSec failed to steer traffic over those links other than by simple routing. You needed to add a routing protocol. For advanced, agile architectures, IPSec had to be combined with other, higher-level protocols.
|
||||
|
||||
Features were then added based on measuring link quality: features to analyze delay and packet drops and to select alternative links for your applications. We began to add tunnels and multiple links, and to select traffic based on priority, not just on routing.
|
||||
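In code terms, the link-quality logic of that era boils down to a scorer like the toy Python below; the links, metrics, and weights are invented for illustration:

```
# Toy link selector: prefer the link with the best blend of delay and loss.
links = {
    "mpls":   {"delay_ms": 30, "loss_pct": 0.1},
    "inet-a": {"delay_ms": 55, "loss_pct": 0.5},
    "inet-b": {"delay_ms": 80, "loss_pct": 2.0},
}

def score(metrics: dict) -> float:
    # Lower is better; the weights are arbitrary illustrative choices.
    return metrics["delay_ms"] + 20 * metrics["loss_pct"]

best = min(links, key=lambda name: score(links[name]))
print("send voice traffic over:", best)
```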
|
||||
The most common way to tunnel was IPSec over GRE: a GRE tunnel that enables you to send any protocol end-to-end, with IPSec providing the encryption. All this functionality was added to create dynamic tunnels over IPSec and to optimize the IPSec tunnels.
|
||||
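For reference, the GRE half of that combination is just a few iproute2 commands on Linux, wrapped here in Python; the addresses are RFC 5737 documentation examples, and the IPSec policy that actually encrypts the GRE traffic (for instance via strongSwan) would be configured separately:

```
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)  # needs root privileges

# Plain GRE tunnel between two sites (documentation-range addresses).
sh("ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.1 ttl 255")
sh("ip link set gre1 up")
sh("ip addr add 10.0.0.1/30 dev gre1")
# An IPSec transport-mode policy matching IP protocol 47 (GRE) would then
# encrypt everything the tunnel carries, routing protocols included.
```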
|
||||
This was a move in the right direction, but it was still complex. It was not centrally managed and was error-prone with complex configurations that were unable to manage large deployments. IPSec had far too many limitations, so in the mid-2000s early SD-WAN vendors started cropping up. Some of these vendors enabled the enterprises to aggregate many separate digital subscriber lines (DSL) links into one faster logical link. At the same time, others added time stamps and/or sequence numbers to packets to improve the network performance and security when running over best effort (internet) links.
|
||||
|
||||
International WAN connectivity was a popular focus, since the cost delta between the Internet and private multiprotocol label switching (MPLS) was more than 10x. Primarily, enterprises wanted the network performance and security of MPLS without having to pay a premium for it.
|
||||
|
||||
Most of these solutions sat in-front or behind a traditional router from companies like Cisco. Evidently, just like WAN Optimization vendors, these were additional boxes/solutions that enterprises added to their networks.
|
||||
|
||||
### The next era of SD-WAN, circa 2012
|
||||
|
||||
It was somewhere around 2012 that we started to see the big rush to the SD-WAN market. Vendors such as VeloCloud, Viptela and a lot of the other big players in the SD-WAN market kicked off with the objective of claiming some of the early SD-WAN success and going after the edge router market with a full-featured router replacement and management simplicity.
|
||||
|
||||
Open source networking software and other open source components for managing the traffic enabled these early SD-WAN vendors to lay a foundation where a lot of the code base was open source. They would then “glue” it together and add their own additional features.
|
||||
|
||||
Around this time, Intel was driving data plane development kit (DPDK) and advanced encryption standard (AES) instruction set, which enabled that software to run on commodity hardware. The SD-WAN solutions were delivered as closed solutions where each solution used its own set of features. The features and technologies chosen for each vendor were different and not compatible with each other.
|
||||
|
||||
### The recent era of SD-WAN, circa 2017
|
||||
|
||||
A tipping point in 2017 was the gold rush for SD-WAN deployments. Everyone wanted to have SD-WAN as fast as possible.
|
||||
|
||||
The SD-WAN market has taken off, as seen by 50 vendors with competing, proprietary solutions and market growth curves with a CAGR of 100%. There is a trend of big vendors like Cisco, VMware and Oracle acquiring startups to compete in this new market.
|
||||
|
||||
As a result, Cisco, the traditional enterprise market leader in WAN routing solutions, felt threatened, since its IWAN solution, which had been around since 2008, was too complex (a 900-page configuration and operations manual). Besides, its simpler solution based on the Meraki acquisition was not feature-rich enough for large enterprises.
|
||||
|
||||
With its acquisition of Viptela, Cisco currently has 13% of the market share, and for the first time in decades, it is not the market leader. The large cloud vendors, such as Google and Facebook, are utilizing their own technology for routing within their very large private networks.
|
||||
|
||||
At some point between 2012 and 2017, we witnessed service providers adopting SD-WAN. This marked the onset of managed SD-WAN services; service providers wanted to have SD-WAN on the menu for their customers. But there were many limitations in the SD-WAN market, as it was offered as a closed-box solution that gave the service providers limited control.
|
||||
|
||||
At this point surfaced an expectation of change, as service providers and enterprises looked for more control. Customers can get better functionality from a multi-vendor approach than from a single vendor.
|
||||
|
||||
### Don’t forget DIY SD-WAN
|
||||
|
||||
Up to 60% of service providers and enterprises within the USA are now looking at DIY SD-WAN. A DIY SD-WAN solution is not one where many pieces of open source are taken and cast into something; the focus is on a solution that is bought from a vendor but can be self-managed.
|
||||
|
||||
Today, the majority of the market is looking for managed solutions and the upper section that has the expertise wants to be equipped with more control options.
|
||||
|
||||
### SD-WAN vendors attempting everything
|
||||
|
||||
There is a trend that some vendors try to do everything with SD-WAN. As a result, whether you are an enterprise or a service provider, you are locked into a solution that is dictated by the SD-WAN vendor.
|
||||
|
||||
The SD-WAN vendors have made the supplier choice or developed what they think is relevant. Evidently, some vendors are using stacks and software development kits (SDKs) that they purchased, for example, for deep packet inspection (DPI).
|
||||
|
||||
Ultimately, you are locked into a specific solution that the vendor has chosen for you. If you are a service provider, you might disapprove of this limitation and if you are an enterprise with specific expertise, you might want to zoom in for more control.
|
||||
|
||||
### All-in-one security vendors
|
||||
|
||||
Many SD-WAN vendors promote themselves as security companies. But would you prefer to buy a security solution from an SD-WAN vendor or from an experienced security vendor, such as Check Point?
|
||||
|
||||
Both enterprises and service providers want to have a choice, but with an integrated black-box security solution, you don't have one. The more you integrate and throw into the same box, the stronger the vendor lock-in and the weaker the flexibility.
|
||||
|
||||
Essentially, with this approach, you are going for the lowest common denominator instead of the highest. Each of the technologies and services you deploy on your network requires its own expertise, and one vendor cannot be an expert in everything.
|
||||
|
||||
An open architecture leaves room for experts in different areas to join together and add their own specialist functionality.
|
||||
|
||||
### Encrypted traffic
|
||||
|
||||
As a matter of fact, what is not encrypted today will be encrypted tomorrow. The vendor of an application can do intelligent things that the SD-WAN vendor cannot, because it controls both sides of the connection. Hence, if application vendors can put something inside the SD-WAN edge device, they can make smart decisions even when the traffic is encrypted.
|
||||
|
||||
But in the case of traditional SD-WANs, that requires cooperation with the content provider. With an open architecture, you can integrate anything, and nothing prevents the integration. A lot of traffic is encrypted, and encrypted traffic is harder to manage; an open architecture would allow the content providers to manage that traffic more effectively.
|
||||
|
||||
### 2019 and beyond: what is an open architecture?
|
||||
|
||||
Cloud providers and enterprises have discovered that 90% of the user experience and security problems arise due to the network: between where the cloud provider resides and where the end-user consumes the application.
|
||||
|
||||
Therefore, both cloud providers and large enterprise with digital strategies are focusing on building their solutions based on open source stacks. Having a viable open source SD-WAN solution is the next step in the SD-WAN evolution, where it moves to involve the community in the solution. This is similar to what happens with containers and tools.
|
||||
|
||||
Now, since we’re in 2019, are we going to witness a new era of SD-WAN? Are we moving to the open architecture with an open source SD-WAN solution? An open architecture should be the core of the SD-WAN infrastructure, where additional technologies are integrated inside the SD-WAN solution and not only complementary VNFs. There is an interface and native APIs that allow you to integrate logic into the router. This way, the router will be able to intercept and act according to the traffic.
|
||||
|
||||
So, if I’m a service provider and have my own application, I would want to write logic that can communicate with that application. Without an open architecture, service providers can’t really offer differentiation or change the way SD-WAN makes decisions and interacts with their applications’ traffic.
|
||||
|
||||
There is a list of various technologies that you need to be an expert in to be able to integrate. And each one of these technologies can be a company, for example, DPI, VoIP optimization, and network monitoring to name a few. An open architecture will allow you to pick and choose these various elements as per your requirements.
|
||||
|
||||
Networking is going through a lot of changes, and it will continue to evolve with the passage of time. As a result, you wouldn’t want something that forces you to break out a forklift each time new technologies are introduced. Primarily, an open architecture allows you to replace components of the system and add code or elements that handle specific traffic and applications.
|
||||
|
||||
### Open source
|
||||
|
||||
Open source gives you more flexibility to control your own destiny. It offers the ability to select the services that you want applied to your system. It also provides security, in the sense that if something happens to the vendor or there is a vulnerability in the system, you know that you are backed by a community that can fix such misadventures.
|
||||
|
||||
From the perspective of the business model, it makes a more flexible and cost-effective system. Besides, with open source, the total cost of ownership will also be lower.
|
||||
|
||||
**This article is published as part of the IDG Contributor Network.[Want to Join?][7]**
|
||||
|
||||
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3390151/open-architecture-and-open-source-the-new-wave-for-sd-wan.html#tk.rss_all
|
||||
|
||||
作者:[Matt Conran][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Matt-Conran/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.techhive.com/images/article/2017/03/6554314981_7f95641814_o-100714680-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3338143/the-role-of-open-source-in-networking.html
|
||||
[3]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
|
||||
[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
|
||||
[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
|
||||
[6]: https://flexiwan.com/sd-wan-open-source/
|
||||
[7]: /contributor-network/signup.html
|
||||
[8]: https://www.facebook.com/NetworkWorld/
|
||||
[9]: https://www.linkedin.com/company/network-world
|
@ -1,70 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How data storage will shift to blockchain)
|
||||
[#]: via: (https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html#tk.rss_all)
|
||||
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
|
||||
|
||||
How data storage will shift to blockchain
|
||||
======
|
||||
Move over, cloud and traditional in-house enterprise data center storage: distributed storage based on blockchain may be arriving imminently.
|
||||
![Cybrain / Getty Images][1]
|
||||
|
||||
If you thought cloud storage was digging in its heels to become the go-to method for storing data, and at the same time grabbing share from own-server, in-house storage, you may be interested to hear that some think both are on the way out. Instead, organizations will use blockchain-based storage.
|
||||
|
||||
Decentralized blockchain-based file storage will be more secure, will make it harder to lose data, and will be cheaper than anything seen before, say the organizations actively promoting this take on encrypted, distributed technology.
|
||||
|
||||
**[ Read also:[Why blockchain (might be) coming to an IoT implementation near you][2] ]**
|
||||
|
||||
### Storing transactional data in a blockchain
|
||||
|
||||
Chinese company [FileStorm][3], which describes itself in marketing materials as the first [InterPlanetary File System][4] (IPFS) platform on blockchain, says the key to making it all work is to store only the transactional data in blockchain. The actual data files, such as large video files, are distributed in IPFS.
|
||||
|
||||
IPFS is a distributed, peer-to-peer file storage protocol. File parts come from multiple computers all at the same time, supposedly making the storage hardy. FileStorm adds blockchain on top of it for a form of transactional indexing.
|
||||
|
||||
“Blockchain is designed to store transactions forever, and the data can never be altered, thus a trustworthy system is created,” says Raymond Fu, founder of FileStorm and chief product officer of MOAC, the underlying blockchain system used, in a video on the FileStorm website.
|
||||
|
||||
“The blocks are used to store only small transactional data,” he says. You can’t store large files on the chain; those are distributed. Decentralized data storage platforms are needed alongside the decentralized blockchain, he says.
|
||||
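A stripped-down sketch of that split, in Python with only the standard library: the list here stands in for a real chain such as MOAC, and the SHA-256 digest stands in for an IPFS content address; both are illustrative assumptions, not FileStorm's actual format:

```
import hashlib, json, time

chain = []  # stand-in for a real blockchain: holds only small records

def store(file_bytes: bytes) -> str:
    # The bulk data would go to IPFS; only its content address is chained.
    cid = hashlib.sha256(file_bytes).hexdigest()
    prev = (hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode())
            .hexdigest() if chain else None)
    chain.append({"cid": cid, "size": len(file_bytes),
                  "ts": time.time(), "prev": prev})
    return cid

store(b"a large video file, in spirit")
print("on-chain record:", json.dumps(chain[-1], indent=2))
```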
|
||||
YottaChain, another blockchain storage startup project, is coming at the whole thing from a slightly different angle. It claims its non-IPFS system is more secure, partly because it performs deduplication after encryption.
|
||||
|
||||
“Data is 10,000 times more secure than [traditional] centralized storage,” it says on its [website][5]. Deduplication eliminates duplicated or redundant data.
|
||||
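Deduplication after encryption is normally achieved with convergent encryption, where the key is derived from the content itself so that identical plaintexts yield identical ciphertexts. The toy Python below shows the idea; the SHA-256-based keystream is for illustration only, not a production cipher, and nothing here is YottaChain's actual scheme:

```
import hashlib

def convergent_encrypt(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()   # key derived from the content
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):        # toy SHA-256-based keystream
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
    return key, ciphertext

k1, c1 = convergent_encrypt(b"same movie file")
k2, c2 = convergent_encrypt(b"same movie file")
assert c1 == c2  # identical ciphertexts, so the encrypted copies deduplicate
```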
|
||||
### Disrupting data storage
|
||||
|
||||
“Blockchain will disrupt data storage,” [says BlockApps separately][6]. The blockchain backend platform provider says the advantages of this new generation of storage include that decentralizing data provides more security and privacy, in part because it's harder to hack than traditional centralized storage. And because the files are spread piecemeal among nodes, conceivably all over the world, even a participating node cannot view the contents of a complete file, it says.
|
||||
|
||||
Sharding, which is the term for the breaking apart and node-spreading of the actual data, is secured through keys. Markets can award token coins for mining, and coins can be spent to gain storage. Excess storage can even be sold. And cryptocurrencies have been started to “incentivize usage and to create a market for buying and selling decentralized storage,” BlockApps explains.
|
||||
|
||||
The final parts of this new storage mix are that lost files are minimized, because data can be duplicated simply (data sets, for example, can be stored multiple times for error correction), and that costs are reduced due to efficiencies.
|
||||
|
||||
Square Tech (Shenzhen) Co., which makes blockchain file storage nodes, says in its marketing materials that it intends to build service centers globally to monitor its nodes in real time. Interestingly, another area the company has gotten involved in is the internet of things (IoT), and [it says][7] it wants “to unite the technical resources, capital, and human resources of the IoT industry and blockchain.” Perhaps we end up with a form of the internet of storage things?
|
||||
|
||||
“The entire cloud computing industry will be disrupted by blockchain technology in just a few short years,” says BlockApps. Dropbox and Amazon “may even become overpriced and obsolete if they do not find ways to integrate with the advances.”
|
||||
|
||||
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html#tk.rss_all
|
||||
|
||||
作者:[Patrick Nelson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Patrick-Nelson/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/chains_binary_data_blockchain_security_by_cybrain_gettyimages-926677890_2400x1600-100788435-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html
|
||||
[3]: http://filestorm.net/
|
||||
[4]: https://ipfs.io/
|
||||
[5]: https://www.yottachain.io/
|
||||
[6]: https://blockapps.net/blockchain-disrupt-data-storage/
|
||||
[7]: http://www.sikuaikeji.com/
|
||||
[8]: https://www.facebook.com/NetworkWorld/
|
||||
[9]: https://www.linkedin.com/company/network-world
|
@ -1,91 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (No, drone delivery still isn’t ready for prime time)
|
||||
[#]: via: (https://www.networkworld.com/article/3390677/drone-delivery-not-ready-for-prime-time.html#tk.rss_all)
|
||||
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
|
||||
|
||||
No, drone delivery still isn’t ready for prime time
|
||||
======
|
||||
Despite incremental progress and limited regulatory approval in the U.S. and Australia, drone delivery still isn’t a viable option in the vast majority of use cases.
|
||||
![Sorry imKirk \(CC0\)][1]
|
||||
|
||||
April has been a big month for drone delivery. First, [Alphabet’s Wing Aviation drones got approval from Australia’s Civil Aviation Safety Authority (CASA)][2] for public deliveries in the country, and this week [Wing earned Air Carrier Certification from the U.S. Federal Aviation Administration][3]. These two regulatory wins got a lot of people very excited. Finally, the conventional wisdom exulted, drone delivery is actually becoming a reality.
|
||||
|
||||
Not so fast.
|
||||
|
||||
### Drone delivery still in pilot/testing mode
|
||||
|
||||
Despite some small-scale successes and the first signs of regulatory acceptance, drone delivery remains firmly in the carefully controlled pilot/test phase (and yes, I know drones don’t carry pilots).
|
||||
|
||||
**[ Also read:[Coffee drone delivery: Ideas this bad could kill the internet of things][4] ]**
|
||||
|
||||
For example, despite getting FAA approval to begin commercial deliveries, Wing is still working on beginning delivery trials to test the technology and gather information in Virginia later this year.
|
||||
|
||||
But what about that public approval from CASA for deliveries outside Canberra? That’s [a real business][5] now, right?
|
||||
|
||||
Well, yes and no.
|
||||
|
||||
On the “yes” side, the Aussie approval reportedly came after 18 months of tests, 70,000 flights, and more than 3,000 actual deliveries of products from local coffee shops and pharmacies. So, at least some people somewhere in the world are actually getting stuff dropped at their doors by drones.
|
||||
|
||||
In the “no” column, however, goes the fact that the approval covers only about 100 suburban homes, though more are planned to be added “in the coming weeks and months.” More importantly, the approval includes strict limits on when and where the drones can go: no crossing main roads, no nighttime deliveries, and a requirement to stay away from people. And drone-eligible customers need a safety briefing!
|
||||
|
||||
### Safety briefings required for customers
|
||||
|
||||
That still sounds like a small-scale test, not a full-scale commercial roll-out. And while I think drone-safety classes are probably a good idea (and the testing period apparently passed without any injuries), even the perceived _need_ for them is not a great advertisement for rapid expansion of drone deliveries.
|
||||
|
||||
Ironically, though, a bigger issue than protecting people from the drones, perhaps, is protecting the drones from people. Instructions to stay 2 to 5 meters away from folks will help, but as I’ve previously addressed, these things are already seen as attractive nuisances and vandalism targets. Further raising the stakes, many local residents were said to be annoyed at the noise created by the drones. Now imagine those contraptions buzzing right by you all loaded down with steaming hot coffee or delicious ice cream.
|
||||
|
||||
And even with all those caveats, no one is talking about the key factors in making drone deliveries a viable business: How much will those deliveries cost and who will pay? For a while, the desire to explore the opportunity will drive investment, but that won’t last forever. If drone deliveries aren’t cost effective for businesses, they won’t spread very far.
|
||||
|
||||
From the customer perspective, most drone delivery tests are not yet charging for the service. If and when they start carrying fees as well as purchases, the novelty factor will likely entice many shoppers to pony up to get their items airlifted directly to their homes. But that also won’t last. Drone delivery will have to demonstrate that it’s better — faster, cheaper, or more reliable — than the existing alternatives to find its niche.
|
||||
|
||||
### Drone deliveries are fast, commercial roll-out will be slow
|
||||
|
||||
Long term, I have no doubt that drone delivery will eventually become an accepted part of the transportation infrastructure. I don’t necessarily buy into Wing’s prediction of an AU $40 million drone delivery market in Australia coming to pass anytime soon, but real commercial operations seem inevitable.
|
||||
|
||||
It’s just going to be more limited than many proponents claim, and it’s likely to take a lot longer than expected to become mainstream. For example, despite ongoing testing, [Amazon has already missed Jeff Bezos’ 2018 deadline to begin commercial drone deliveries][6], and we haven’t heard much about [Walmart’s drone delivery plans][7] lately. And while ongoing tests by a number of companies, in locations ranging from Iceland and Finland to the U.K. and the U.S., have created a lot of interest, they have not yet translated into widespread availability.
|
||||
|
||||
Apart from the issue of how much consumers really want their stuff delivered by an armada of drones (a [2016 U.S. Post Office study][8] found that 44% of respondents liked the idea, while 34% didn’t, and 37% worried that drone deliveries might not be safe), a lot has to happen before that vision becomes reality.
|
||||
|
||||
At a minimum, successful implementations of full-scale commercial drone delivery will require better planning, better-thought-out business cases, more rugged and efficient drone technology, and significant advances in flight control and autonomous flight. Like drone deliveries themselves, all that stuff is coming; it just hasn’t arrived yet.
|
||||
|
||||
**More about drones and the internet of things:**
|
||||
|
||||
* [Drone defense -- powered by IoT -- is now a thing][9]
|
||||
* [Ideas this bad could kill the Internet of Things][4]
|
||||
* [10 reasons Amazon's drone delivery plan still won't fly][10]
|
||||
* [Amazon’s successful drone delivery test doesn’t really prove anything][11]
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3390677/drone-delivery-not-ready-for-prime-time.html#tk.rss_all
|
||||
|
||||
作者:[Fredric Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Fredric-Paul/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/07/drone_mountains_by_sorry_imkirk_cc0_via_unsplash_1200x800-100763763-large.jpg
|
||||
[2]: https://medium.com/wing-aviation/wing-launches-commercial-air-delivery-service-in-canberra-5da134312474
|
||||
[3]: https://medium.com/wing-aviation/wing-becomes-first-certified-air-carrier-for-drones-in-the-us-43401883f20b
|
||||
[4]: https://www.networkworld.com/article/3301277/ideas-this-bad-could-kill-the-internet-of-things.html
|
||||
[5]: https://wing.com/australia/canberra
|
||||
[6]: https://www.businessinsider.com/jeff-bezos-predicted-amazon-would-be-making-drone-deliveries-by-2018-2018-12?r=US&IR=T
|
||||
[7]: https://www.networkworld.com/article/2999828/walmart-delivery-drone-plans.html
|
||||
[8]: https://www.uspsoig.gov/sites/default/files/document-library-files/2016/RARC_WP-17-001.pdf
|
||||
[9]: https://www.networkworld.com/article/3309413/drone-defense-powered-by-iot-is-now-a-thing.html
|
||||
[10]: https://www.networkworld.com/article/2900317/10-reasons-amazons-drone-delivery-plan-still-wont-fly.html
|
||||
[11]: https://www.networkworld.com/article/3185478/amazons-successful-drone-delivery-test-doesnt-really-prove-anything.html
|
||||
[12]: https://www.facebook.com/NetworkWorld/
|
||||
[13]: https://www.linkedin.com/company/network-world
|
@ -1,412 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Profiling D's Garbage Collection with Bpftrace)
[#]: via: (https://theartofmachinery.com/2019/04/26/bpftrace_d_gc.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)

Profiling D's Garbage Collection with Bpftrace
======

Recently I’ve been playing around with using [`bpftrace`][1] to trace and profile D’s garbage collector. Here are some examples of the cool stuff that’s possible.

### What is `bpftrace`?

It’s a high-level debugging tool based on Linux’s eBPF. “eBPF” stands for “extended Berkeley packet filter”, but that’s just a historical name and doesn’t mean much today. It’s really a virtual machine (like the [JVM][2]) that sits inside the Linux kernel and runs code in a special eBPF instruction set similar to normal machine code. Users are expected to write short programs in high-level languages (including C and others) that get compiled to eBPF and loaded into the kernel on the fly to do interesting things.

As you might guess, eBPF is powerful for instrumenting a running kernel, but it also supports instrumenting user-space programs.

### What you need

First you need a Linux kernel. Sorry BSD, Mac OS and Windows users. (But some of you can use [DTrace][3].)

Also, not just any Linux kernel will work. This stuff is relatively new, so you’ll need a modern kernel with BPF-related features enabled. You might need to use the newest (or even testing) version of a distro. Here’s how to check if your kernel meets the requirements:

```
$ uname -r
4.19.27-gentoo-r1sub
$ # 4.9+ recommended by bpftrace
$ zgrep CONFIG_UPROBES /proc/config.gz
CONFIG_UPROBES=y
$ # Also need
$ # CONFIG_BPF=y
$ # CONFIG_BPF_SYSCALL=y
$ # CONFIG_BPF_JIT=y
$ # CONFIG_HAVE_EBPF_JIT=y
$ # CONFIG_BPF_EVENTS=y
```

Of course, [you also need to install the `bpftrace` tool itself][4].

### `bpftrace` D “Hello World”

Here’s a quick test you can do to make sure you’ve got everything working. First, let’s make a Hello World D binary:

```
$ pwd
/tmp/
$ cat hello.d
import std.stdio;

void main()
{
    writeln("Hello World");
}
$ dmd hello.d
$ ./hello
Hello World
$
```

Now let’s `bpftrace` it. `bpftrace` uses a high-level language that’s obviously inspired by AWK. I’ll explain enough to understand the post, but you can also check out the [`bpftrace` reference guide][5] and [one-liner tutorial][6]. The minimum you need to know is that a bpftrace program is a list of `event:name /filter predicate/ { program(); code(); }` blocks that define code snippets to be run on events.

This time I’m only using Linux uprobes, which trigger on functions in user-space programs. The syntax is `uprobe:/path/to/binary:functionName`. One gotcha is that D “[mangles][7]” (encodes) function names before inserting them into the binary. If we want to trigger on the D code’s `main()` function, we need to use the mangled name: `_Dmain`. (By the way, `nm program | grep ' _D.*functionName'` is one quick trick for finding mangled names.)
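
For example, here’s what finding the mangled name of `main()` in the Hello World binary looks like (an illustrative session; the address column will differ from system to system):

```
$ nm /tmp/hello | grep ' _Dmain'
00000000000a7c6c T _Dmain
```
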
Run this `bpftrace` invocation in a terminal as root user:

```
# bpftrace -e 'uprobe:/tmp/hello:_Dmain { printf("D Hello World run with process ID %d\n", pid); }'
Attaching 1 probe...
```

While this is running, it’ll print a message every time the D Hello World program is executed by any user in any terminal. Press `Ctrl+C` to quit.

All `bpftrace` code can be run directly from the command line like in the example above. But to make things easier to read from now on, I’ll make neatly formatted scripts.

### Tracing some real code

I’m using [D-Scanner][8], the D code analyser, as an example of a simple but non-trivial D workload. One nice thing about `bpftrace` and uprobes is that no modification of the program is needed. I’m just using a normal build of the `dscanner` tool, and using the [D runtime source code][9] as a codebase to analyse.

Before using `bpftrace`, let’s try using [the profiling that’s built into the D GC implementation itself][10]:

```
$ dscanner --DRT-gcopt=profile:1 --etags
...
Number of collections: 85
Total GC prep time: 0 milliseconds
Total mark time: 17 milliseconds
Total sweep time: 6 milliseconds
Total page recovery time: 3 milliseconds
Max Pause Time: 1 milliseconds
Grand total GC time: 28 milliseconds
GC summary: 35 MB, 85 GC 28 ms, Pauses 17 ms < 1 ms
```

(If you can make a custom build, you can also use [the D runtime GC API to get stats][11].)

There’s one more gotcha when using `bpftrace` on `dscanner` to trace GC functions: the binary file we specify for the uprobe needs to be the binary file that actually contains the GC functions. That could be the D binary itself, or it could be a shared D runtime library. Try running `ldd /path/to/d_program` to list any linked shared libraries, and if the output contains `druntime`, use that full path when specifying uprobes. My `dscanner` binary doesn’t link to a shared D runtime, so I just use the full path to `dscanner`. (Running `which dscanner` gives `/usr/local/bin/dscanner` for me.)

Anyway, all the GC functions live in a `gc` module, so their mangled names start with `_D2gc`. Here’s a `bpftrace` invocation that tallies GC function calls. For convenience, it also includes a uretprobe to automatically exit when `main()` returns. The output is sorted to make it a little easier to read.

```
# cat dumpgcfuncs.bt
uprobe:/usr/local/bin/dscanner:_D2gc*
{
    @[probe] = count();
}

uretprobe:/usr/local/bin/dscanner:_Dmain
{
    exit();
}
# bpftrace dumpgcfuncs.bt | sort

@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC10freeNoSyncMFNbNiPvZv]: 31
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC10initializeFKCQCd11gcinterface2GCZv]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC11queryNoSyncMFNbPvZS4core6memory8BlkInfo_]: 44041
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC11removeRangeMFNbNiPvZv]: 2
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC12extendNoSyncMFNbPvmmxC8TypeInfoZm]: 251946
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC14collectNoStackMFNbZv]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC18fullCollectNoStackMFNbZv]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC4freeMFNbNiPvZv]: 31
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC5queryMFNbPvZS4core6memory8BlkInfo_]: 47704
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6__ctorMFZCQBzQBzQBxQCiQBn]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6callocMFNbmkxC8TypeInfoZPv]: 80
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6extendMFNbPvmmxC8TypeInfoZm]: 251946
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6mallocMFNbmkxC8TypeInfoZPv]: 12423
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_]: 948995
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC7getAttrMFNbPvZ2goFNbPSQClQClQCjQCu3GcxQBbZk]: 5615
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC7getAttrMFNbPvZk]: 5615
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC8addRangeMFNbNiPvmxC8TypeInfoZv]: 2
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs10freeNoSyncMFNbNiPvZvS_DQDsQDsQDqQEb8freeTimelS_DQErQErQEpQFa8numFreeslTQCdZQEbMFNbNiKQCrZv]: 31
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs11queryNoSyncMFNbPvZS4core6memory8BlkInfo_S_DQEmQEmQEkQEv9otherTimelS_DQFmQFmQFkQFv9numOtherslTQDaZQExMFNbKQDmZQDn]: 44041
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs12extendNoSyncMFNbPvmmxC8TypeInfoZmS_DQEfQEfQEdQEo10extendTimelS_DQFhQFhQFfQFq10numExtendslTQCwTmTmTxQDaZQFdMFNbKQDrKmKmKxQDvZm]: 251946
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs12mallocNoSyncMFNbmkKmxC8TypeInfoZPvS_DQEgQEgQEeQEp10mallocTimelS_DQFiQFiQFgQFr10numMallocslTmTkTmTxQCzZQFcMFNbKmKkKmKxQDsZQDl]: 961498
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs18fullCollectNoStackMFNbZ2goFNbPSQEaQEaQDyQEj3GcxZmTQvZQDfMFNbKQBgZm]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs7getAttrMFNbPvZ2goFNbPSQDqQDqQDoQDz3GcxQBbZkS_DQEoQEoQEmQEx9otherTimelS_DQFoQFoQFmQFx9numOtherslTQCyTQDlZQFdMFNbKQDoKQEbZk]: 5615
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15LargeObjectPool10allocPagesMFNbmZm]: 5597
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15LargeObjectPool13updateOffsetsMFNbmZv]: 10745
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15LargeObjectPool7getInfoMFNbPvZS4core6memory8BlkInfo_]: 3844
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15SmallObjectPool7getInfoMFNbPvZS4core6memory8BlkInfo_]: 40197
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15SmallObjectPool9allocPageMFNbhZPSQChQChQCfQCq4List]: 15022
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx10smallAllocMFNbhKmkZ8tryAllocMFNbZb]: 955967
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx10smallAllocMFNbhKmkZPv]: 955912
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11ToScanStack4growMFNbZv]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm]: 85
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11removeRangeMFNbNiPvZv]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx23updateCollectThresholdsMFNbZv]: 84
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv]: 253
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx5sweepMFNbZm]: 84
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx7markAllMFNbbZ14__foreachbody3MFNbKSQCm11gcinterface5RangeZi]: 85
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx7markAllMFNbbZv]: 85
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx7newPoolMFNbmbZPSQBtQBtQBrQCc4Pool]: 6
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx7recoverMFNbZm]: 84
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8addRangeMFNbNiPvQcxC8TypeInfoZv]: 2
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8bigAllocMFNbmKmkxC8TypeInfoZ15tryAllocNewPoolMFNbZb]: 5
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8bigAllocMFNbmKmkxC8TypeInfoZ8tryAllocMFNbZb]: 5616
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8bigAllocMFNbmKmkxC8TypeInfoZPv]: 5586
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8isMarkedMFNbNlPvZi]: 635
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx9allocPageMFNbhZPSQBuQBuQBsQCd4List]: 15024
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw4Pool10initializeMFNbmbZv]: 6
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw4Pool12freePageBitsMFNbmKxG4mZv]: 16439
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl5protoQo7ProtoGC4termMFZv]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl5protoQo7ProtoGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl5protoQo7ProtoGC8addRangeMFNbNiPvmxC8TypeInfoZv]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc4impl6manualQp8ManualGC10initializeFKCQBp11gcinterface2GCZv]: 1
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc9pooltable__T9PoolTableTSQBc4impl12conservativeQBy4PoolZQBr6insertMFNbNiPQBxZb]: 6
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc9pooltable__T9PoolTableTSQBc4impl12conservativeQBy4PoolZQBr8findPoolMFNaNbNiPvZPQCe]: 302268
|
||||
@[uprobe:/usr/local/bin/dscanner:_D2gc9pooltable__T9PoolTableTSQBc4impl12conservativeQBy4PoolZQBr8minimizeMFNaNbNjZAPQCd]: 30
|
||||
Attaching 231 probes...
|
||||
```

All these functions are in [`src/gc/`][12], and most of the interesting ones here are in [`src/gc/impl/conservative/`][13]. There are 85 calls to `_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm`, which [`ddemangle`][14] translates to `nothrow ulong gc.impl.conservative.gc.Gcx.fullcollect(bool)`. That matches up with the report from `--DRT-gcopt=profile:1`.
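
(A handy trick, assuming the `ddemangle` tool from the [D tools repo][14] is installed: it reads symbols from standard input, so you can decode any name a trace prints.)

```
$ echo _D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm | ddemangle
nothrow ulong gc.impl.conservative.gc.Gcx.fullcollect(bool)
```
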
The heart of the `bpftrace` program is `@[probe] = count();`. `@` prefixes a global variable, in this case a variable with an empty name (allowed by `bpftrace`). We’re using the variable as a map (like an associative array in D), and indexing it with `probe`, a built-in variable containing the name of the uprobe that was triggered. The tally is kept using the magic `count()` function.

### Garbage collection timings

How about something more interesting, like generating a profile of collection timings? This time, to get more data, I won’t make `bpftrace` exit as soon as the `dscanner` exits. I’ll keep it running and run `dscanner` 100 times before quitting `bpftrace` with `Ctrl+C`:

```
# cat gcprofile.bt
uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm
{
    @t = nsecs;
}

uretprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm / @t /
{
    @gc_times = hist(nsecs - @t);
}
# bpftrace gcprofile.bt
Attaching 2 probes...
^C

@gc_times:
[64K, 128K)          138 |@                                                   |
[128K, 256K)        1367 |@@@@@@@@@@                                          |
[256K, 512K)        6687 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512K, 1M)             7 |                                                    |
[1M, 2M)             301 |@@                                                  |
```

Et voila! A log-scale histogram of the `nsecs` timestamp difference between entering and exiting `fullcollect()`. The times are in nanoseconds, so we see that most collections are taking less than half a millisecond, but we have tail cases that take 1-2ms.
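
One caveat worth flagging (my note, not part of the original script): `@t` is a single global, so if two instances of the traced program ever collected at the same time, their timestamps could clobber each other. A sketch of a more defensive version keys the saved timestamp by thread ID and also tracks a running average:

```
uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm
{
    // Save the entry timestamp per thread instead of in one global
    @t[tid] = nsecs;
}

uretprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm / @t[tid] /
{
    @gc_times = hist(nsecs - @t[tid]);
    @gc_avg_ns = avg(nsecs - @t[tid]);
    // Clean up the entry so the filter predicate stays accurate
    delete(@t[tid]);
}
```
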
### Function arguments

`bpftrace` provides `arg0`, `arg1`, `arg2`, etc. built-in variables for accessing the arguments to a traced function. There are a couple of complications with using them with D code, however.

The first is that (at the binary level) `dmd` makes `extern(D)` functions (i.e., normal D functions) take arguments in the reverse order of `extern(C)` functions (that `bpftrace` is expecting). Suppose you have a simple three-argument function. If it’s using the C calling convention, `bpftrace` will recognise the first argument as `arg0`. If it’s using the D calling convention, however, it’ll be picked up as `arg2`.

```
extern(C) void cfunc(int arg0, int arg1, int arg2)
{
    // ...
}

// (extern(D) is the default)
extern(D) void dfunc(int arg2, int arg1, int arg0)
{
    // ...
}
```

If you look at [the D ABI spec][15], you’ll notice that (just like in C++) there can be a couple of hidden arguments if the function is more complex. If `dfunc` above returned a large struct, there can be an extra hidden argument for implementing [copy elision][16], which means the first argument would actually be `arg3`, and `arg0` would be the hidden argument. If `dfunc` were also a member function, it would have a hidden `this` argument, which would bump up the first argument to `arg4`.

To get the hang of this, you might need to experiment with tracing function calls with known arguments.
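
Here’s the kind of known-arguments test case I mean (a hypothetical example, not from the D runtime): a member function returning a large struct should exhibit both hidden slots described above, so probing its mangled name and printing `arg0` through `arg4` shows which slot ends up holding which parameter.

```
import std.stdio;

// Large enough that it can't be returned in registers,
// so a hidden copy-elision argument gets involved.
struct Big
{
    long[4] payload;
}

struct Tracer
{
    // extern(D) member function: when probed with bpftrace,
    // expect a hidden copy-elision slot, a hidden `this`,
    // and the declared arguments in reversed order.
    Big probeMe(int a, int b)
    {
        Big result;
        result.payload[0] = a + b;
        return result;
    }
}

void main()
{
    Tracer t;
    foreach (i; 0 .. 3)
        writeln(t.probeMe(i, 100).payload[0]);
}
```
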
### Allocation sizes

Let’s get a histogram of the memory allocation request sizes. Looking at the list of GC functions traced earlier, and comparing it with the GC source code, it looks like we need to trace these functions and grab the `size` argument:

```
class ConservativeGC : GC
{
    // ...
    void *malloc(size_t size, uint bits, const TypeInfo ti) nothrow;
    void *calloc(size_t size, uint bits, const TypeInfo ti) nothrow;
    BlkInfo qalloc(size_t size, uint bits, const TypeInfo ti) nothrow;
    // ...
}
```

As class member functions, they have a hidden `this` argument as well. The last one, `qalloc()`, returns a struct, so it also has a hidden argument for copy elision. So `size` is `arg3` for the first two functions, and `arg4` for `qalloc()`. Time to run a trace:

```
# cat allocsizeprofile.bt
uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6mallocMFNbmkxC8TypeInfoZPv,
uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6callocMFNbmkxC8TypeInfoZPv
{
    @ = hist(arg3);
}

uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_
{
    @ = hist(arg4);
}

uretprobe:/usr/local/bin/dscanner:_Dmain
{
    exit();
}
# bpftrace allocsizeprofile.bt
Attaching 4 probes...
@:
[2, 4)              2489 |                                                    |
[4, 8)              9324 |@                                                   |
[8, 16)            46527 |@@@@@                                               |
[16, 32)          206324 |@@@@@@@@@@@@@@@@@@@@@@@                             |
[32, 64)          448020 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[64, 128)         147053 |@@@@@@@@@@@@@@@@@                                   |
[128, 256)         88072 |@@@@@@@@@@                                          |
[256, 512)          2519 |                                                    |
[512, 1K)           1830 |                                                    |
[1K, 2K)            3749 |                                                    |
[2K, 4K)            1668 |                                                    |
[4K, 8K)             256 |                                                    |
[8K, 16K)           2533 |                                                    |
[16K, 32K)           312 |                                                    |
[32K, 64K)           239 |                                                    |
[64K, 128K)          209 |                                                    |
[128K, 256K)         164 |                                                    |
[256K, 512K)         124 |                                                    |
[512K, 1M)            48 |                                                    |
[1M, 2M)              30 |                                                    |
[2M, 4M)               7 |                                                    |
[4M, 8M)               1 |                                                    |
[8M, 16M)              2 |                                                    |
```

So, we have a lot of small allocations, with a very long tail of larger allocations. Remember, size is on a log scale, so that long tail represents a very skewed distribution.
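
One small extension (my addition, with the same argument-index caveats as above): if you also want the total number of bytes requested, not just their distribution, `bpftrace`’s `sum()` does the job with one extra map:

```
uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6mallocMFNbmkxC8TypeInfoZPv,
uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6callocMFNbmkxC8TypeInfoZPv
{
    @sizes = hist(arg3);
    // Running total of bytes requested through malloc()/calloc()
    @total_bytes = sum(arg3);
}
```
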
### Small allocation hotspots

Now for something more complex. Suppose we’re profiling our code and looking for low-hanging fruit for reducing the number of memory allocations. Code that makes a lot of small allocations tends to be a good candidate for this kind of refactoring. `bpftrace` lets us grab stack traces, which can be used to see what part of the main program caused an allocation.

As of writing, there’s one little complication because of a limitation of `bpftrace`’s stack trace handling: it can only show meaningful function symbol names (as opposed to raw memory addresses) if `bpftrace` quits while the target program is still running. There’s [an open bug report for improving this behaviour][17], but in the meantime I just made sure `dscanner` took a long time, and that I shut down `bpftrace` first.

Here’s how to grab the top three stack traces that lead to small (<16B) memory allocations with `qalloc()`:

```
# cat smallallocs.bt
uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_
{
    if (arg4 < 16)
    {
        @[ustack] = count();
    }
}

END
{
    print(@, 3);
    clear(@);
}
# bpftrace smallallocs.bt
Attaching 2 probes...
^C@[
|
||||
_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_+0
|
||||
_D2rt8lifetime12__arrayAllocFNaNbmxC8TypeInfoxQlZS4core6memory8BlkInfo_+236
|
||||
_d_arraysetlengthT+248
|
||||
_D8dscanner8analysis25label_var_same_name_check17LabelVarNameCheck9pushScopeMFZv+29
|
||||
_D8dscanner8analysis25label_var_same_name_check17LabelVarNameCheck9__mixin175visitMFxC6dparse3ast6ModuleZv+21
|
||||
_D8dscanner8analysis3run7analyzeFAyaxC6dparse3ast6ModulexSQCeQBy6config20StaticAnalysisConfigKS7dsymbol11modulecache11ModuleCacheAxS3std12experimental5lexer__T14TokenStructureThVQFpa305_0a20202020737472696e6720636f6d6d656e743b0a20202020737472696e6720747261696c696e67436f6d6d656e743b0a0a20202020696e74206f70436d702873697a655f7420692920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202069662028696e646578203c2069292072657475726e202d313b0a202020202020202069662028696e646578203e2069292072657475726e20313b0a202020202020202072657475726e20303b0a202020207d0a0a20202020696e74206f70436d702872656620636f6e737420747970656f66287468697329206f746865722920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202072657475726e206f70436d70286f746865722e696e646578293b0a202020207d0aZQYobZCQZv9container6rbtree__T12RedBlackTreeTSQBGiQBGd4base7MessageVQBFza62_20612e6c696e65203c20622e6c696e65207c7c2028612e6c696e65203d3d20622e6c696e6520262620612e636f6c756d6e203c20622e636f6c756d6e2920Vbi1ZQGt+11343
|
||||
_D8dscanner8analysis3run7analyzeFAAyaxSQBlQBf6config20StaticAnalysisConfigQBoKS6dparse5lexer11StringCacheKS7dsymbol11modulecache11ModuleCachebZb+337
|
||||
_Dmain+3618
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv+40
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv+139
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
|
||||
_d_run_main+463
|
||||
main+16
|
||||
__libc_start_main+235
|
||||
0x41fd89415541f689
|
||||
]: 450
|
||||
@[
|
||||
_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_+0
|
||||
_D2rt8lifetime12__arrayAllocFNaNbmxC8TypeInfoxQlZS4core6memory8BlkInfo_+236
|
||||
_d_arrayappendcTX+1944
|
||||
_D8dscanner8analysis10unmodified16UnmodifiedFinder9pushScopeMFZv+61
|
||||
_D8dscanner8analysis10unmodified16UnmodifiedFinder5visitMFxC6dparse3ast6ModuleZv+21
|
||||
_D8dscanner8analysis3run7analyzeFAyaxC6dparse3ast6ModulexSQCeQBy6config20StaticAnalysisConfigKS7dsymbol11modulecache11ModuleCacheAxS3std12experimental5lexer__T14TokenStructureThVQFpa305_0a20202020737472696e6720636f6d6d656e743b0a20202020737472696e6720747261696c696e67436f6d6d656e743b0a0a20202020696e74206f70436d702873697a655f7420692920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202069662028696e646578203c2069292072657475726e202d313b0a202020202020202069662028696e646578203e2069292072657475726e20313b0a202020202020202072657475726e20303b0a202020207d0a0a20202020696e74206f70436d702872656620636f6e737420747970656f66287468697329206f746865722920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202072657475726e206f70436d70286f746865722e696e646578293b0a202020207d0aZQYobZCQZv9container6rbtree__T12RedBlackTreeTSQBGiQBGd4base7MessageVQBFza62_20612e6c696e65203c20622e6c696e65207c7c2028612e6c696e65203d3d20622e6c696e6520262620612e636f6c756d6e203c20622e636f6c756d6e2920Vbi1ZQGt+11343
|
||||
_D8dscanner8analysis3run7analyzeFAAyaxSQBlQBf6config20StaticAnalysisConfigQBoKS6dparse5lexer11StringCacheKS7dsymbol11modulecache11ModuleCachebZb+337
|
||||
_Dmain+3618
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv+40
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv+139
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
|
||||
_d_run_main+463
|
||||
main+16
|
||||
__libc_start_main+235
|
||||
0x41fd89415541f689
|
||||
]: 450
|
||||
@[
|
||||
_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_+0
|
||||
_D2rt8lifetime12__arrayAllocFNaNbmxC8TypeInfoxQlZS4core6memory8BlkInfo_+236
|
||||
_d_arrayappendcTX+1944
|
||||
_D8dscanner8analysis3run7analyzeFAyaxC6dparse3ast6ModulexSQCeQBy6config20StaticAnalysisConfigKS7dsymbol11modulecache11ModuleCacheAxS3std12experimental5lexer__T14TokenStructureThVQFpa305_0a20202020737472696e6720636f6d6d656e743b0a20202020737472696e6720747261696c696e67436f6d6d656e743b0a0a20202020696e74206f70436d702873697a655f7420692920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202069662028696e646578203c2069292072657475726e202d313b0a202020202020202069662028696e646578203e2069292072657475726e20313b0a202020202020202072657475726e20303b0a202020207d0a0a20202020696e74206f70436d702872656620636f6e737420747970656f66287468697329206f746865722920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202072657475726e206f70436d70286f746865722e696e646578293b0a202020207d0aZQYobZCQZv9container6rbtree__T12RedBlackTreeTSQBGiQBGd4base7MessageVQBFza62_20612e6c696e65203c20622e6c696e65207c7c2028612e6c696e65203d3d20622e6c696e6520262620612e636f6c756d6e203c20622e636f6c756d6e2920Vbi1ZQGt+680
|
||||
_D8dscanner8analysis3run7analyzeFAAyaxSQBlQBf6config20StaticAnalysisConfigQBoKS6dparse5lexer11StringCacheKS7dsymbol11modulecache11ModuleCachebZb+337
|
||||
_Dmain+3618
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv+40
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv+139
|
||||
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
|
||||
_d_run_main+463
|
||||
main+16
|
||||
__libc_start_main+235
|
||||
0x41fd89415541f689
|
||||
]: 450
|
||||
```

It looks like a lot of the small allocations are due to a red-black tree in `ModuleCache`.

### What’s next?

I think these examples already show that `bpftrace` is a pretty powerful tool. There’s a lot more that can be done, and I highly recommend reading [Brendan Gregg’s eBPF tutorials][18].

I used uprobes to trace arbitrary functions in the D runtime. The pro of this is the freedom to do anything, but the cons are that I had to refer to the D runtime source code and manually deal with the D ABI. There’s also no guarantee that a script I write today will work with future versions of the runtime. Linux also supports making well-defined tracepoints in user code using a feature called [USDT][19]. That should let D code export stable tracepoints that can be used without worrying about the D ABI. I might do more experiments in future.
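
For reference, attaching to a USDT tracepoint from `bpftrace` looks like this (the provider and probe names here are hypothetical, purely to show the shape of the syntax):

```
# bpftrace -e 'usdt:/usr/local/bin/dscanner:dlang:gc_collect_begin { @collections = count(); }'
```
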
--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2019/04/26/bpftrace_d_gc.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://github.com/iovisor/bpftrace
[2]: https://en.wikipedia.org/wiki/Java_virtual_machine
[3]: http://dtrace.org/blogs/about/
[4]: https://github.com/iovisor/bpftrace/blob/master/INSTALL.md
[5]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md
[6]: https://github.com/iovisor/bpftrace/blob/master/docs/tutorial_one_liners.md
[7]: https://dlang.org/spec/abi.html#name_mangling
[8]: https://github.com/dlang-community/D-Scanner
[9]: https://github.com/dlang/druntime/
[10]: https://dlang.org/spec/garbage.html#gc_config
[11]: https://dlang.org/phobos/core_memory.html#.GC.stats
[12]: https://github.com/dlang/druntime/tree/v2.081.1/src/gc
[13]: https://github.com/dlang/druntime/tree/v2.081.1/src/gc/impl/conservative
[14]: https://github.com/dlang/tools
[15]: https://dlang.org/spec/abi.html#parameters
[16]: https://en.wikipedia.org/wiki/Copy_elision
[17]: https://github.com/iovisor/bpftrace/issues/246
[18]: http://www.brendangregg.com/blog/2019-01-01/learn-ebpf-tracing.html
[19]: https://lwn.net/Articles/753601/

@ -1,73 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Measuring the edge: Finding success with edge deployments)
[#]: via: (https://www.networkworld.com/article/3391570/measuring-the-edge-finding-success-with-edge-deployments.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

Measuring the edge: Finding success with edge deployments
======
To make the most of edge computing investments, it’s important to first understand objectives and expectations.
![iStock][1]

Edge computing deployments are well underway as companies seek to better process the wealth of data being generated, for example, by Internet of Things (IoT) devices.

So, what are the results? Plus, how can you ensure success with your own edge projects?

**Measurements of success**

The [use cases for edge computing deployments][2] vary widely, as do the business drivers and, ultimately, the benefits.

Whether they’re seeking improved network or application performance, real-time data analytics, a better customer experience, or other efficiencies, enterprises are accomplishing their goals. Based on two surveys — one by [_Automation World_][3] and another by [Futurum Research][4] — respondents have reported:

  * Decreased network downtime
  * Increased productivity
  * Increased profitability/reduced costs
  * Improved business processes

Basically, success metrics can be bucketed into two categories: direct value propositions and cost reductions. In the former, companies are seeking results that measure revenue generation — such as improved digital experiences with customers and users. In the latter, metrics that prove the value of digitizing processes — like speed, quality, and efficacy — will demonstrate success with edge deployments.

**Goalposts for success with edge**

Edge computing deployments are underway. But before diving in, understand what’s driving these projects.

According to Futurum Research, 29% of respondents are currently investing in edge infrastructure, and 62% expect to adopt within the year. For these companies, the driving force has been the business, which expects operational efficiencies from these investments. Beyond that, there’s an expectation down the road to better align with IoT strategy.

That being the case, it’s worth considering your business case before diving into edge. Ask: Are you seeking innovation and revenue generation amid digital transformation efforts? Or is your company looking for a low-risk, “test the waters” type of investment in edge?

Next up, what type of edge project makes sense for your environment? Edge data centers fall into three categories: local devices for a specific purpose (e.g., an appliance for security systems or a gateway for cloud-to-premises storage); small local data centers (typically one to 10 racks for storage and processing); and regional data centers (10+ racks for large office spaces).

Then, think about these best practices before talking with vendors:

  * Management: Especially for unmanned edge data centers, remote management is critical. You’ll need predictive alerts and a service contract that enables IT to be just as resilient as a regular data center.
  * Security: Much of today’s conversation has been around data security. That starts with physical protection. Too many data breaches — including theft and employee error — are caused by physical access to IT assets.
  * Standardization: There is no need to reinvent the wheel when it comes to edge data center deployments. Using standard components makes it easier for the internal IT team to deploy, manage, and maintain.

Finally, consider the ecosystem. The end-to-end nature of edge requires not just technology integration, but also that all third parties work well together. A good ecosystem connects customers, partners, and vendors.

Get further information to kickstart your edge computing environment at [APC.com][5].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391570/measuring-the-edge-finding-success-with-edge-deployments.html#tk.rss_all

作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-912928582-100795093-large.jpg
[2]: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html
[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[4]: https://futurumresearch.com/edge-computing-from-edge-to-enterprise/
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

@ -1,92 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Yet another killer cloud quarter puts pressure on data centers)
[#]: via: (https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

Yet another killer cloud quarter puts pressure on data centers
======
Continued strong growth from Amazon Web Services, Microsoft Azure, and Google Cloud Platform signals even more enterprises are moving to the cloud.
![Getty Images][1]

You’d almost think I’d get tired of [writing this story over and over and over][2]… but the ongoing growth of cloud computing is too big a trend to ignore.

Critically, the impressive growth numbers of the three leading cloud infrastructure providers—Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform—don’t occur in a vacuum. It’s not just about new workloads being run in the cloud; it’s also about more and more enterprises moving existing workloads to the cloud from on-premises data centers.

**[ Also read: [Is the cloud already killing the enterprise data center?][3] ]**

To put these trends in perspective, let’s take a look at the results for all three vendors.

### AWS keeps on trucking

AWS remains by far the dominant player in the cloud infrastructure market, with a massive [$7.7 billion in quarterly sales][4] (an annual run rate of a whopping $30.8 billion). Even more remarkable, somehow AWS continues to grow revenue by almost 42% year over year. Oh, and that kind of growth is not just unique _this_ quarter; the unit has topped 40% revenue growth _every_ quarter since the beginning of 2017. (To be fair, the first quarter of 2018 saw an amazing 49% revenue growth.)

And unlike many fast-growing tech companies, that incredible expansion isn’t being fueled by equally impressive losses. AWS earned a healthy $2.2 billion operating profit in the quarter, up 59% from the same period last year. One reason? [The company told analysts][5] it made big data center investments in 2016 and 2017, so it hasn’t had to do so more recently (it expects to boost spending on data centers later this year). The company [reportedly][6] described AWS revenue growth as “lumpy,” but it seems to me that the numbers merely vary between huge and even bigger.

### Microsoft Azure grows even faster than AWS

Sure, 41% growth is good, but [Microsoft’s quarterly Azure revenue][7] almost doubled that, jumping 73% year over year (fairly consistent with the previous—also stellar—quarter), helping the company exceed estimates for both sales and revenue and sparking a brief shining moment of a $1 trillion valuation for the company. Microsoft doesn’t break out Azure’s sales and revenue, but [the commercial cloud business, which includes Azure as well as other cloud businesses, grew 41% in the quarter to $9.6 billion][8].

It’s impossible to tell exactly how big Azure is, but it appears to be growing faster than AWS, though off a much smaller base. While some analysts reportedly say Azure is growing faster than AWS was at a similar stage in its development, that may not bear much significance because the addressable cloud market is now far larger than it used to be.

According to [the New York Times][9], like AWS, Microsoft is also now reaping the benefits of heavy investments in new data centers around the world. And the Times credits Microsoft with “particular success” in [hybrid cloud installations][10], helping ease concerns among some slow-to-change enterprises about full-scale cloud adoption.

**[ Also read: [Why hybrid cloud will turn out to be a transition strategy][11] ]**

### Can Google Cloud Platform keep up?

Even as the [overall quarterly numbers for Alphabet][12]—Google’s parent company—didn’t meet analysts’ revenue expectations (which sent the stock tumbling), Google Cloud Platform seems to have continued its strong growth. Alphabet doesn’t break out its cloud unit, but sales in Alphabet’s “Other Revenue” category—which includes cloud computing along with hardware—jumped 25% compared to the same period last year, hitting $5.4 billion.

More telling, perhaps, Alphabet Chief Financial Officer Ruth Porat [reportedly][13] told analysts that "Google Cloud Platform remains one of the fastest growing businesses in Alphabet." [Porat also mentioned][14] that hiring in the cloud unit was so aggressive that it drove a 20% jump in Alphabet’s operating expenses!

### Companies keep going cloud

But the raw numbers tell only part of the story. All that growth means existing customers are spending more, but also that ever-increasing numbers of enterprises are abandoning the hassle and expense of running their own data centers in favor of buying what they need from the cloud.

**[ Also read: [Large enterprises abandon data centers for the cloud][15] ]**

The New York Times quotes Amy Hood, Microsoft’s chief financial officer, explaining that, “You don’t really get revenue growth unless you have a usage growth, so this is customers deploying and using Azure.” And the Times notes that Microsoft has signed big deals with companies such as [Walgreens Boots Alliance][16] that combined Azure with other Microsoft cloud-based services.

This growth is true in existing markets, and also includes new markets. For example, AWS just opened new regions in [Indonesia][17] and [Hong Kong][18].

**[ Now read: [After virtualization and cloud, what's left on premises?][19] ]**

Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html#tk.rss_all

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cloud_comput_connect_blue-100787048-large.jpg
[2]: https://www.networkworld.com/article/3292935/cloud-computing-just-had-another-kick-ass-quarter.html
[3]: https://www.networkworld.com/article/3268384/is-the-cloud-already-killing-the-enterprise-data-center.html
[4]: https://www.latimes.com/business/la-fi-amazon-earnings-cloud-computing-aws-20190425-story.html
[5]: https://www.businessinsider.com/amazon-q1-2019-earnings-aws-advertising-retail-prime-2019-4
[6]: https://www.computerweekly.com/news/252462310/Amazon-cautions-against-reading-too-much-into-slowdown-in-AWS-revenue-growth-rate
[7]: https://www.microsoft.com/en-us/Investor/earnings/FY-2019-Q3/press-release-webcast
[8]: https://www.cnbc.com/2019/04/24/microsoft-q3-2019-earnings.html
[9]: https://www.nytimes.com/2019/04/24/technology/microsoft-earnings.html
[10]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[11]: https://www.networkworld.com/article/3238466/why-hybrid-cloud-will-turn-out-to-be-a-transition-strategy.html
[12]: https://abc.xyz/investor/static/pdf/2019Q1_alphabet_earnings_release.pdf?cache=8ac2b86
[13]: https://www.forbes.com/sites/jilliandonfro/2019/04/29/google-alphabet-q1-earnings-2019/#52f5c8c733be
[14]: https://www.youtube.com/watch?v=31_KHdse_0Y
[15]: https://www.networkworld.com/article/3240587/large-enterprises-abandon-data-centers-for-the-cloud.html
[16]: https://www.walgreensbootsalliance.com
[17]: https://www.businesswire.com/news/home/20190403005931/en/AWS-Open-New-Region-Indonesia
[18]: https://www.apnews.com/Business%20Wire/57eaf4cb603e46e6b05b634d9751699b
[19]: https://www.networkworld.com/article/3232626/virtualization/extreme-virtualization-impact-on-enterprises.html
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world

@ -1,65 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Revolutionary data compression technique could slash compute costs)
[#]: via: (https://www.networkworld.com/article/3392716/revolutionary-data-compression-technique-could-slash-compute-costs.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Revolutionary data compression technique could slash compute costs
======
A new form of data compression, called Zippads, will create faster computer programs that could drastically lower the costs of computing.
![Kevin Stanchfield \(CC BY 2.0\)][1]

There’s a major problem with today’s money-saving memory compression used for storing more data in less space. The issue is that computers store and run memory in predetermined blocks, yet many modern programs function and play out in variable chunks.

The way it’s currently done is actually highly inefficient. That’s because the compressed programs, which use objects rather than evenly configured slabs of data, don’t match the space used to store and run them, explain scientists working on a revolutionary new compression system called Zippads.

The answer, they say—and something that, if it works, would drastically reduce those inefficiencies, speed things up, and, importantly, reduce compute costs—is to compress the varied objects and not the cache lines, as is the case now. Cache lines are fixed-size blocks of memory that are transferred to memory cache.

**[ Read also: [How to deal with backup when you switch to hyperconverged infrastructure][2] ]**

“Objects, not cache lines, are the natural unit of compression,” write Po-An Tsai and Daniel Sanchez in their MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) [paper][3] (pdf).

They say object-based programs — of the kind now used every day, such as Python — should be compressed based on their programmed object size, not on some fixed value created by traditional or even state-of-the-art caching methods.

The alternative, too, isn’t to recklessly abandon object-oriented programming just because it’s inefficient at using compression. One must adapt compression to that now-common object-using code.

The scientists claim their new system can increase the compression ratio 1.63 times and improve performance by 17%. It’s the “first compressed memory hierarchy designed for object-based applications,” they say.

### The benefits of compression

Compression is a favored technique for making computers more efficient. The main advantage over simply adding more memory is that costs are lowered significantly—you don’t need to add increasing physical main memory hardware because you’re cramming more data into existing memory.

However, to date, hardware memory compression has been best suited to more old-school large blocks of data, not the “random, fine-grained memory accesses” of modern programs, the team explains. It’s not great at accessing small pieces of data, such as words, for example.

### How the Zippads compression system works

In Zippads, as the new system is called, stored object hierarchical levels (called “pads”) are located on-chip and are directly accessed. The different levels (pads) have changing speed grades, with newly referenced objects being placed in the fastest pad. As a pad fills up, it begins the process of evicting older, not-so-active objects and ultimately recycles the unused code that is taking up desirable fast space and isn’t being used. Cleverly, at the fast level, the code parts aren’t even compressed, but as they prove their non-usefulness they get kicked down to compressed, slow-to-access, lower-importance pads—and are brought back up as necessary.

Zippads would “see computers that can run much faster or can run many more apps at the same speeds,” an [MIT News][4] article says. “Each application consumes less memory, it runs faster, so a device can support more applications within its allotted memory.” Bandwidth is freed up, in other words.

“All computer systems would benefit from this,” Sanchez, a professor of computer science and electrical engineering, says in the article. “Programs become faster because they stop being bottlenecked by memory bandwidth.”

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3392716/revolutionary-data-compression-technique-could-slash-compute-costs.html#tk.rss_all

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/memory-100787327-large.jpg
[2]: https://www.networkworld.com/article/3389396/how-to-deal-with-backup-when-you-switch-to-hyperconverged-infrastructure.html
[3]: http://people.csail.mit.edu/poantsai/papers/2019.zippads.asplos.pdf
[4]: http://news.mit.edu/2019/hardware-data-compression-0416
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@ -1,74 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (SD-WAN is Critical for IoT)
|
||||
[#]: via: (https://www.networkworld.com/article/3393445/sd-wan-is-critical-for-iot.html#tk.rss_all)
|
||||
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
|
||||
|
||||
SD-WAN is Critical for IoT
|
||||
======
|
||||
|
||||
![istock][1]
|
||||
|
||||
The Internet of Things (IoT) is everywhere and its use is growing fast. IoT is used by local governments to build smart cities. It’s used to build smart businesses. And, consumers are benefitting as it’s built into smart homes and smart cars. Industry analyst first estimates that over 20 billion IoT devices will be connected by 2020. That’s a 2.5x increase from the more than 8 billion connected devices in 2017*.
|
||||
|
||||
Manufacturing companies have the highest IoT spend to date of industries while the health care market is experiencing the highest IoT growth. By 2020, 50 percent of IoT spending will be driven by manufacturing, transportation and logistics, and utilities.
|
||||
|
||||
IoT growth is being fueled by the promise of analytical data insights that will ultimately yield greater efficiencies and enhanced customer satisfaction. The top use cases driving IoT growth are self-optimizing production, predictive maintenance and automated inventory management.
|
||||
|
||||
From a high-level view, the IoT architecture includes sensors that collect and transmit data (i.e. temperature, speed, humidity, video feed, pressure, IR, proximity, etc.) from “things” like cars, trucks, machines, etc. that are connected over the internet. Data collected is then analyzed, translating raw data into actionable information. Businesses can then act on this information. And at more advanced levels, machine learning and AI algorithms learn and adapt to this information and automatically respond at a system level.
|
||||
|
||||
IDC estimates that by 2025, over 75 billion IoT devices* will be connected. By that time, nearly a quarter of the world’s projected 163 zettabytes* (163 trillion gigabytes) of data will have been created in real-time, and the vast majority of that data will have been created by IoT devices. This massive amount of data will drive an exponential increase in traffic on the network infrastructure requiring massive scalability. Also, this increasing amount of data will require tremendous processing power to mine it and transform it into actionable intelligence. In parallel, security risks will continue to increase as there will be many more potential entry points onto the network. Lastly, management of the overall infrastructure will require better orchestration of policies as well as the means to streamline on-going operations.
|
||||
|
||||
### **How does SD-WAN enable IoT business initiatives?**
|
||||
|
||||
There are three key elements that an [SD-WAN][2] platform must include:
|
||||
|
||||
1. **Visibility** : Real-time visibility into the network is key. It takes the guesswork out of rapid problem resolution, enabling organizations to run more efficiently by accelerating troubleshooting and applying preventive measures. Furthermore, a CIO is able to pull metrics and see bandwidth consumed by any IoT application.
|
||||
2. **Security** : IoT traffic must be isolated from other application traffic. IT must prevent – or at least reduce – the possible attack surface that may be exposed to IoT device traffic. Also, the network must continue delivering other application traffic in the event of a melt down on a WAN link caused by a DDoS attack.
|
||||
3. **Agility** : With the increased number of connected devices, applications and users, a comprehensive, intelligent and centralized orchestration approach that continuously adapts to deliver the best experience to the business and users is critical to success.
|
||||
|
||||
|
||||
|
||||
### Key Silver Peak EdgeConnect SD-WAN capabilities for IoT
|
||||
|
||||
1\. Silver Peak has an [embedded real-time visibility engine][3] allowing IT to gain complete observability into the performance attributes of the network and applications in real-time. The [EdgeConnect][4] SD-WAN appliances deployed in branch offices send information to the centralized [Unity Orchestrator™][5]. Orchestrator collects the data and presents it in a comprehensive management dashboard via customizable widgets. These widgets provide a wealth of operational data including a health heatmap for every SD-WAN appliance deployed, flow counts, active tunnels, logical topologies, top talkers, alarms, bandwidth consumed by each application and location, latency and jitter and much more. Furthermore, the platform maintains weeks’ worth of data with context allowing IT to playback and see what has transpired at a specific time and location, similar to a DVR.
|
||||
|
||||
![Click to read Solution Brief][6]
|
||||
|
||||
2\. The second set of key capabilities center around security and end-to-end zone-based segmentation. An IoT traffic zone may be created on the LAN or branch side. IoT traffic is then mapped all the way across the WAN to the data center or cloud where the data will be processed. Zone-based segmentation is accomplished in a simplified and automated way within the Orchestrator GUI. In cases where further traffic inspection is required, IT can simply service chain to another security service. There are several key benefits realized by this approach. IT can easily and quickly apply segmentation policies; segmentation mitigates the attack surface; and IT can save on additional security investments.
|
||||
|
||||
![Click to read Solution Brief][7]

3\. EdgeConnect employs machine learning at the global level, where internet sensors and third-party sensors feed into the cloud portal software. The software tracks the geolocation and reputation of all IP addresses, distributing signals down to the Unity Orchestrator running in each individual customer's enterprise, which in turn speaks to the edge devices sitting in the branch offices. There, distributed learning is done by looking at the first packet of a flow and inferring from it what the application is. If, say, a hundred times in a row, packets from a particular IP address turn out to belong to an IoT application, we can infer that the IP belongs to that IoT application. In parallel, a mix of traditional techniques is used to validate the identification of the application. Combined with other multi-level intelligence, this enables simple and automated policy orchestration across a large number of devices and applications.

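
The first-packet inference described above can be pictured as a simple counting scheme. The sketch below is a loose illustration under assumed thresholds and data structures; it is not Silver Peak's implementation.

```python
# Illustrative sketch of first-packet application inference: if flows
# to a destination IP have repeatedly been classified (by slower,
# traditional inspection) as the same application, later flows to that
# IP can be classified from their very first packet. The threshold and
# data structures are assumptions for the example, not the vendor's code.
from collections import Counter, defaultdict

CONFIDENCE_THRESHOLD = 100  # observations before we trust the mapping

observations = defaultdict(Counter)  # dst_ip -> Counter of app labels

def record_dpi_result(dst_ip: str, app: str) -> None:
    """Feed back a classification produced by traditional inspection."""
    observations[dst_ip][app] += 1

def classify_first_packet(dst_ip: str):
    """Infer the application from the destination IP alone, if the
    history is consistent enough; otherwise return None so the caller
    falls back to deep inspection."""
    if not observations[dst_ip]:
        return None
    app, count = observations[dst_ip].most_common(1)[0]
    return app if count >= CONFIDENCE_THRESHOLD else None

for _ in range(100):
    record_dpi_result("198.51.100.7", "iot-telemetry")

print(classify_first_packet("198.51.100.7"))  # iot-telemetry
```
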
![Click to read Solution Brief][8]

SD-WAN plays a foundational role as businesses continue to embrace IoT, but choosing the right SD-WAN platform is even more critical to ensuring businesses are ultimately able to fully optimize their operations.

* Source: [IDC][9]

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3393445/sd-wan-is-critical-for-iot.html#tk.rss_all

作者:[Rami Rammaha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/istock-1019172426-100795551-large.jpg
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[3]: https://www.silver-peak.com/resource-center/simplify-sd-wan-operations-greater-visibility
[4]: https://www.silver-peak.com/products/unity-edge-connect
[5]: https://www.silver-peak.com/products/unity-orchestrator
[6]: https://images.idgesg.net/images/article/2019/05/1_simplify-100795554-large.jpg
[7]: https://images.idgesg.net/images/article/2019/05/2_centralize-100795555-large.jpg
[8]: https://images.idgesg.net/images/article/2019/05/3_increase-100795558-large.jpg
[9]: https://www.information-age.com/data-forecast-grow-10-fold-2025-123465538/
@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Some IT pros say they have too much data)
[#]: via: (https://www.networkworld.com/article/3393205/some-it-pros-say-they-have-too-much-data.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Some IT pros say they have too much data
======

IT professionals have too many data sources to count, and they spend a huge amount of time wrestling that data into usable condition, a survey from Ivanti finds.

![Getty Images][1]

A new survey has found that a growing number of IT professionals have too many data sources to even count, and they are spending more and more time just wrestling that data into usable condition.

Ivanti, an IT asset management firm, [surveyed 400 IT professionals on their data situation][2] and found IT faces numerous challenges when it comes to silos, data, and implementation. The key takeaway is that data overload is starting to overwhelm IT managers, and data lakes are turning into data oceans.

**[ Read also: [Understanding mass data fragmentation][3] | Get daily insights: [Sign up for Network World newsletters][4] ]**

Among the findings from Ivanti's survey:

  * Fifteen percent of IT professionals say they have too many data sources to count, and 37% said they have about 11 to 25 different sources for data.
  * More than half of IT professionals (51%) report they have to work with their data for days, weeks or more before it's actionable.
  * Only 10% of respondents said the data they receive is actionable within minutes.
  * One in three respondents said they have the resources to act on their data, but more than half (52%) said they only sometimes have the resources.

“It's clear from the results of this survey that IT professionals are in need of a more unified approach when working across organizational departments and resulting silos,” said Duane Newman, vice president of product management at Ivanti, in a statement.

### The problem with siloed data

The survey found siloed data represents a number of problems and challenges. Three key priorities suffer the most: automation (46%), user productivity and troubleshooting (42%), and customer experience (41%). The survey also found onboarding/offboarding suffers the least (20%) from silos, so apparently HR and IT are getting things right.

In terms of what they want from real-time insight, about 70% of IT professionals said their security status was the top priority over other issues. Respondents were least interested in real-time insights around warranty data.

### Data lake method a recipe for disaster

I've been immersed in this subject for other publications for some time now. Too many companies are hoovering up data for the sake of collecting it, with little clue as to what they will do with it later. One thing you have to say for data warehouses: schema-on-write at least forces you to think about what you are collecting and how you might use it, because you have to store it away in a usable form.

The new data lake method is schema-on-read, meaning you filter and clean the data only when you read it into an application, and that's a recipe for disaster. If you are looking at data collected a month or a year ago, do you even know what it all is? Now you have to apply a schema to data you may not even remember collecting.

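
To make the contrast concrete, here is a minimal sketch of the two approaches; the field names and records are invented for the example.

```python
# Minimal sketch of the difference. Schema-on-write validates records
# against a schema before storing them; schema-on-read dumps raw input
# and forces every reader to interpret it later. Field names are
# invented for the example.
import json

SCHEMA = {"device_id": str, "temperature_c": float}

def write_with_schema(store: list, record: dict) -> None:
    """Schema-on-write: reject records that don't match the schema."""
    for field, ftype in SCHEMA.items():
        if not isinstance(record.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    store.append(record)

def read_schema_on_read(raw_lines: list):
    """Schema-on-read: every reader must guess, clean and coerce.
    A year later, nobody may remember what the fields meant."""
    for line in raw_lines:
        rec = json.loads(line)
        yield {
            "device_id": str(rec.get("device_id", "unknown")),
            "temperature_c": float(rec.get("temperature_c", "nan")),
        }

warehouse: list = []
write_with_schema(warehouse, {"device_id": "a1", "temperature_c": 21.5})

lake = ['{"device_id": "a1", "temperature_c": "21.5"}', '{"temp": 70}']
print(list(read_schema_on_read(lake)))  # second record yields NaN junk
```
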
Too many people think more data is good when it isn't. You just drown in it. When you reach the point of having too many data sources to count, you've gone too far and are not going to get insight; you're going to get overwhelmed. Collect data you know you can use. Otherwise you are wasting petabytes of disk space.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3393205/some-it-pros-say-they-have-too-much-data.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/03/database_data-center_futuristic-technology-100752012-large.jpg
[2]: https://www.ivanti.com/blog/survey-it-professionals-data-sources
[3]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
[4]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,83 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (When it comes to uptime, not all cloud providers are created equal)
[#]: via: (https://www.networkworld.com/article/3394341/when-it-comes-to-uptime-not-all-cloud-providers-are-created-equal.html#tk.rss_all)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

When it comes to uptime, not all cloud providers are created equal
======

Cloud uptime is critical today, but vendor-provided data can be confusing. Here's an analysis of how AWS, Google Cloud and Microsoft Azure compare.

![Getty Images][1]

The cloud is not just important; it's mission-critical for many companies. More and more IT and business leaders I talk to look at public cloud as a core component of their digital transformation strategies — using it as part of their hybrid cloud or public cloud implementation.

That raises the bar on cloud reliability, as a cloud outage means important services are not available to the business. If it's a business-critical service, the company may not be able to operate while that key service is offline.

Because of the growing importance of the cloud, it's critical that buyers have visibility into the reliability numbers of the cloud providers. The challenge is that the cloud providers don't disclose disruptions in a consistent manner. In fact, some are confusing to the point where it's difficult to glean any kind of meaningful conclusion.

**[ RELATED: [What IT pros need to know about Azure Stack][2] and [Which cloud performs better, AWS, Azure or Google?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

### Reported cloud outage times don't always reflect actual downtime

Microsoft Azure and Google Cloud Platform (GCP) both typically provide information on date and time, but only high-level data on the services affected and sparse information on regional impact. The problem is that this makes it difficult to get a sense of overall reliability. For instance, if Azure reports a one-hour outage that impacts five services in three regions, the website might show just a single hour. In actuality, that's 15 hours of total downtime.

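
The arithmetic is simple but worth making explicit, since it drives the rest of the analysis:

```python
# Worked version of the example above: a single one-hour incident that
# affects five services in three regions is really 15 service-hours of
# downtime, even if the status page shows "one hour".

def service_hours(duration_hours: float, services: int, regions: int) -> float:
    """Downtime normalized per service, per region."""
    return duration_hours * services * regions

print(service_hours(1, services=5, regions=3))  # 15.0
```
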
Between Azure, GCP and Amazon Web Services (AWS), [Azure is the most obscure][5], as it provides the least amount of detail. [GCP does a better job of providing detail][6] at the service level but tends to be obscure with regional information. Sometimes it's very clear which services are unavailable, and other times it's not.

[AWS has the most granular reporting][7], as it shows every service in every region. If an incident occurs that impacts three services, all three of those services would light up red. If those were unavailable for one hour, AWS would record three hours of downtime.

Another inconsistency between the cloud providers is the amount of historical downtime data that is available. At one time, all three of the cloud vendors provided a one-year view into outages. GCP and AWS still do this, but Azure moved to only a [90-day view][5] sometime over the past year.

### Azure has significantly higher downtime than GCP and AWS

The next obvious question is who has the most downtime? To answer that, I worked with a third-party firm that has continually collected downtime information directly from the vendor websites. I have personally reviewed the information and can validate its accuracy. Based on the vendors' own reported numbers, from the beginning of 2018 through May 3, 2019, AWS leads the pack with only 338 hours of downtime, followed closely by GCP at 361. Microsoft Azure has a whopping total of 1,934 hours of self-reported downtime.

![][8]

A few points on these numbers. First, this is an aggregation of the self-reported data from the vendors' websites, which isn't the “true” number, as regional information or service granularity is sometimes obscured. If a service is unavailable for an hour, is reported for an hour on the website, but actually spanned five regions, five hours should have been counted. For this calculation, though, we used only one hour because that is what was self-reported.

Because of this, the numbers are most favorable to Microsoft, because it provides the least amount of regional information, and least favorable to AWS, because it provides the most granularity. Also, I believe AWS has the most services in the most regions, so it has more opportunities for an outage.

We had considered normalizing the data, but that would require a significant amount of work to deconstruct the downtime on a per-service, per-region basis. I may choose to do that in the future, but for now, the vendor-reported view is a good indicator of relative performance.

Another important point is that only infrastructure-as-a-service (IaaS) offerings were used to calculate downtime. If Google Street View or Bing Maps went down, most businesses would not care, so it would have been unfair to roll those numbers in.

### SLAs do not correlate to reliability

Given the importance of cloud services today, I would like to see every cloud provider post a 12-month running total of downtime somewhere on its website so customers can do an apples-to-apples comparison. This obviously isn't the only factor in determining which cloud provider to use, but it is one of the more critical ones.

Also, buyers should be aware that there is a big difference between service-level agreements (SLAs) and downtime. A cloud operator can promise anything it wants, even a 100% SLA, but that just means it has to reimburse the business when a service isn't available. Most IT leaders I have talked to say the few bucks they get back when a service is out is a mere fraction of what the outage actually cost them.

### Measure twice and cut once to minimize business disruption

If you're reading this and you're researching cloud services, it's important not to just make the easy decision of buying for convenience. Many companies look at Azure because Microsoft gives away Azure credits as part of the Enterprise Agreement (EA). I've interviewed several companies that took the path of least resistance, but they wound up disappointed with availability and then switched to AWS or GCP later, which can have a disruptive effect.

I'm certainly not saying don't buy Microsoft Azure, but it is important to do your homework to understand the historical performance of the services you're considering in the regions you need them. The information on the vendor websites may not tell the full picture, so it's important to do the necessary due diligence to ensure you understand what you're buying before you buy it.

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3394341/when-it-comes-to-uptime-not-all-cloud-providers-are-created-equal.html#tk.rss_all

作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cloud_comput_connect_blue-100787048-large.jpg
[2]: https://www.networkworld.com/article/3208029/azure-stack-microsoft-s-private-cloud-platform-and-what-it-pros-need-to-know-about-it
[3]: https://www.networkworld.com/article/3319776/the-network-matters-for-public-cloud-performance.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://azure.microsoft.com/en-us/status/history/
[6]: https://status.cloud.google.com/incident/appengine/19008
[7]: https://status.aws.amazon.com/
[8]: https://images.idgesg.net/images/article/2019/05/public-cloud-downtime-100795948-large.jpg
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -1,53 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top auto makers rely on cloud providers for IoT)
[#]: via: (https://www.networkworld.com/article/3395137/top-auto-makers-rely-on-cloud-providers-for-iot.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Top auto makers rely on cloud providers for IoT
======

For the companies looking to implement the biggest and most complex [IoT][1] setups in the world, the idea of pairing up with [AWS][2], [Google Cloud][3] or [Azure][4] seems to be one whose time has come. Within the last two months, BMW and Volkswagen have both announced large-scale deals with Microsoft and Amazon, respectively, to help operate their extensive networks of operational technology.

According to Alfonso Velosa, vice president and analyst at Gartner, part of the impetus behind those two deals is that the automotive sector fits in very well with the architecture of the public cloud. Public clouds are great at collecting and processing data from a diverse array of sources, whether they're in-vehicle sensors, dealerships, mechanics, production lines or anything else.

**[ RELATED: [What hybrid cloud means in practice][5]. | Get regularly scheduled insights by [signing up for Network World newsletters][6]. ]**

“What they're trying to do is create a broader ecosystem. They think they can leverage the capabilities from these folks,” Velosa said.

### Cloud providers as IoT partners

The idea is automated analytics for service and reliability data, manufacturing and a host of other operational functions. And while the full realization of that type of service is still very much a work in progress, it has clear-cut advantages for big companies – a skilled partner handling tricky implementation work, built-in capability for sophisticated analytics and security, and, of course, the ability to scale up in a big way.

Hence, the structure of the biggest public clouds has upside for many large-scale IoT deployments, not just the ones taking place in the auto industry. The cloud giants have vast infrastructures, with multiple points of presence all over the world.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3395137/top-auto-makers-rely-on-cloud-providers-for-iot.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[2]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
[3]: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html
[4]: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html
[5]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
[6]: https://www.networkworld.com/newsletters/signup.html
@ -1,64 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mobility and SD-WAN, Part 1: SD-WAN with 4G LTE is a Reality)
[#]: via: (https://www.networkworld.com/article/3394866/mobility-and-sd-wan-part-1-sd-wan-with-4g-lte-is-a-reality.html)
[#]: author: (Francisca Segovia )

Mobility and SD-WAN, Part 1: SD-WAN with 4G LTE is a Reality
======

![istock][1]

Without a doubt, 5G — the fifth generation of mobile wireless technology — is the hottest topic in wireless circles today. You can't throw a stone without hitting 5G news. While telecommunications providers are in a heated competition to roll out 5G, it's important to reflect on current 4G LTE (Long Term Evolution) business solutions as a preview of what we have learned and what's possible.

This is part one of a two-part blog series that will explore the [SD-WAN][2] journey through the evolution of these wireless technologies.

### **Mobile SD-WAN is a reality**

4G LTE commercialization continues to expand. According to [the GSM (Groupe Spécial Mobile) Association][3], 710 operators have rolled out 4G LTE in 217 countries, reaching 83 percent of the world's population. The evolution of 4G is transforming the mobile industry and setting the stage for the advent of 5G.

Mobile connectivity is increasingly integrated with SD-WAN, alongside MPLS and broadband WAN services. 4G LTE represents a very attractive transport alternative, as a backup or even an active member of the WAN transport mix, to connect users to critical business applications. In some cases, 4G LTE might be the only choice in locations where fixed lines aren't available or reachable. Furthermore, an SD-WAN can optimize 4G LTE connectivity and bring new levels of performance and availability to mobile-based business use cases by selecting the best path available across several 4G LTE connections.

### **Increasing application performance and availability with 4G LTE**

Silver Peak has partnered with [BEC Technologies][4] to create a joint solution that enables customers to incorporate one or more low-cost 4G LTE services into any [Unity EdgeConnect™][5] SD-WAN edge platform deployment. All the capabilities of the EdgeConnect platform are supported across LTE links, including packet-based link bonding, dynamic path control and path conditioning, along with the optional [Unity Boost™ WAN Optimization][6] performance pack. This ensures always-consistent, always-available application performance, even in the event of an outage or degraded service.

EdgeConnect also incorporates sophisticated NAT traversal technology that eliminates the need to provision the LTE service with extra-cost static IP addresses. The Silver Peak [Unity Orchestrator™][7] management software enables the prioritization of LTE bandwidth usage based on branch and application requirements – active-active or backup-only. This solution is ideal for retail point-of-sale and other deployment use cases where always-available WAN connectivity is critical for the business.

### **Automated SD-WAN enables innovative services**

An example of an innovative mobile SD-WAN service is [swyMed's DOT Telemedicine Backpack][8], powered by the EdgeConnect [Ultra Small][9] hardware platform. This integrated telemedicine solution enables first responders to connect to doctors and communicate patient vital statistics and real-time video anywhere, any time, greatly improving and expediting care for emergency patients. Using a lifesaving backpack provisioned with two LTE services from different carriers, EdgeConnect continuously monitors the underlying 4G LTE services for packet loss, latency and jitter. In the case of a transport failure or brownout, EdgeConnect automatically initiates a sub-second failover so that voice, video and data connections continue without interruption over the remaining active 4G service. By bonding the two LTE links together with the EdgeConnect SD-WAN, swyMed can achieve an aggregate signal quality in excess of 90 percent, bringing mobile telemedicine to areas where it would have been impossible in the past due to poor signal strength.

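
The continuous monitoring and failover behavior can be illustrated with a toy path-selection loop. The scoring weights below are invented for the example and are not Silver Peak's actual algorithm:

```python
# Toy sketch of quality-driven path selection across two LTE links.
# The scoring weights and example values are invented for illustration,
# not Silver Peak's actual algorithm.

def link_score(loss_pct: float, latency_ms: float, jitter_ms: float) -> float:
    """Lower is better: a crude composite of loss, latency and jitter."""
    return loss_pct * 10 + latency_ms * 0.1 + jitter_ms * 0.5

def pick_path(links: dict) -> str:
    """Choose the healthiest link; called continuously, so a brownout
    on the active link triggers a quick failover to the other carrier."""
    return min(links, key=lambda name: link_score(**links[name]))

links = {
    "lte-carrier-a": {"loss_pct": 0.2, "latency_ms": 60, "jitter_ms": 4},
    "lte-carrier-b": {"loss_pct": 3.0, "latency_ms": 55, "jitter_ms": 9},
}
print(pick_path(links))  # lte-carrier-a

links["lte-carrier-a"]["loss_pct"] = 20.0  # carrier A browns out
print(pick_path(links))  # lte-carrier-b
```
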
To learn more about SD-WAN and the unique advantages that SD-WAN provides to enterprises across all industries, visit the [SD-WAN Explained][2] page on our website.

### **Prepare for the 5G future**

In summary, the adoption of 4G LTE is a reality. Service providers are taking advantage of the distinct benefits of SD-WAN to offer managed SD-WAN services that leverage 4G LTE.

As the race for 5G gains momentum, service providers are sure to look for ways to drive new revenue streams to capitalize on their initial investments. Stay tuned for part two of this series, where I will discuss how SD-WAN is one of the technologies that can help service providers transition from 4G to 5G and enable the monetization of a new wave of managed 5G services.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3394866/mobility-and-sd-wan-part-1-sd-wan-with-4g-lte-is-a-reality.html

作者:[Francisca Segovia][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/istock-952414660-100796279-large.jpg
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[3]: https://www.gsma.com/futurenetworks/resources/all-ip-statistics/
[4]: https://www.silver-peak.com/resource-center/edgeconnect-4glte-solution-bec-technologies
[5]: https://www.silver-peak.com/products/unity-edge-connect
[6]: https://www.silver-peak.com/products/unity-boost
[7]: https://www.silver-peak.com/products/unity-orchestrator
[8]: https://www.silver-peak.com/resource-center/mobile-telemedicine-helps-save-lives-streaming-real-time-clinical-data-and-patient
[9]: https://www.silver-peak.com/resource-center/edgeconnect-us-ec-us-specification-sheet
@ -1,71 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Extreme addresses networked-IoT security)
[#]: via: (https://www.networkworld.com/article/3395539/extreme-addresses-networked-iot-security.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Extreme addresses networked-IoT security
======

The ExtremeAI Security app features machine learning that can understand the typical behavior of IoT devices and alert when it finds anomalies.

![Getty Images][1]

[Extreme Networks][2] has taken the wraps off a new security application it says will use machine learning and artificial intelligence to help customers effectively monitor, detect and automatically remediate security issues with networked IoT devices.

The application – ExtremeAI Security – features machine-learning technology that can understand the typical behavior of IoT devices and automatically trigger alerts when endpoints act in unusual or unexpected ways, Extreme said.

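
The underlying technique is easy to sketch: learn a per-device baseline of normal behavior, then alert on large deviations. Extreme has not published ExtremeAI Security's models, so the statistics and threshold below are assumptions for illustration only:

```python
# Illustrative sketch of per-device behavioral baselining: flag
# readings far outside a device's learned normal range. The threshold
# and statistics are assumptions, not ExtremeAI Security's models.
from statistics import mean, stdev

class DeviceBaseline:
    def __init__(self, training_samples):
        self.mu = mean(training_samples)
        self.sigma = stdev(training_samples)

    def is_anomalous(self, value: float, threshold: float = 3.0) -> bool:
        """Alert when a reading is more than `threshold` standard
        deviations from the device's learned normal behavior."""
        if self.sigma == 0:
            return value != self.mu
        return abs(value - self.mu) / self.sigma > threshold

# An infusion pump normally sends a few KB/min of telemetry.
pump = DeviceBaseline([4.1, 3.9, 4.3, 4.0, 4.2, 3.8])
print(pump.is_anomalous(4.4))    # False: within normal variation
print(pump.is_anomalous(250.0))  # True: worth an automatic alert
```
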
**More about edge networking**

  * [How edge networking and IoT will reshape data centers][3]
  * [Edge computing best practices][4]
  * [How edge computing can help secure the IoT][5]

Extreme said that the ExtremeAI Security application can tie into all leading threat-intelligence feeds and integrates closely with its existing [Extreme Workflow Composer][6] to enable automatic threat mitigation and remediation.

The application integrates with the company's ExtremeAnalytics application, which lets customers view threats by severity, category, high-risk endpoints and geography. An automated ticketing feature integrates with a variety of popular IT tools such as Slack, Jira and ServiceNow, and the application interoperates with many popular security tools, including existing network taps, the vendor stated.

There has been an explosion of new endpoints, ranging from million-dollar smart MRI machines to five-dollar sensors, which creates a complex and difficult job for network and security administrators, said Abby Strong, vice president of product marketing for Extreme. “We need smarter, secure and more self-healing networks, especially where IT cybersecurity resources are stretched to the limit.”

Extreme is trying to address an issue that is important to enterprise-networking customers: how to get actionable, usable insights as close to real time as possible, said Rohit Mehra, vice president of network infrastructure at IDC. “Extreme is melding automation, analytics and security that can look at network traffic patterns and allow the system to take action when needed.”

The ExtremeAI application, which will be available in October, is but one layer of IoT security Extreme offers. Already on the market, its [Defender for IoT][7] package, which includes a Defender application and adapter, lets customers monitor, set policies for and isolate IoT devices across an enterprise.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**

The ExtremeAI and Defender packages are now part of what the company calls Extreme Elements, a menu of its new and existing Smart OmniEdge, Automated Campus and Agile Data Center software, hardware and services that customers can order to build a manageable, secure system.

Aside from the applications, the Elements include Extreme Management Center, the company's network management software; the company's x86-based intelligent appliances, including the ExtremeCloud Appliance; and the [ExtremeSwitching X465 premium][9], a stackable multi-rate gigabit Ethernet switch.

The switch and applications are just the beginning of a very busy time for Extreme. In its [3Q earnings call][10] this month, CEO Ed Meyercord noted that Extreme is in the “early stages of refreshing 70 percent of our products” and that seven different products will become generally available this quarter – a record for Extreme, he said.

Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3395539/extreme-addresses-networked-iot-security.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/iot_security_tablet_conference_digital-100787102-large.jpg
[2]: https://www.networkworld.com/article/3289508/extreme-facing-challenges-girds-for-future-networking-battles.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[6]: https://www.extremenetworks.com/product/workflow-composer/
[7]: https://www.extremenetworks.com/product/extreme-defender-for-iot/
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[9]: https://community.extremenetworks.com/extremeswitching-exos-223284/extremexos-30-2-and-smart-omniedge-premium-x465-switches-are-now-available-7823377
[10]: https://seekingalpha.com/news/3457137-extreme-networks-minus-15-percent-quarterly-miss-light-guidance
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world
@ -1,88 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Will 5G be the first carbon-neutral network?)
[#]: via: (https://www.networkworld.com/article/3395465/will-5g-be-the-first-carbon-neutral-network.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Will 5G be the first carbon-neutral network?
======

Increased energy consumption in new wireless networks could become ecologically unsustainable. Engineers think they have solutions that apply to 5G, but nothing is certain.

![Dushesina/Getty Images][1]

If wireless networks transfer 1,000 times more data, does that mean they will use 1,000 times more energy? It probably would with the old 4G LTE wireless technologies, which don't have much of a sleep-standby mode. But with 5G, we might have a more energy-efficient option.

More customers want Earth-friendly options, and engineers are now working out how to deliver them — meaning 5G might introduce the first zero-carbon networks. It's not at all certain, though.

**[ Related: [What is 5G wireless? And how it will change networking as we know it][2] ]**

“When the 4G technology for wireless communication was developed, not many people thought about how much energy is consumed in transmitting bits of information,” says Emil Björnson, associate professor of communication systems at Linköping University, [in an article on the school's website][3].

Standby was never built into 4G, Björnson explains. One reason is overbuilding — the architects wanted to ensure connections didn't fail, so they just kept the power up. The downside to that redundancy is that almost the same amount of energy is used whether the system is transmitting data or not.

“We now know that this is not necessary,” Björnson says. 5G networks don't use much power during periods of low traffic, and that reduces power consumption.

Björnson says he knows how to make future networks — those 5G networks that may one day become the enterprise broadband replacement — super-efficient even under heavy use. Massive MIMO (multiple-input, multiple-output) antennas are the answer, he says: hundreds of connected antennas taking advantage of multipath.

I've written before about some of Björnson's Massive MIMO ideas. He thinks [Massive MIMO will remove all capacity ceilings from wireless networks][4]. However, he has now added calculations to his research that he claims prove the Massive MIMO antenna technology will also reduce power use. He and his group are actively promoting their academic theories in a paper ([pdf][5]).

**[ [Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**

### Nokia's plan to reduce wireless networks' CO2 emissions

Björnson's isn't the only 5G-aimed eco-concept out there. Nokia points out that it isn't just transmitting radios that use electricity. Cooling is actually the main electricity hog, says the telecommunications company, which is one of the world's principal manufacturers of mobile network equipment.

Nokia says the global energy cost of radio access networks (RANs) in 2016 (the last year for which numbers were available), which includes the base transceiver stations (BTSs) needed by mobile networks, was around $80 billion. That figure increases as more users come on stream, which is probable. Of a BTS's electricity use, about 90% “converts to waste heat,” [Harry Kuosa, a marketing executive, writes on Nokia's blog][7], and base station sites account for about 80% of a mobile network's entire energy use, Nokia adds on its website.

“A thousand-times more traffic that creates a thousand-times higher energy costs is unsustainable,” Nokia says in its [ebook][8] on the subject, “Turning the zero carbon vision into business opportunity,” and that's why Nokia plans liquid-cooled 5G base stations, among other things, including chip improvements. It says the liquid cooling can reduce CO2 emissions by up to 80%.

### Will those ideas work?

Not everyone agrees that power consumption can be reduced when implementing 5G, though. Gabriel Brown of Heavy Reading quotes [in a tweet][9] a China Mobile executive as saying that 5G BTSs will use three times as much power as 4G LTE ones, because the higher frequencies used in 5G mean more BTS units are needed to provide the same geographic coverage: for physics reasons, higher frequencies mean shorter range.

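
That inverse relationship between frequency and range follows from free-space path loss, a standard textbook relation rather than anything in the article. A quick calculation shows the penalty:

```python
# Standard free-space path-loss relation (textbook physics, not from
# the article): FSPL_dB = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c).
# At a fixed link budget, higher frequency means more loss, which a
# base station can only recover by shrinking its coverage radius.
from math import log10, pi

C = 299_792_458  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return 20 * log10(distance_m) + 20 * log10(freq_hz) + 20 * log10(4 * pi / C)

# Same 1 km cell at a 4G mid-band vs a 5G mmWave frequency:
print(round(fspl_db(1_000, 2.6e9), 1))  # ~100.7 dB at 2.6 GHz
print(round(fspl_db(1_000, 28e9), 1))   # ~121.4 dB at 28 GHz
```

The roughly 21 dB gap is why the same coverage at millimeter-wave frequencies needs many more base stations.
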
If, as projected, 5G develops into the new enterprise broadband for the internet of things (IoT), along with associated private networks covering everything else, then these eco- and cost-related questions are going to be salient — and they need answers quickly. 5G will soon be here, and [Gartner estimates that 60% of organizations will adopt it][10].

**More about 5G networks:**

  * [How enterprises can prep for 5G networks][11]
  * [5G vs 4G: How speed, latency and apps support differ][12]
  * [Private 5G networks are coming][13]
  * [5G and 6G wireless have security issues][14]
  * [How millimeter-wave wireless could help support 5G and IoT][15]

Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3395465/will-5g-be-the-first-carbon-neutral-network.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/01/4g-versus-5g_horizon_sunrise-100784230-large.jpg
[2]: https://www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
[3]: https://liu.se/en/news-item/okningen-av-mobildata-kraver-energieffektivare-nat
[4]: https://www.networkworld.com/article/3262991/future-wireless-networks-will-have-no-capacity-limits.html
[5]: https://arxiv.org/pdf/1812.01688.pdf
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[7]: https://www.nokia.com/blog/nokia-has-ambitious-plans-reduce-network-power-consumption/
[8]: https://pages.nokia.com/2364.Zero.Emissions.ebook.html?did=d000000001af&utm_campaign=5g_in_action_&utm_source=twitter&utm_medium=organic&utm_term=0dbf430c-1c94-47d7-8961-edc4f0ba3270
[9]: https://twitter.com/Gabeuk/status/1099709788676636672?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1099709788676636672&ref_url=https%3A%2F%2Fwww.lightreading.com%2Fmobile%2F5g%2Fpower-consumption-5g-basestations-are-hungry-hungry-hippos%2Fd%2Fd-id%2F749979
[10]: https://www.gartner.com/en/newsroom/press-releases/2018-12-18-gartner-survey-reveals-two-thirds-of-organizations-in
[11]: https://www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html
[12]: https://www.networkworld.com/article/3330603/mobile-wireless/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[13]: https://www.networkworld.com/article/3319176/mobile-wireless/private-5g-networks-are-coming.html
[14]: https://www.networkworld.com/article/3315626/network-security/5g-and-6g-wireless-technologies-have-security-issues.html
[15]: https://www.networkworld.com/article/3291323/mobile-wireless/millimeter-wave-wireless-could-help-support-5g-and-iot.html
[16]: https://www.facebook.com/NetworkWorld/
[17]: https://www.linkedin.com/company/network-world
@ -1,140 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The modern data center and the rise in open-source IP routing suites)
[#]: via: (https://www.networkworld.com/article/3396136/the-modern-data-center-and-the-rise-in-open-source-ip-routing-suites.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)

The modern data center and the rise in open-source IP routing suites
======

Open source enables passionate people to come together and fabricate work of phenomenal quality. This is in contrast to a single vendor doing everything.

![fdecomite \(CC BY 2.0\)][1]

As the cloud service providers and search engines began building out their businesses, they quickly ran into problems managing their networking equipment. Ultimately, after a few rounds of trying to get the network vendors to understand their problems, these hyperscale network operators revolted.

Primarily, what the operators were looking for was a level of control in managing their networks that the network vendors couldn't offer. The revolt blazed the trail that introduced open networking and network disaggregation to the world of networking. Let us first learn about disaggregation, followed by open networking.

### Disaggregation

The concept of network disaggregation involves breaking up the vertically integrated networking stack into individual pieces, where each piece can be used in the best way possible. The hardware can be separated from the software, along with open or closed IP routing suites. This enables network operators to choose best-of-breed hardware, software and applications.

**[ Now see [7 free network tools you must have][2]. ]**

Networking has always been built as an appliance and not as a platform. The mindset is that the network vendor builds an appliance, and as a specialized appliance, they completely control what you can and cannot do on that box. In plain words, they will not enable anything that is not theirs. As a result, they act as gatekeepers, not gate-enablers.

Network disaggregation empowers network operators with the ability to lay hands on the features they need, when they need them. This is impossible with non-disaggregated hardware.

### Disaggregation leads to using best-of-breed

In the traditional vertically integrated networking market, you're forced to live with the software because you like the hardware, or vice versa. But network disaggregation drives different people to develop the things that matter to them. It allows multiple groups of people to connect, with each one focused on doing what he or she does best. Switching silicon manufacturers can provide the best merchant silicon. Routing suites can be provided by those who are best at that. And the OS vendors can provide the glue that enables all of these to work well together.

With disaggregation, people are driven to do what they are good at. One company does the hardware, another does the software, and a third does the IP routing suite. Hence, today the networking world looks more like the server world.

### Open source

Within this rise of the modern data center, there is another element driving network disaggregation: the notion of open source. Open source is “denoting software for which the original source code is made freely available; it may be redistributed and modified.” It enables passionate people to come together and fabricate work of phenomenal quality. This is in contrast to a single vendor doing everything.

As a matter of fact, the networking world has always been very vendor-driven. The advent of open source, however, gives control of the features to like-minded people rather than the vendor. This eliminates the element of vendor lock-in, thereby enabling interesting work. Open source allows more than one company to be involved.

### Open source in the data center

The traditional enterprise and data center networks were primarily built on bridging and the Spanning Tree Protocol (STP). The modern data center, however, is driven by IP routing and Clos topologies. As a result, you need a strong IP routing suite.

That was the point where the need for an open-source routing suite surfaced: a suite that could help drive the modern data center. The primary open-source routing suites are [FRRouting (FRR)][3], BIRD, GoBGP and ExaBGP.

Open-source IP routing protocol suites are slowly but steadily gaining acceptance and are used in data centers of various sizes. Why? Because they allow a community of developers and users to work on finding solutions to common problems. Open-source IP routing protocol suites equip them to develop the specific features they need. They also help network operators create simple designs that make sense to them, as opposed to having everything controlled by the vendor. And they enable routing suites to run on compute nodes; Kubernetes, among others, uses this model of running a routing protocol on a compute node.

Today many startups are using FRR. Of all the IP routing suites, FRR is the preferred primary open-source IP routing protocol suite in the data center. Some traditional network vendors have even demonstrated the use of FRR on their networking gear.

There are lots of new features currently being developed for FRR, not just by the developers but also by the network operators.

### Use cases for open-source routing suites

When it comes to use cases, where do IP routing protocol suites sit? First and foremost, if you want to do any type of routing in the disaggregated network world, you need an IP routing suite.

Some operators are using FRR at the edge of the network as well, receiving full BGP feeds. Many solutions that use Intel's DPDK for packet forwarding use FRR as the control plane, receiving full BGP feeds. In addition, there are vendors using FRR as the core IP routing suite for a full leaf-and-spine data center architecture. You can even get a version of FRR on pfSense, a free and open-source firewall.

We need to keep in mind that reference implementations are important. Open source allows you to test at scale, which vendors don't allow you to do. With FRR, we have the ability to spin up virtual machines (VMs) or even containers, using software like Vagrant, to test your network. Some vendors do offer software versions, but they are not fully feature-compatible.

Also, with open source you do not need to wait. This empowers you with the flexibility and speed that drive the modern data center.

### Deep dive on FRRouting (FRR)

FRR is a Linux Foundation project. In a technical Linux sense, FRR is a group of daemons that work together, providing a complete routing suite that includes BGP, IS-IS, LDP, OSPF, BFD, PIM, and RIP.

Each of these daemons communicates with the common routing information base (RIB) daemon, called Zebra, in order to interface with the OS and to resolve conflicts between multiple routing protocols providing the same information. Interfacing with the OS is used to receive link up/down events, to add and delete routes, and so on.

### FRRouting (FRR) components: Zebra

Zebra is the RIB of the routing system. It knows everything about the state of the system relevant to routing and is able to pass and disseminate this information to all the interested parties.

The RIB in FRR acts just like a traditional RIB. When a route wins, it goes into the Linux kernel data plane, where the forwarding occurs. All of the routing protocols run as separate processes, and each of them has its source code in FRR.

For example, when BGP starts up, it needs to know, for instance, what kinds of virtual routing and forwarding (VRF) instances and IP interfaces are available. Zebra collects this information and passes it back to the interested daemons. It passes all the relevant information about the state of the machine.

Furthermore, you can also register for information with Zebra. For example, a daemon can be informed when a particular route changes. This can also be used for reverse path forwarding (RPF). FRR doesn't need to do a pull when changes happen on the network.

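
The pattern described here (protocol daemons offer candidate routes, the RIB picks a winner and notifies registered listeners) can be sketched in a few lines. This mirrors the concept only; it is not FRR's actual code:

```python
# Conceptual sketch of the RIB pattern described above: protocol
# daemons offer candidate routes, the RIB picks a winner (here by
# administrative distance, lower wins) and notifies any registered
# listeners of the change. The distances below follow common defaults.

ADMIN_DISTANCE = {"connected": 0, "static": 1, "ospf": 110, "bgp": 200}

class Rib:
    def __init__(self):
        self.candidates = {}  # prefix -> {protocol: next_hop}
        self.listeners = []   # callbacks, like daemons registering with Zebra

    def register(self, callback):
        self.listeners.append(callback)

    def add_route(self, prefix: str, protocol: str, next_hop: str):
        self.candidates.setdefault(prefix, {})[protocol] = next_hop
        proto = min(self.candidates[prefix], key=ADMIN_DISTANCE.get)
        winner = (proto, self.candidates[prefix][proto])
        for cb in self.listeners:  # e.g. push the winner to the kernel FIB
            cb(prefix, winner)

rib = Rib()
rib.register(lambda p, w: print(f"{p} -> {w}"))
rib.add_route("10.0.0.0/24", "bgp", "192.0.2.1")   # bgp wins (only route)
rib.add_route("10.0.0.0/24", "ospf", "192.0.2.2")  # ospf now wins
```
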
There are a myriad of ways through which you can control Linux and read its state. Sometimes you have to use options like the Netlink bus, and sometimes you may need to read state from the /proc file system. The goal of Zebra is to gather all this data for the upper-level protocols.

### FRR supports remote data planes

FRR also has the ability to manage remote data planes. What does this mean? Typically, the data forwarding plane and the routing protocols run on the same box. Another model, adopted by OpenFlow and SDN for example, is one in which the data forwarding plane runs on one box while FRR runs on a different box on its behalf, pushing the computed routing state down to it. In other words, the data plane and the control plane run on different boxes.

If you examine the traditional world, it's like having one large chassis with different line cards and the ability to install routes in those line cards. FRR operates with the same model: one control plane that can program multiple boxes, if needed. It does this via the forwarding plane manager.

### Forwarding plane manager

Zebra can either install routes directly into the data plane of the box it is running on or use a forwarding plane manager to install routes on a remote box. When it installs a route, the forwarding plane manager abstracts the data that describes the route and its next hops. It then pushes the data to a remote system, where the remote machine processes it and programs the ASIC appropriately.

After the data is abstracted, you can use whatever protocol you want to push it to the remote machine. You could even include the data in an email.

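
As a purely illustrative sketch of that abstraction, consider a neutral route structure serialized for transport. FRR's real forwarding plane manager uses a compact binary protocol; JSON is used here only for readability:

```python
# Purely illustrative: once a route and its next hops are abstracted
# into a neutral structure, any transport can carry it to the remote
# forwarding box. (FRR's real forwarding plane manager uses a compact
# binary protocol; JSON here is just for readability.)
import json
from dataclasses import dataclass, field, asdict

@dataclass
class RouteUpdate:
    prefix: str
    next_hops: list = field(default_factory=list)
    action: str = "add"

def encode(update: RouteUpdate) -> bytes:
    """Serialize the abstracted route for the remote data plane,
    which decodes it and programs its ASIC accordingly."""
    return json.dumps(asdict(update)).encode()

msg = encode(RouteUpdate("10.1.0.0/16", ["192.0.2.1", "192.0.2.9"]))
print(msg.decode())
```
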
### What is holding people back from open source?

For the last 30 years, the networking world meant that you go to a vendor to solve a problem. Now, with open-source routing suites such as FRR, there is a major shift in the mindset of how you approach troubleshooting.

That shift can cause fear of not being able to use the software properly, because with open source you are the one who has to fix it. This can seem scary and daunting at first, but it doesn't have to be. Also, to run FRR on traditional network gear, you need the vendor to enable it, and they may be reluctant because it competes with their own platforms, which can be another roadblock.

### The future of FRR

If we examine FRR from the use-case perspective of the data center, FRR is feature-complete. Anyone building an IP-based data center has everything they need in FRR. The latest 7.0 release of FRR adds YANG/NETCONF support, BGP enhancements and OpenFabric.

FRR is not just about providing features, boosting performance or being as good as or better than the traditional network vendors' software; it is also about simplifying the process for the end user.

Since the modern data center is focused on automation and ease of use, FRR has made progress that the vendors have yet to match. FRR is very automation-friendly. For example, FRR takes BGP and makes it automation-friendly without changing the protocol. It supports BGP unnumbered, which is unmatched by any other vendor suite. This is where the vendors are trying to catch up.

Also, while troubleshooting, FRR shows peers' and hosts' names, not just their IP addresses. This helps you understand what you are looking at without spending much time on it. Vendors, by contrast, show the peers' IP addresses, which can be daunting when you need to troubleshoot.

FRR provides the features you need to run an efficient network and data center, and it makes the IP routing suite easier to configure and manage. Vendors just keep adding feature upon feature, whether they are significant or not. Then you need to travel the certification paths that teach you how to twiddle 20 million knobs. How many of those networks are robust and stable?

FRR is about supporting the features that matter, not every imaginable feature. At the same time, FRR is an open-source project that brings like-minded people together, and good work that is offered isn't turned away. As a case in point, FRR has an open-source implementation of EIGRP.

The problem surfaces when you see a bunch of things and think you need them. In reality, you should try to keep the network as simple as possible. FRR is laser-focused on ease of use and on simplifying operations rather than implementing features that are mostly not needed to drive the modern data center.

For more information and to contribute, why not join the [FRR mailing list group][4]?

**This article is published as part of the IDG Contributor Network. [Want to Join?][5]**

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3396136/the-modern-data-center-and-the-rise-in-open-source-ip-routing-suites.html

作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/12/modular_humanoid_polyhedra_connections_structure_building_networking_by_fdecomite_cc_by_2-0_via_flickr_1200x800-100782334-large.jpg
[2]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
[3]: https://frrouting.org/community/7.0-launch.html
[4]: https://frrouting.org/#participate
[5]: /contributor-network/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
@ -1,119 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enterprise IoT: Companies want solutions in these 4 areas)
[#]: via: (https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

Enterprise IoT: Companies want solutions in these 4 areas
======
Based on customer pain points, PwC identified four areas companies are seeking enterprise solutions for, including energy use and sustainability.
![Jackie Niam / Getty Images][1]

Internet of things (IoT) vendors and pundits like to crow about the billions and billions of connected devices that make the IoT so ubiquitous and powerful. But how much of that installed base is really relevant to the enterprise?

To find out, I traded emails with Rob Mesirow, principal at [PwC’s Connected Solutions][2], the firm’s new one-stop shop of IoT solutions, who suggests that consumer adoption may not paint a true picture of the enterprise opportunities. If you remove the health trackers and the smart thermostats from the market, he suggested, there are very few connected devices left.

So, I wondered, what is actually happening on the enterprise side of IoT? What kinds of devices are we talking about, and in what kinds of numbers?

**[ Read also:[Forget 'smart homes,' the new goal is 'autonomous buildings'][3] ]**

“When people talk about the IoT,” Mesirow told me, “they usually focus on [consumer devices, which far outnumber business devices][4]. Yet [connected buildings currently represent only 12% of global IoT projects][5],” he noted, “and that’s without including wearables and smart home projects.” (Mesirow is talking about buildings that “use various IoT devices, including occupancy sensors that determine when people are present in a room in order to keep lighting and temperature controls at optimal levels, lowering energy costs and aiding sustainability goals. Sensors can also detect water and gas leaks and aid in predictive maintenance for HVAC systems.”)

### 4 key enterprise IoT opportunities

More specifically, based on customer pain points, PwC’s Connected Solutions is focusing on a few key opportunities, which Mesirow laid out in a [blog post][6] earlier this year. (Not surprisingly, the opportunities seem tied to [the group’s products][7].)

“A lot of these solutions came directly from our customers’ requests,” he noted. “We pre-qualify our solutions with customers before we build them.”

Let’s take a look at the top four areas, along with a quick reality check on how important they are and whether the technology is ready for prime time.

#### **1\. Energy use and sustainability**

The IoT makes it possible to manage buildings and spaces more efficiently, with savings of 25% or more. Occupancy sensors can tell whether anyone is actually in a room, adjusting lighting and temperature to save money and conserve energy.

Connected buildings can also help determine when meeting spaces are available, which can boost occupancy at large businesses and universities by 40% while cutting infrastructure and maintenance costs. Other sensors, meanwhile, can detect water and gas leaks and aid in predictive maintenance for HVAC systems.
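
To make that control loop concrete, here's a minimal sketch in D (my own illustration; the sensor type, setpoints, and thresholds are hypothetical, not PwC's or any vendor's API) of occupancy-driven lighting and temperature control:

```
import std.stdio;

// Hypothetical reading from an occupancy sensor.
struct RoomReading
{
    string room;
    bool occupied;
}

// Turn lights on and raise the HVAC setpoint only while a room is
// occupied; fall back to energy-saving levels when it is empty.
void applyPolicy(RoomReading r)
{
    double setpointC = r.occupied ? 21.5 : 17.0; // eco setback when empty
    writefln("%s: lights %s, HVAC setpoint %.1f C",
             r.room, r.occupied ? "on" : "off", setpointC);
}

void main()
{
    auto readings = [
        RoomReading("Meeting room A", true),
        RoomReading("Meeting room B", false)
    ];
    foreach (r; readings)
        applyPolicy(r);
}
```

A real deployment would pull readings from a building-management system and add scheduling and hysteresis, but the decision at the core is this simple.
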
**Reality check:** Obviously, much of this technology is not new, but there’s a real opportunity to make it work better by integrating disparate systems and adding better analytics to the data to make planning more effective.

#### **2\. Asset tracking**

“Businesses can also use the IoT to track their assets,” Mesirow told me, “which can range from trucks to hotel luggage carts to medical equipment. It can even assist with monitoring trash by alerting appropriate people when dumpsters need to be emptied.”

Asset trackers can instantly identify the location of all kinds of equipment (saving employee time and productivity), and they can reduce the number of lost, stolen, and misplaced devices and machines as well as provide complete visibility into the location of your assets.

Such trackers can also save employees from wasting time hunting down the devices and machines they need. For example, PwC noted that during an average hospital shift, more than one-third of nurses spend at least an hour looking for equipment such as blood pressure monitors and insulin pumps. Just as important, location tracking often improves asset optimization, reduces inventory needs, and improves customer experience.

**Reality check:** Asset tracking offers clear value. The real question is whether a given use case is cost effective or not, as well as how the data gathered will actually be used. Too often, companies spend a lot of money and effort tracking their assets but don’t do much with the information.

#### **3\. Security and compliance**

Connected solutions can create better working environments, Mesirow said. “In a hotel, for example, these smart devices can ensure that air and water quality is up to standards, provide automated pest traps, monitor dumpsters and recycling bins, detect trespassers, determine when someone needs assistance, or discover activity in an unauthorized area. Monitoring the water quality of hotel swimming pools can lower chemical and filtering costs,” he said.

Mesirow cited an innovative use case where, in response to workers’ complaints about harassment, hotel operators—in conjunction with the [American Hotel and Lodging Association][8]—are giving their employees portable devices that alert security staff when workers request assistance.

**Reality check:** This seems useful, but the ROI might be difficult to calculate.

#### **4\. Customer experience**

According to PwC, “Sensors, facial recognition, analytics, dashboards, and notifications can elevate and even transform the customer experience. … Using connected solutions, you can identify and reward your best customers by offering perks, reduced wait times, and/or shorter lines.”

Those kinds of personalized customer experiences can potentially boost customer loyalty and increase revenue, Mesirow said, adding that the technology can also make staff deployments more efficient and “enhance safety by identifying trespassers and criminals who are tampering with company property.”

**Reality check:** Creating a great customer experience is critical for businesses today, and this kind of personalized targeting promises to make it more efficient and effective. However, it has to be done in a way that makes customers comfortable and not creeped out. Privacy concerns are very real, especially when it comes to working with facial recognition and other kinds of surveillance technology. For example, [San Francisco recently banned city agencies from using facial recognition][9], and others may follow.

**More on IoT:**

  * [What is the IoT? How the internet of things works][10]
  * [What is edge computing and how it’s changing the network][11]
  * [Most powerful Internet of Things companies][12]
  * [10 Hot IoT startups to watch][13]
  * [The 6 ways to make money in IoT][14]
  * [What is digital twin technology? [and why it matters]][15]
  * [Blockchain, service-centric networking key to IoT success][16]
  * [Getting grounded in IoT networking and security][17]
  * [Building IoT-ready networks must become a priority][18]
  * [What is the Industrial IoT? [And why the stakes are so high]][19]

Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_by_jackie_niam_gettyimages-996958260_2400x1600-100788446-large.jpg
[2]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html#get-connected
[3]: https://www.networkworld.com/article/3309420/forget-smart-homes-the-new-goal-is-autonomous-buildings.html
[4]: https://www.statista.com/statistics/370350/internet-of-things-installed-base-by-category/)
[5]: https://iot-analytics.com/top-10-iot-segments-2018-real-iot-projects/
[6]: https://www.digitalpulse.pwc.com.au/five-unexpected-ways-internet-of-things/
[7]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html
[8]: https://www.ahla.com/
[9]: https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html
[10]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[11]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[12]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[13]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[14]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[15]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[16]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[17]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[18]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[19]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world
@ -1,78 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Experts: Enterprise IoT enters the mass-adoption phase)
[#]: via: (https://www.networkworld.com/article/3397317/experts-enterprise-iot-enters-the-mass-adoption-phase.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Experts: Enterprise IoT enters the mass-adoption phase
======
Dropping hardware prices and 5G boost business internet-of-things deployments; technical complexity encourages partnerships.
![Avgust01 / Getty Images][1]

[IoT][2] in general has taken off quickly over the past few years, but experts at the recent IoT World highlighted that the enterprise part of the market has been particularly robust of late – it’s not just an explosion of connected home gadgets anymore.

Donna Moore, chairwoman of the LoRa Alliance, an industry group that works to develop and scale low-power WAN technology for mass usage, said on a panel that she’s never seen growth this fast in the sector. “I’d say we’re now in the early mass adopters [stage],” she said.

**More on IoT:**

  * [Most powerful Internet of Things companies][3]
  * [10 Hot IoT startups to watch][4]
  * [The 6 ways to make money in IoT][5]
  * [What is digital twin technology? [and why it matters]][6]
  * [Blockchain, service-centric networking key to IoT success][7]
  * [Getting grounded in IoT networking and security][8]
  * [Building IoT-ready networks must become a priority][9]
  * [What is the Industrial IoT? [And why the stakes are so high]][10]

The technology itself has pushed adoption to these heights, said Graham Trickey, head of IoT for the GSMA, a trade organization for mobile network operators. Along with price drops for wireless connectivity modules, the array of upcoming technologies nestling under the umbrella label of [5G][11] could simplify the process of connecting devices to [edge-computing][12] hardware – and the edge to the cloud or [data center][13].

“Mobile operators are not just providers of connectivity now, they’re farther up the stack,” he said. Technologies like narrow-band IoT and support for highly demanding applications like telehealth are all set to be part of the final 5G spec.

### Partnerships needed to deal with IoT complexity

That’s not to imply that there aren’t still huge tasks facing both companies trying to implement their own IoT frameworks and the creators of the technology underpinning them. For one thing, IoT tech requires a huge array of different sets of specialized knowledge.

“That means partnerships, because you need an expert in your [vertical] area to know what you’re looking for, you need an expert in communications, and you might need a systems integrator,” said Trickey.

Phil Beecher, the president and CEO of the Wi-SUN Alliance (the acronym stands for Smart Ubiquitous Networks, and the group is heavily focused on IoT for the utility sector), concurred with that, arguing that broad ecosystems of different technologies and different partners would be needed. “There’s no one technology that’s going to solve all these problems, no matter how much some parties might push it,” he said.

One of the central problems – [IoT security][14] – is particularly dear to Beecher’s heart, given the consequences of successful hacks of the electrical grid or other utilities. More than one panelist praised the passage of the EU’s General Data Protection Regulation, saying that it offered concrete guidelines for entities developing IoT tech – a crucial consideration for some companies that may not have a lot of in-house expertise in that area.

Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3397317/experts-enterprise-iot-enters-the-mass-adoption-phase.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_mobile_connections_by_avgust01_gettyimages-1055659210_2400x1600-100788447-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[11]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[12]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html?nsdr=true
[13]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[14]: https://www.networkworld.com/article/3269736/getting-grounded-in-iot-networking-and-security.html
[15]: https://www.facebook.com/NetworkWorld/
[16]: https://www.linkedin.com/company/network-world
@ -1,97 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Traffic Jam Whopper project may be the coolest/dumbest IoT idea ever)
[#]: via: (https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

The Traffic Jam Whopper project may be the coolest/dumbest IoT idea ever
======
Burger King uses real-time IoT data to deliver burgers to drivers stuck in traffic — and it seems to be working.
![Mike Mozart \(CC BY 2.0\)][1]

People love to eat in their cars. That’s why we invented the drive-in and the drive-thru.

But despite a fast-food outlet on the corner of every major intersection, it turns out we were only scratching the surface of this idea. Burger King is taking this concept to the next logical step with its new IoT-powered Traffic Jam Whopper project.

I have to admit, when I first heard about this, I thought it was a joke, but apparently the [Traffic Jam Whopper project is totally real][2] and has already passed a month-long test in Mexico City. While the company hasn’t specified a timeline, it plans to roll out the Traffic Jam Whopper project in Los Angeles (where else?) and other traffic-plagued megacities such as São Paulo and Shanghai.

**[ Also read:[Is IoT in the enterprise about making money or saving money?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

### How Burger King's Traffic Jam Whopper project works

According to [Nation’s Restaurant News][5], this is how Burger King's Traffic Jam Whopper project works:

The project uses real-time data to target hungry drivers along congested roads and highways for food delivery by couriers on motorcycles.

The system leverages push notifications to the Burger King app and personalized messaging on digital billboards positioned along busy roads close to a Burger King restaurant.

[According to the We Believers agency][6] that put it all together, “By leveraging traffic and drivers’ real-time data [location and speed], we adjusted our billboards’ location and content, displaying information about the remaining time in traffic to order, and personalized updates about deliveries in progress.” The menu is limited to Whopper Combos to speed preparation (though the company plans to offer a wider menu as it works out the kinks).
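
As a rough illustration of that targeting logic (my own sketch reconstructed from the description above, not We Believers' or Burger King's actual code, and the thresholds and field names are made up), the core decision might look like this:

```
import std.stdio;

// Hypothetical real-time data for one driver, as reported by the app.
struct Driver
{
    double speedKmh;     // current speed
    double distanceKm;   // distance from the nearest restaurant
    double minutesInJam; // predicted remaining time stuck in traffic
}

// Target drivers who are crawling, close by, and stuck long enough
// for the kitchen to prepare an order and a courier to deliver it.
bool shouldTarget(Driver d, double prepAndRideMinutes = 15.0)
{
    return d.speedKmh < 25.0
        && d.distanceKm < 3.0
        && d.minutesInJam > prepAndRideMinutes;
}

void main()
{
    auto stuck = Driver(8.0, 1.2, 30.0);
    auto moving = Driver(70.0, 1.2, 5.0);
    writeln(shouldTarget(stuck));  // true
    writeln(shouldTarget(moving)); // false
}
```

The real system presumably also folds in billboard placement and courier availability, but speed, proximity, and time stuck in traffic are the inputs the agency describes.
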
**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][7] ]**

The company said orders in Mexico City were delivered in an average of 15 minutes. Fortunately (or unfortunately, depending on how you look at it), many traffic jams hold drivers captive for far longer than that.

Once the order is ready, the motorcyclist uses Google Maps and GPS technology embedded into the app to locate the car that placed the order. The delivery person then weaves through traffic to hand over the Whopper. (Lane-splitting is legal in California, but I have no idea if there are other potential safety or law-enforcement issues involved here. For drivers ordering burgers, at least, the Burger King app supports voice ordering. I also don’t know what happens if traffic somehow clears up before the burger arrives.)

Here’s a video of the pilot program in Mexico City:

#### **New technology => new opportunities**

Even more amazing, this is not _just_ a publicity stunt. NRN quotes Bruno Cardinali, head of marketing for Burger King Latin America and Caribbean, claiming the project boosted sales during rush hour, when app orders are normally slow:

“Thanks to The Traffic Jam Whopper campaign, we’ve increased deliveries by 63% in selected locations across the month of April, adding a significant amount of orders per restaurant per day, just during rush hours.”

If nothing else, this project shows that creative thinking really can leverage IoT technology into new businesses. In this case, it’s turning notoriously bad traffic—pretty much required for this process to work—from a problem into an opportunity to generate additional sales during slow periods.

**More on IoT:**

  * [What is the IoT? How the internet of things works][8]
  * [What is edge computing and how it’s changing the network][9]
  * [Most powerful Internet of Things companies][10]
  * [10 Hot IoT startups to watch][11]
  * [The 6 ways to make money in IoT][12]
  * [What is digital twin technology? [and why it matters]][13]
  * [Blockchain, service-centric networking key to IoT success][14]
  * [Getting grounded in IoT networking and security][15]
  * [Building IoT-ready networks must become a priority][16]
  * [What is the Industrial IoT? [And why the stakes are so high]][17]

Join the Network World communities on [Facebook][18] and [LinkedIn][19] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/burger-king-gift-card-100797164-large.jpg
[2]: https://abc7news.com/food/burger-king-to-deliver-to-drivers-stuck-in-traffic/5299073/
[3]: https://www.networkworld.com/article/3343917/the-big-picture-is-iot-in-the-enterprise-about-making-money-or-saving-money.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.nrn.com/technology/tech-tracker-burger-king-deliver-la-motorists-stuck-traffic?cid=
[6]: https://www.youtube.com/watch?v=LXNgEZV7lNg
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[8]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[9]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[10]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[11]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[12]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[13]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[14]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[15]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[16]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[17]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[18]: https://www.facebook.com/NetworkWorld/
[19]: https://www.linkedin.com/company/network-world
@ -1,55 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Benchmarks of forthcoming Epyc 2 processor leaked)
[#]: via: (https://www.networkworld.com/article/3397081/benchmarks-of-forthcoming-epyc-2-processor-leaked.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Benchmarks of forthcoming Epyc 2 processor leaked
======
Benchmarks of AMD's second-generation Epyc server processor briefly found their way online and show the chip is larger but a little slower than the Epyc 7601 on the market now.
![Gordon Mah Ung][1]

Benchmarks of engineering samples of AMD's second-generation Epyc server processor, code-named “Rome,” briefly found their way online and show a very beefy chip running a little slower than its predecessor.

Rome is based on the Zen 2 architecture, believed to be more of an incremental improvement over the prior generation than a major leap. It’s already known that Rome would feature a 64-core, 128-thread design, but that was about all the detail available.

**[ Also read:[Who's developing quantum computers][2] ]**

The details came courtesy of SiSoftware's Sandra PC analysis and benchmarking tool. It’s very popular and has been used by hobbyists and benchmarkers alike for more than 20 years. New benchmarks are uploaded to the Sandra database all the time, and what I suspect happened is that someone running a Rome sample ran the benchmark, not realizing the results would be uploaded to the Sandra database.

The benchmarks were from two different servers, a Dell PowerEdge R7515 and a Super Micro Super Server. The Dell product number is not on the market, so this would indicate a future server with Rome processors. The entry has since been deleted, but several sites, including the hobbyist site Tom’s Hardware Guide, managed to [take a screenshot][3].

According to the entry, the chip is a mid-range processor with a base clock speed of 1.4GHz, jumping up to 2.2GHz in turbo mode, with 16MB of Level 2 cache and 256MB of Level 3 cache, the latter of which is crazy. The first-generation Epyc had just 32MB of L3 cache.

That’s a little slower than the Epyc 7601 on the market now, but when you double the number of cores in the same space, something’s gotta give, and in this case it’s clock speed, to stay within the power budget. The thermal envelope was not revealed by the benchmark. Previous Epyc processors ranged from 120 watts to 180 watts.

Sandra ranked the processor at #3 for arithmetic and #5 for multimedia processing, which makes me wonder what on Earth beat the Rome chip. Interestingly, the servers were running Windows 10, not Windows Server 2019.

**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]**

Rome is expected to be officially launched at the massive Computex trade show in Taiwan on May 27 and will begin shipping in the third quarter of the year.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3397081/benchmarks-of-forthcoming-epyc-2-processor-leaked.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/11/rome_2-100779395-large.jpg
[2]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
[3]: https://www.tomshardware.co.uk/amd-epyc-rome-processor-data-center,news-60265.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Edge-based caching and blockchain-nodes speed up data transmission)
[#]: via: (https://www.networkworld.com/article/3397105/edge-based-caching-and-blockchain-nodes-speed-up-data-transmission.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Edge-based caching and blockchain-nodes speed up data transmission
======
Using a combination of edge-based data caches and blockchain-like distributed networks, Bluzelle claims it can significantly speed up the delivery of data across the globe.
![OlgaSalt / Getty][1]

The combination of a blockchain-like distributed network, along with the ability to locate data at the edge, will massively speed up future networks, such as those used by the internet of things (IoT), claims Bluzelle in announcing what it says is the first decentralized data delivery network (DDN).

Distributed DDNs will be like the content delivery networks (CDNs) that now cache content around the world to speed up the web, but in this case, it will be for data, the Singapore-based company explains. Distributed key-value (blockchain) networks and edge computing built into Bluzelle's system will provide significantly faster delivery than existing caching, the company claims in a press release announcing its product.

“The future of data delivery can only ever be de-centrally distributed,” says Pavel Bains, CEO and co-founder of Bluzelle. It’s because the world requires instant access to data that’s being created at the edge, he argues.

“But delivery is hampered by existing technology,” he says.

**[ Also read:[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3]. ]**

Bluzelle says decentralized caching is the logical next step from generalized data caching, which is used for reducing latency. “Decentralized caching expands the theory of caching,” the company writes in a [report][4] (Dropbox pdf) on its [website][5]. It says the cache must be expanded beyond being located at one unique location.

“Using a combination of distributed networks, the edge and the cloud, [it’s] thereby increasing the transactional throughput of data,” the company says.

This kind of thing is particularly important in consumer gaming now, where split-second responses from players around the world make or break a game experience, but it will likely be crucial for the IoT, higher-definition media, artificial intelligence, and virtual reality as they gain more of a role in digitization—including in critical enterprise applications.

“Currently applications are limited to data caching technologies that require complex configuration and management of 10-plus-year-old technology constrained to a few data centers,” Bains says. “These were not designed to handle the ever-increasing volumes of data.”

Bains says one of the key selling points of Bluzelle's network is that developers should be able to implement and run networks without also having to physically expand the networks manually.

“Software developers don’t want to react to where their customers come from. Our architecture is designed to always have the data right where the customer is. This provides a superior consumer experience,” he says.

Data caches are around now, but Bluzelle claims its system, written in C++ and available on Linux and Docker containers, among other platforms, is faster than others. It further says that if its system and a more traditional cache were to connect to the same MySQL database in, say, Virginia, users of its system would get the data three to 16 times faster than with a traditional “non-edge-caching” network. Writing updates to all Bluzelle nodes around the world takes 875 milliseconds (ms), it says.

The company has been concentrating its efforts on gaming, and with a test setup in Virginia, it says it was able to deliver data 33 times faster—at 22ms to Singapore—than a normal, cloud-based data cache. That traditional cache (located near the database) took 727ms in the Bluzelle-published test. In a test to Ireland, it claims 16ms, versus 223ms using a traditional cache.
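
Those multiples follow directly from the quoted latencies; here's a quick sanity check using only the numbers above:

```
import std.stdio;

void main()
{
    // Latencies from the Bluzelle-published tests (milliseconds).
    double singaporeEdge = 22, singaporeTraditional = 727;
    double irelandEdge = 16, irelandTraditional = 223;

    writefln("Singapore: %.0fx faster", singaporeTraditional / singaporeEdge); // ~33x
    writefln("Ireland:   %.0fx faster", irelandTraditional / irelandEdge);     // ~14x
}
```
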
An algorithm is partly the reason for the gains, the company explains. It “allows the nodes to make decisions and take actions without the need for masternodes,” the company says. Masternodes are the server-like parts of blockchain systems.

**More about edge networking**

  * [How edge networking and IoT will reshape data centers][3]
  * [Edge computing best practices][6]
  * [How edge computing can help secure the IoT][7]

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3397105/edge-based-caching-and-blockchain-nodes-speed-up-data-transmission.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/blockchain_crypotocurrency_bitcoin-by-olgasalt-getty-100787949-large.jpg
[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://www.dropbox.com/sh/go5bnhdproy1sk5/AAC5MDoafopFS7lXUnmiLAEFa?dl=0&preview=Bluzelle+Report+-+The+Decentralized+Internet+Is+Here.pdf
[5]: https://bluzelle.com/
[6]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[7]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@ -1,80 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Online performance benchmarks all companies should try to achieve)
[#]: via: (https://www.networkworld.com/article/3397322/online-performance-benchmarks-all-companies-should-try-to-achieve.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

Online performance benchmarks all companies should try to achieve
======
With digital performance more important than ever, companies must ensure their online performance meets customers’ needs. A new ThousandEyes report can help them determine that.
![Thinkstock][1]

There's no doubt about it: We have entered the experience economy, and digital performance is more important than ever.

Customer experience is the top brand differentiator, topping price and every other factor. And businesses that provide a poor digital experience will find that customers actively seek out a competitor. In fact, a recent ZK Research study found that in 2018, about two-thirds of millennials changed loyalties to a brand because of a bad experience. (Note: I am an employee of ZK Research.)

To help companies determine if their online performance is leading, lacking, or on par with some of the top companies, ThousandEyes this week released its [2019 Digital Experience Performance Benchmark Report][2]. This document provides a comparative analysis of web, infrastructure, and network performance from the top 20 U.S. digital retail, travel, and media websites. Although this is a small sampling of companies, those three industries are the most competitive when it comes to using their digital platforms for competitive advantage. The aggregated data from this report can be used as an industry-agnostic performance benchmark that all companies should strive to meet.

**[ Read also:[IoT providers need to take responsibility for performance][3] ]**

The methodology of the study was for ThousandEyes to use its own platform to provide an independent view of performance. It uses active monitoring and a global network of monitoring agents to measure application and network layer performance for websites, applications, and services. The company collected data from 36 major cities scattered across the U.S. Six of the locations (Ashburn, Chicago, Dallas, Los Angeles, San Jose, and Seattle) also included vantage points connected to six major broadband ISPs (AT&T, CenturyLink, Charter, Comcast, Cox, and Verizon). This acts as a good proxy for what a user would experience.

The test involved page load tests against the websites of the major companies in retail, media, and travel and looked at several factors, including DNS response time, round-trip latency, network time (one-way latency), HTTP response time, and page load. The average and median times can be seen in the table below. Those can be considered the average benchmarks that all companies should try to attain.

![][4]

### Choice of content delivery network matters by location

ThousandEyes' report also looked at how the various services that companies use impact web performance. For example, the study measured the performance of the content delivery network (CDN) providers in the 36 markets. It found that in Albuquerque, Akamai and Fastly had the most latency, whereas Edgecast had the least. It also found that in Boston, all of the CDN providers were close. Companies can use this type of data to help them select a CDN. Without it, decision makers are essentially guessing and hoping.

### CDN performance is impacted by ISP

Another useful set of data came from cross-referencing CDN performance by ISP, which led to some fascinating information. With Comcast, Akamai, Cloudfront, Google, and Incapsula all had high amounts of latency. Only Edgecast and Fastly offered average latency. On the other hand, all of the CDNs worked great with CenturyLink. This tells a buyer, “If my customer base is largely in Comcast’s footprint, I should look at Edgecast or Fastly, or my customers will be impacted.”

### DNS and latency directly impact page load times

The ThousandEyes study also confirmed some points that many people believe to be true but until now had no quantifiable evidence to support. For example, it's widely accepted that DNS response time and network latency to the CDN edge correlate with web performance; the data in the report now supports that belief. ThousandEyes did some regression analysis and fancy math and found that, in general, companies that were in the top quartile of HTTP performance had above-average DNS response time and network performance. There were a few exceptions, but in most cases, this is true.

Based on all the data, below are the benchmarks for the three infrastructure metrics gathered; these are what businesses, even ones outside the three verticals studied, should aim to achieve to support a high-quality digital experience (a small comparison sketch follows the list).

  * DNS response time: 25 ms
  * Round-trip network latency: 15 ms
  * HTTP response time: 250 ms
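
As one way to put those targets to work, here is a minimal D sketch (the measured values are hypothetical placeholders; substitute numbers from your own monitoring) that flags which metrics miss the benchmarks:

```
import std.stdio;

void main()
{
    // ThousandEyes benchmark targets (milliseconds).
    double[string] targets = [
        "DNS response time":          25.0,
        "Round-trip network latency": 15.0,
        "HTTP response time":         250.0
    ];

    // Hypothetical measurements from your own monitoring.
    double[string] measured = [
        "DNS response time":          31.0,
        "Round-trip network latency": 12.0,
        "HTTP response time":         310.0
    ];

    foreach (metric, target; targets)
        writefln("%-27s %5.0f ms (target %3.0f ms) %s",
                 metric, measured[metric], target,
                 measured[metric] <= target ? "OK" : "needs work");
}
```
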
### Operations teams need to focus on digital performance

Benchmarking certainly provides value, but the report also offers some recommendations on how operations teams can use the data to improve digital performance. Those include:

  * **Measure the site from distributed user vantage points**. There is no single point that will provide a view of digital performance everywhere. Instead, measure from a range of ISPs in different regions and take a multi-layered approach to visibility (application, network, and routing).
  * **Use internet performance information as a baseline**. Compare your organization's data to the baselines, and if you’re not meeting them in some markets, focus on improvement there.
  * **Compare performance to industry peers**. In highly competitive industries, it’s important to understand how you rank versus the competition. Don’t be satisfied with hitting the benchmarks if your key competitors exceed them.
  * **Build a strong performance stack.** The data shows that solid DNS and HTTP response times and low latency are correlated with solid page load times. Focus on optimizing those factors and consider them foundational to digital performance.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3397322/online-performance-benchmarks-all-companies-should-try-to-achieve.html

作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/07/racing_speed_runners_internet-speed-100728363-large.jpg
[2]: https://www.thousandeyes.com/research/digital-experience
[3]: https://www.networkworld.com/article/3340318/iot-providers-need-to-take-responsibility-for-performance.html
[4]: https://images.idgesg.net/images/article/2019/05/thousandeyes-100797290-large.jpg
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Study: Most enterprise IoT transactions are unencrypted)
[#]: via: (https://www.networkworld.com/article/3396647/study-most-enterprise-iot-transactions-are-unencrypted.html)
[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)

Study: Most enterprise IoT transactions are unencrypted
======
A Zscaler report finds 91.5% of IoT communications within enterprises are in plaintext and therefore susceptible to interference.
![HYWARDS / Getty Images][1]

Of the millions of enterprise-[IoT][2] transactions examined in a recent study, the vast majority were sent without benefit of encryption, leaving the data vulnerable to theft and tampering.

The research by cloud-based security provider Zscaler found that about 91.5 percent of transactions by internet of things devices took place over plaintext, while 8.5 percent were encrypted with [SSL][3]. That means if attackers could intercept the unencrypted traffic, they’d be able to read it and possibly alter it, then deliver it as if it had not been changed.

**[ For more on IoT security, see[our corporate guide to addressing IoT security concerns][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**

Researchers looked through one month’s worth of enterprise traffic traversing Zscaler’s cloud, seeking the digital footprints of IoT devices. They found and analyzed 56 million IoT-device transactions over that time, and identified the types of devices, the protocols they used, the servers they communicated with, how often communication went in and out, and general IoT traffic patterns.

The team tried to find out which devices generate the most traffic and the threats they face. It discovered that 1,015 organizations had at least one IoT device. The most common devices were set-top boxes (52 percent), then smart TVs (17 percent), wearables (8 percent), data-collection terminals (8 percent), printers (7 percent), IP cameras and phones (5 percent), and medical devices (1 percent).

While they represented only 8 percent of the devices, data-collection terminals generated 80 percent of the traffic.

The breakdown is that 18 percent of the IoT devices use SSL to communicate all the time; of the remaining 82 percent, half use it part of the time and half never use it.
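
Spelled out, that breakdown implies the following overall shares (a quick back-of-the-envelope calculation using only the percentages above):

```
import std.stdio;

void main()
{
    double always = 18.0;          // % of devices that always use SSL
    double rest = 100.0 - always;  // the remaining 82%
    double sometimes = rest / 2;   // half of the rest use SSL part of the time: 41%
    double never = rest / 2;       // the other half never use it: 41%

    writefln("always: %.0f%%, sometimes: %.0f%%, never: %.0f%%",
             always, sometimes, never);
}
```
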
The study also found cases of plaintext HTTP being used to authenticate devices and to update software and firmware, as well as the use of outdated crypto libraries and weak default credentials.

While IoT devices are common in enterprises, “many of the devices are employee owned, and this is just one of the reasons they are a security concern,” the report says. Without strict policies and enforcement, these devices represent potential vulnerabilities.

**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]**

Another reason employee-owned IoT devices are a concern is that many businesses don’t consider them a threat because no data is stored on them. But if the data they gather is transmitted insecurely, it is at risk.

### 5 tips to protect enterprise IoT

Zscaler recommends these security precautions:

  * Change default credentials to something more secure. As employees bring in devices, encourage them to use strong passwords and to keep their firmware current.
  * Isolate IoT devices on networks and restrict inbound and outbound network traffic.
  * Restrict access to IoT devices from external networks and block unnecessary ports from external access.
  * Apply regular security and firmware updates to IoT devices, and secure network traffic.
  * Deploy tools to gain visibility of shadow-IoT devices already inside the network so they can be protected.

**More on IoT:**

  * [What is edge computing and how it’s changing the network][7]
  * [Most powerful Internet of Things companies][8]
  * [10 Hot IoT startups to watch][9]
  * [The 6 ways to make money in IoT][10]
  * [What is digital twin technology? [and why it matters]][11]
  * [Blockchain, service-centric networking key to IoT success][12]
  * [Getting grounded in IoT networking and security][13]
  * [Building IoT-ready networks must become a priority][14]
  * [What is the Industrial IoT? [And why the stakes are so high]][15]

Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3396647/study-most-enterprise-iot-transactions-are-unencrypted.html

作者:[Tim Greene][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Tim-Greene/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/network_security_network_traffic_scanning_by_hywards_gettyimages-673891964_2400x1600-100796830-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/3045953/5-things-you-need-to-know-about-ssl.html
[4]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[7]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[8]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[9]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[10]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[11]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[12]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[13]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[14]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[15]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[16]: https://www.facebook.com/NetworkWorld/
[17]: https://www.linkedin.com/company/network-world
@ -1,680 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Analysing D Code with KLEE)
|
||||
[#]: via: (https://theartofmachinery.com/2019/05/28/d_and_klee.html)
|
||||
[#]: author: (Simon Arneaud https://theartofmachinery.com)
|
||||
|
||||
Analysing D Code with KLEE
|
||||
======
|
||||
|
||||
[KLEE][1] is symbolic execution engine that can rigorously verify or find bugs in software. It’s designed for C and C++, but it’s just an interpreter for LLVM bitcode combined with theorem prover backends, so it can work with bitcode generated by `ldc2`. One catch is that it needs a compatible bitcode port of the D runtime to run normal D code. I’m still interested in getting KLEE to work with normal D code, but for now I’ve done some experiments with `-betterC` D.
|
||||
|
||||
### How KLEE works
|
||||
|
||||
What makes KLEE special is its support for two kinds of variables: concrete and symbolic. Concrete variables are just like the normal variables in normal code: they have a deterministic value at any given point in the program. On the other hand, symbolic variables contain a bundle of logical constraints instead of values. Take this code:
|
||||
|
||||
```
|
||||
int x = klee_int("x");
|
||||
klee_assume(x >= 0);
|
||||
if (x > 42)
|
||||
{
|
||||
doA(x);
|
||||
}
|
||||
else
|
||||
{
|
||||
doB(x);
|
||||
assert (3 * x != 21);
|
||||
}
|
||||
```
|
||||
|
||||
`klee_int("x")` creates a symbolic integer that will be called “`x`” in output reports. Initially it has no contraints and can imply any value that a 32b signed integer can have. `klee_assume(x >= 0)` tells KLEE to add `x >= 0` as a constraint, so now we’re only analysing the code for non-negative 32b signed integers. On hitting the `if`, KLEE checks if both branches are possible. Sure enough, `x > 42` can be true or false even with the constraint `x >= 0`, so KLEE has to _fork_. We now have two processes being interpreted on the VM: one executing `doA()` while `x` holds the constraints `x >= 0, x > 42`, and another executing `doB()` while `x` holds the contraints `x >= 0, x <= 42`. The second process will hit the `assert` statement, and KLEE will try to prove or disprove `3 * x != 21` using the assumptions `x >= 0, x <= 42` — in this case it will disprove it and report a bug with `x = 7` as a crashing example.
|
||||
|
||||
### First steps
|
||||
|
||||
Here’s a toy example just to get things working. Suppose we have a function that makes an assumption for a performance optimisation. Thankfully the assumption is made explicit with `assert` and is documented with a comment. Is the assumption valid?
|
||||
|
||||
```
|
||||
int foo(int x)
|
||||
{
|
||||
// 17 is a prime number, so let's use it as a sentinel value for an awesome optimisation
|
||||
assert (x * x != 17);
|
||||
// ...
|
||||
return x;
|
||||
}
|
||||
```
|
||||
|
||||
Here’s a KLEE test rig. The KLEE function declarations and the `main()` entry point need to have `extern(C)` linkage, but anything else can be normal D code as long as it compiles under `-betterC`:
|
||||
|
||||
```
|
||||
extern(C):
|
||||
|
||||
int klee_int(const(char*) name);
|
||||
|
||||
int main()
|
||||
{
|
||||
int x = klee_int("x");
|
||||
foo(x);
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
It turns out there’s just one (frustrating) complication with running `-betterC` D under KLEE. In D, `assert` is handled specially by the compiler. By default, it throws an `Error`, but for compatibility with KLEE, I’m using the `-checkaction=C` flag. In C, `assert` is usually a macro that translates to code that calls some backend implementation. That implementation isn’t standardised, so of course various C libraries work differently. `ldc2` actually has built-in logic for implementing `-checkaction=C` correctly depending on the C library used.
|
||||
|
||||
KLEE uses a port of [uClibc][2], which translates `assert()` to a four-parameter `__assert()` function, which conflicts with the three-parameter `__assert()` function in other implementations. `ldc2` uses LLVM’s (target) `Triple` type for choosing an `assert()` implementation configuration, but that doesn’t recognise uClibc. As a hacky workaround, I’m telling `ldc2` to compile for Musl, which “tricks” it into using an `__assert_fail()` implementation that KLEE happens to support as well. I’ve opened [an issue report][3].
|
||||
|
||||
Anyway, if we put all that code above into a file, we can compile it to KLEE-ready bitcode like this:
|
||||
|
||||
```
|
||||
ldc2 -g -checkaction=C -mtriple=x86_64-linux-musl -output-bc -betterC -c first.d
|
||||
```
|
||||
|
||||
`-g` is optional, but adds debug information that can be useful for later analysis. The KLEE developers recommend disabling compiler optimisations and letting KLEE do its own optimisations instead.
|
||||
|
||||
Now to run KLEE:
|
||||
|
||||
```
|
||||
$ klee first.bc
|
||||
KLEE: output directory is "/tmp/klee-out-1"
|
||||
KLEE: Using Z3 solver backend
|
||||
warning: Linking two modules of different target triples: klee_int.bc' is 'x86_64-pc-linux-gnu' whereas 'first.bc' is 'x86_64--linux-musl'
|
||||
|
||||
KLEE: ERROR: first.d:4: ASSERTION FAIL: x * x != 17
|
||||
KLEE: NOTE: now ignoring this error at this location
|
||||
|
||||
KLEE: done: total instructions = 35
|
||||
KLEE: done: completed paths = 2
|
||||
KLEE: done: generated tests = 2
|
||||
```

Straight away, KLEE has found two execution paths through the program: a happy path, and a path that fails the assertion. Let's see the results:

```
$ ls klee-last/
assembly.ll
info
messages.txt
run.istats
run.stats
run.stats-journal
test000001.assert.err
test000001.kquery
test000001.ktest
test000002.ktest
warnings.txt
```

Here's the example that triggers the happy path:

```
$ ktest-tool klee-last/test000002.ktest
ktest file : 'klee-last/test000002.ktest'
args : ['first.bc']
num objects: 1
object 0: name: 'x'
object 0: size: 4
object 0: data: b'\x00\x00\x00\x00'
object 0: hex : 0x00000000
object 0: int : 0
object 0: uint: 0
object 0: text: ....
```

Here's the example that causes an assertion error:

```
$ cat klee-last/test000001.assert.err
Error: ASSERTION FAIL: x * x != 17
File: first.d
Line: 4
assembly.ll line: 32
Stack:
#000000032 in _D5first3fooFiZi () at first.d:4
#100000055 in main (=1, =94262044506880) at first.d:16
$ ktest-tool klee-last/test000001.ktest
ktest file : 'klee-last/test000001.ktest'
args : ['first.bc']
num objects: 1
object 0: name: 'x'
object 0: size: 4
object 0: data: b'\xe9&\xd33'
object 0: hex : 0xe926d333
object 0: int : 869476073
object 0: uint: 869476073
object 0: text: .&.3
```

So, KLEE has deduced that when `x` is 869476073, `x * x` overflows 32-bit arithmetic to exactly 17 and breaks the code.
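
If you want to double-check that arithmetic without KLEE, here's a quick plain-D sanity check (my own snippet, not from the KLEE output; it relies on D's defined two's-complement wraparound):

```
void main()
{
    import core.stdc.stdio : printf;

    int x = 869476073;
    // The full 64-bit product is 755988641519501329, and
    // 755988641519501329 mod 2^32 == 17, so the 32-bit result wraps to 17.
    printf("%d\n", x * x);  // prints 17
}
```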

It's overkill for this simple example, but `run.istats` can be opened with [KCachegrind][4] to view things like call graphs and source code coverage. (Unfortunately, coverage stats can be misleading because correct code won't ever hit the boundary-check code inserted by the compiler.)

### MurmurHash preimage

Here's a slightly more useful example. D currently uses 32-bit MurmurHash3 as its standard non-cryptographic hash function. What if we want to find strings that hash to a given special value? In general, we can solve problems like this by asserting that something doesn't exist (i.e., a string that hashes to a given value) and then challenging the theorem prover to prove us wrong with a counterexample.

Unfortunately, we can't just use `hashOf()` directly without the runtime, but we can copy [the hash code from the runtime source][5] into its own module, and then import it into a test rig like this:

```
import dhash;

extern(C):

void klee_make_symbolic(void* addr, size_t nbytes, const(char*) name);
int klee_assume(ulong condition);

int main()
{
    // Create a buffer for 8-letter strings and let KLEE manage it symbolically
    char[8] s;
    klee_make_symbolic(s.ptr, s.sizeof, "s");

    // Constrain the string to be letters from a to z for convenience
    foreach (j; 0..s.length)
    {
        klee_assume(s[j] > 'a' && s[j] <= 'z');
    }

    assert (dHash(cast(ubyte[])s) != 0xdeadbeef);
    return 0;
}
```
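
In case you're wondering what goes into `dhash.d`: the article copies druntime's exact code, but the underlying algorithm is the public 32-bit MurmurHash3 (x86 variant). Here's my own from-scratch sketch of it for illustration; `murmur3_32` is a made-up name, and since druntime structures its implementation differently, byte-for-byte hash compatibility with `hashOf()` isn't guaranteed:

```
uint murmur3_32(scope const(ubyte)[] data, uint seed = 0) @nogc nothrow pure
{
    enum uint c1 = 0xcc9e2d51;
    enum uint c2 = 0x1b873593;
    uint h = seed;
    size_t i = 0;

    // Body: mix each 4-byte little-endian block into the hash state
    for (; i + 4 <= data.length; i += 4)
    {
        uint k = data[i] | (data[i+1] << 8) | (data[i+2] << 16) | (data[i+3] << 24);
        k *= c1;
        k = (k << 15) | (k >> 17);
        k *= c2;
        h ^= k;
        h = (h << 13) | (h >> 19);
        h = h * 5 + 0xe6546b64;
    }

    // Tail: mix in the remaining 0-3 bytes
    uint k = 0;
    switch (data.length & 3)
    {
        case 3: k ^= data[i+2] << 16; goto case;
        case 2: k ^= data[i+1] << 8; goto case;
        case 1: k ^= data[i];
                k *= c1;
                k = (k << 15) | (k >> 17);
                k *= c2;
                h ^= k;
                break;
        default: break;
    }

    // Finalisation: avalanche the bits
    h ^= cast(uint) data.length;
    h ^= h >> 16;
    h *= 0x85ebca6b;
    h ^= h >> 13;
    h *= 0xc2b2ae35;
    h ^= h >> 16;
    return h;
}
```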

Here's how to compile and run it. Because we're not checking correctness, we can use `-boundscheck=off` for a slight performance boost. It's also worth enabling KLEE's optimiser.

```
$ ldc2 -g -boundscheck=off -checkaction=C -mtriple=x86_64-linux-musl -output-bc -betterC -c dhash.d dhash_klee.d
$ llvm-link -o dhash_test.bc dhash.bc dhash_klee.bc
$ klee -optimize dhash_test.bc
```

It takes just over 4s:

```
$ klee-stats klee-last/
-------------------------------------------------------------------------
| Path | Instrs| Time(s)| ICov(%)| BCov(%)| ICount| TSolver(%)|
-------------------------------------------------------------------------
|klee-last/| 168| 4.37| 87.50| 50.00| 160| 99.95|
-------------------------------------------------------------------------
```

And it actually works:

```
$ ktest-tool klee-last/test000001.ktest
ktest file : 'klee-last/test000001.ktest'
args : ['dhash_test.bc']
num objects: 1
object 0: name: 's'
object 0: size: 8
object 0: data: b'psgmdxvq'
object 0: hex : 0x7073676d64787671
object 0: int : 8175854546265273200
object 0: uint: 8175854546265273200
object 0: text: psgmdxvq
$ rdmd --eval 'writef("%x\n", hashOf("psgmdxvq"));'
deadbeef
```

For comparison, here's a simple brute force version in plain D:

```
import std.stdio;

void main()
{
    char[8] buffer;

    // Depth-first search: try every letter in each position until the hash matches
    bool find(size_t idx)
    {
        if (idx == buffer.length)
        {
            auto hash = hashOf(buffer[]);
            if (hash == 0xdeadbeef)
            {
                writeln(buffer[]);
                return true;
            }
            return false;
        }

        foreach (char c; 'a'..'z')
        {
            buffer[idx] = c;
            auto is_found = find(idx + 1);
            if (is_found) return true;
        }

        return false;
    }

    find(0);
}
```

This takes ~17s:

```
$ ldc2 -O3 -boundscheck=off hash_brute.d
$ time ./hash_brute
aexkaydh

real    0m17.398s
user    0m17.397s
sys     0m0.001s
$ rdmd --eval 'writef("%x\n", hashOf("aexkaydh"));'
deadbeef
```

The constraint-solver version is simpler to write, yet still faster, because KLEE can do smarter things than computing the hash of every candidate string from scratch on each iteration.

### Binary search

Now for an example of testing and debugging. Here's an implementation of [binary search][6]:

```
bool bsearch(const(int)[] haystack, int needle)
{
    while (haystack.length)
    {
        auto mid_idx = haystack.length / 2;
        if (haystack[mid_idx] == needle) return true;
        if (haystack[mid_idx] < needle)
        {
            haystack = haystack[mid_idx..$];
        }
        else
        {
            haystack = haystack[0..mid_idx];
        }
    }
    return false;
}
```

Does it work? Here's a test rig:

```
extern(C):

void klee_make_symbolic(void* addr, size_t nbytes, const(char*) name);
int klee_range(int begin, int end, const(char*) name);
int klee_assume(ulong condition);

int main()
{
    // Making an array arr and an x to find in it.
    // This time we'll also parameterise the array length.
    // We have to apply klee_make_symbolic() to the whole buffer because of limitations in KLEE.
    int[8] arr_buffer;
    klee_make_symbolic(arr_buffer.ptr, arr_buffer.sizeof, "a");
    int len = klee_range(0, arr_buffer.length+1, "len");
    auto arr = arr_buffer[0..len];
    // Keeping the values in [0, 32) makes the output easier to read.
    // (The binary-friendly limit 32 is slightly more efficient than 30.)
    int x = klee_range(0, 32, "x");
    foreach (j; 0..arr.length)
    {
        klee_assume(arr[j] >= 0);
        klee_assume(arr[j] < 32);
    }

    // Make the array sorted.
    // We don't have to actually sort the array.
    // We can just tell KLEE to constrain it to be sorted.
    foreach (j; 1..arr.length)
    {
        klee_assume(arr[j - 1] <= arr[j]);
    }

    // Test against simple linear search
    bool has_x = false;
    foreach (a; arr[])
    {
        has_x |= a == x;
    }

    assert (bsearch(arr, x) == has_x);

    return 0;
}
```

When run in KLEE, it keeps running for a long, long time. How do we know it's doing anything? By default KLEE writes stats every 1s, so we can watch the live progress in another terminal:

```
$ watch klee-stats --print-more klee-last/
Every 2.0s: klee-stats --print-more klee-last/

---------------------------------------------------------------------------------------------------------------------
| Path | Instrs| Time(s)| ICov(%)| BCov(%)| ICount| TSolver(%)| States| maxStates| Mem(MB)| maxMem(MB)|
---------------------------------------------------------------------------------------------------------------------
|klee-last/| 5834| 637.27| 79.07| 68.75| 172| 100.00| 22| 22| 24.51| 24|
---------------------------------------------------------------------------------------------------------------------
```

`bsearch()` should be pretty fast, so we should see KLEE discovering new states rapidly. But instead it seems to be stuck. [At least one fork of KLEE has heuristics for detecting infinite loops][7], but plain KLEE doesn't. There are timeout and batching options for making KLEE work better with code that might have infinite loops.
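
For example, something like this caps the total running time (`-max-time` is a real KLEE option, but its argument format has varied between KLEE versions, so check `klee -help` before copying this):

```
$ klee -optimize -max-time=60 bsearch.bc
```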

But let's just take another look at the code. In particular, the loop condition:

```
while (haystack.length)
{
    // ...
}
```

Binary search is supposed to reduce the search space by about half each iteration. `haystack.length` is an unsigned integer, so the loop is guaranteed to terminate as long as the length strictly decreases on every iteration. Let's rewrite the code slightly so we can verify if that's true:

```
bool bsearch(const(int)[] haystack, int needle)
{
    while (haystack.length)
    {
        auto mid_idx = haystack.length / 2;
        if (haystack[mid_idx] == needle) return true;
        const(int)[] next_haystack;
        if (haystack[mid_idx] < needle)
        {
            next_haystack = haystack[mid_idx..$];
        }
        else
        {
            next_haystack = haystack[0..mid_idx];
        }
        // This lets us verify that the search terminates
        assert (next_haystack.length < haystack.length);
        haystack = next_haystack;
    }
    return false;
}
```

Now KLEE can find the bug!

```
$ klee -optimize bsearch.bc
KLEE: output directory is "/tmp/klee-out-2"
KLEE: Using Z3 solver backend
warning: Linking two modules of different target triples: klee_range.bc' is 'x86_64-pc-linux-gnu' whereas 'bsearch.bc' is 'x86_64--linux-musl'

warning: Linking two modules of different target triples: memset.bc' is 'x86_64-pc-linux-gnu' whereas 'bsearch.bc' is 'x86_64--linux-musl'

KLEE: ERROR: bsearch.d:18: ASSERTION FAIL: next_haystack.length < haystack.length
KLEE: NOTE: now ignoring this error at this location

KLEE: done: total instructions = 2281
KLEE: done: completed paths = 42
KLEE: done: generated tests = 31
```

Using the failing example as input and stepping through the code, it's easy to find the problem:

```
/// ...
if (haystack[mid_idx] < needle)
{
    // If mid_idx == 0, next_haystack is the same as haystack
    // Nothing changes, so the loop keeps repeating
    next_haystack = haystack[mid_idx..$];
}
/// ...
```

Thinking about it, the `if` statement already excludes `haystack[mid_idx]` from being `needle`, so there's no reason to include it in `next_haystack`. Here's the fix:

```
// The +1 matters
next_haystack = haystack[mid_idx+1..$];
```
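
For reference, here's the whole repaired function assembled from the pieces above (my consolidated listing; it keeps the termination assert, which is now guaranteed to pass, since both slices are strictly shorter than the original):

```
bool bsearch(const(int)[] haystack, int needle)
{
    while (haystack.length)
    {
        auto mid_idx = haystack.length / 2;
        if (haystack[mid_idx] == needle) return true;
        const(int)[] next_haystack;
        if (haystack[mid_idx] < needle)
        {
            // The +1 excludes the middle element, so the slice always shrinks
            next_haystack = haystack[mid_idx+1..$];
        }
        else
        {
            next_haystack = haystack[0..mid_idx];
        }
        assert (next_haystack.length < haystack.length);
        haystack = next_haystack;
    }
    return false;
}
```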

But is the code correct now? Terminating isn't enough; it needs to get the right answer, of course.

```
$ klee -optimize bsearch.bc
KLEE: output directory is "/tmp/klee-out-3"
KLEE: Using Z3 solver backend
warning: Linking two modules of different target triples: klee_range.bc' is 'x86_64-pc-linux-gnu' whereas 'bsearch.bc' is 'x86_64--linux-musl'

warning: Linking two modules of different target triples: memset.bc' is 'x86_64-pc-linux-gnu' whereas 'bsearch.bc' is 'x86_64--linux-musl'

KLEE: done: total instructions = 3152
KLEE: done: completed paths = 81
KLEE: done: generated tests = 81
```

In just under 7s, KLEE has verified every possible execution path reachable with arrays of length 0 to 8. Note that this isn't just coverage of individual code lines, but coverage of full pathways through the code. KLEE hasn't ruled out stack corruption or integer overflows with large arrays, but I'm pretty confident the code is correct now.

KLEE has generated test cases that trigger each path, which we can keep and use as a faster-than-7s regression test suite. Trouble is, the output from KLEE loses all type information and isn't in a convenient format:

```
$ ktest-tool klee-last/test000042.ktest
ktest file : 'klee-last/test000042.ktest'
args : ['bsearch.bc']
num objects: 3
object 0: name: 'a'
object 0: size: 32
object 0: data: b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
object 0: hex : 0x0000000000000000000000000000000001000000100000000000000000000000
object 0: text: ................................
object 1: name: 'x'
object 1: size: 4
object 1: data: b'\x01\x00\x00\x00'
object 1: hex : 0x01000000
object 1: int : 1
object 1: uint: 1
object 1: text: ....
object 2: name: 'len'
object 2: size: 4
object 2: data: b'\x06\x00\x00\x00'
object 2: hex : 0x06000000
object 2: int : 6
object 2: uint: 6
object 2: text: ....
```

But we can write our own pretty-printing code and put it at the end of the test rig:

```
// Needs `import core.stdc.stdio;` and an extern(C) declaration of
// `int klee_get_value_i32(int expr);` alongside the earlier declarations
char[256] buffer;
char* output = buffer.ptr;
output += sprintf(output, "TestCase([");
foreach (a; arr[])
{
    output += sprintf(output, "%d, ", klee_get_value_i32(a));
}
sprintf(output, "], %d, %s),\n", klee_get_value_i32(x), klee_get_value_i32(has_x) ? "true".ptr : "false".ptr);
fputs(buffer.ptr, stdout);
```

Ugh. In full D, that would be just one format call with the `%(...%)` array formatting specs. The output needs to be buffered up and printed all at once to stop output from different parallel executions getting mixed up, and `klee_get_value_i32()` is needed to get a concrete example from a symbolic variable (remember that a symbolic variable is just a bundle of constraints).
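
For the record, here's roughly what that one format call looks like in full, runtime-enabled D (my snippet, for illustration only; it can't be used in the `-betterC` rig):

```
import std.stdio;

void main()
{
    int[] arr = [1, 16];
    int x = 1;
    bool has_x = true;
    // The %( ... %) compound spec formats each element and joins them with ", "
    writefln("TestCase([%(%d, %)], %d, %s),", arr, x, has_x);
    // Output: TestCase([1, 16], 1, true),
}
```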

```
$ klee -optimize bsearch.bc > tests.d
...
$ # Sure enough, 81 test cases
$ wc -l tests.d
81 tests.d
$ # Look at the first 10
$ head tests.d
TestCase([], 0, false),
TestCase([0, ], 0, true),
TestCase([16, ], 1, false),
TestCase([0, ], 1, false),
TestCase([0, 0, ], 0, true),
TestCase([0, 0, ], 1, false),
TestCase([1, 16, ], 1, true),
TestCase([0, 0, 0, ], 0, true),
TestCase([16, 16, ], 1, false),
TestCase([1, 16, ], 3, false),
```

Nice! An autogenerated regression test suite that's better than anything I would write by hand. This is my favourite use case for KLEE.
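
What consuming those generated cases might look like, sketched under the assumption of a `TestCase` struct matching the generated lines (the struct and harness are my invention, not from the article):

```
struct TestCase
{
    immutable(int)[] haystack;
    int needle;
    bool expected;
}

// Paste the generated lines into this literal (trailing commas are valid D)
static immutable TestCase[] cases = [
    TestCase([], 0, false),
    TestCase([0, ], 0, true),
    TestCase([16, ], 1, false),
    // ...
];

unittest
{
    foreach (t; cases)
    {
        assert(bsearch(t.haystack, t.needle) == t.expected);
    }
}
```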

### Change counting

One last example:

In Australia, coins come in 5c, 10c, 20c, 50c, $1 (100c) and $2 (200c) denominations. So you can make 70c using 14 5c coins, or using a 50c coin and a 20c coin. Obviously, fewer coins is usually more convenient. There's a simple [greedy algorithm][8] to make a small pile of coins that adds up to a given value: just keep adding the biggest coin you can to the pile until you've reached the target value. It turns out this trick is optimal, at least for Australian coins. Is it always optimal for any set of coin denominations?

The hard thing about testing optimality is that you don't know what the correct optimal values are without a known-good algorithm. Without a constraints solver, I'd compare the output of the greedy algorithm against some obviously correct brute force optimiser run over all possible cases within some small-enough limit. But with KLEE, we can use a different approach: comparing the greedy solution to a non-deterministic solution.

The greedy algorithm takes the list of coin denominations and the target value as input, so (as in the previous examples) we make those symbolic. Then we make another symbolic array that represents an assignment of coin counts to each coin denomination. We don't specify anything about how to generate this assignment, but we constrain it to be a valid assignment that adds up to the target value. It's [non-deterministic][9]. Then we just assert that the total number of coins in the non-deterministic assignment is at least the number of coins needed by the greedy algorithm, which would be true if the greedy algorithm were universally optimal. Finally, we ask KLEE to prove the program correct or incorrect.

Here's the code:

```
// Greedily break value into coins of values in denominations
// denominations must be in strictly decreasing order
int greedy(const(int[]) denominations, int value, int[] coins_used_output)
{
    int num_coins = 0;
    foreach (j; 0..denominations.length)
    {
        int num_to_use = value / denominations[j];
        coins_used_output[j] = num_to_use;
        num_coins += num_to_use;
        value = value % denominations[j];
    }
    return num_coins;
}

extern(C):

void klee_make_symbolic(void* addr, size_t nbytes, const(char*) name);
int klee_int(const(char*) name);
int klee_assume(ulong condition);
int klee_get_value_i32(int expr);

int main(int argc, char** argv)
{
    enum kNumDenominations = 6;
    int[kNumDenominations] denominations, coins_used;
    klee_make_symbolic(denominations.ptr, denominations.sizeof, "denominations");

    // We're testing the algorithm itself, not implementation issues like integer overflow
    // Keep values small
    foreach (d; denominations)
    {
        klee_assume(d >= 1);
        klee_assume(d <= 1024);
    }
    // Make the smallest denomination 1 so that all values can be represented
    // This is just for simplicity so we can focus on optimality
    klee_assume(denominations[$-1] == 1);

    // Greedy algorithm expects values in descending order
    foreach (j; 1..denominations.length)
    {
        klee_assume(denominations[j-1] > denominations[j]);
    }

    // What we're going to represent
    auto value = klee_int("value");

    auto num_coins = greedy(denominations[], value, coins_used[]);

    // The non-deterministic assignment
    int[kNumDenominations] nd_coins_used;
    klee_make_symbolic(nd_coins_used.ptr, nd_coins_used.sizeof, "nd_coins_used");

    int nd_num_coins = 0, nd_value = 0;
    foreach (j; 0..kNumDenominations)
    {
        klee_assume(nd_coins_used[j] >= 0);
        klee_assume(nd_coins_used[j] <= 1024);
        nd_num_coins += nd_coins_used[j];
        nd_value += nd_coins_used[j] * denominations[j];
    }

    // Making the assignment valid is 100% up to KLEE
    klee_assume(nd_value == value);

    // If we find a counterexample, dump it and fail
    if (nd_num_coins < num_coins)
    {
        import core.stdc.stdio;

        puts("Counterexample found.");

        puts("Denominations:");
        foreach (ref d; denominations)
        {
            printf("%d ", klee_get_value_i32(d));
        }
        printf("\nValue: %d\n", klee_get_value_i32(value));

        void printAssignment(const ref int[kNumDenominations] coins)
        {
            foreach (j; 0..kNumDenominations)
            {
                printf("%d * %dc\n", klee_get_value_i32(coins[j]), klee_get_value_i32(denominations[j]));
            }
        }

        printf("Greedy \"optimum\": %d\n", klee_get_value_i32(num_coins));
        printAssignment(coins_used);

        printf("Better assignment for %d total coins:\n", klee_get_value_i32(nd_num_coins));
        printAssignment(nd_coins_used);
        assert (false);
    }

    return 0;
}
```

And here's the counterexample it found after 14s:

```
Counterexample found.
Denominations:
129 12 10 3 2 1
Value: 80
Greedy "optimum": 9
0 * 129c
6 * 12c
0 * 10c
2 * 3c
1 * 2c
0 * 1c
Better assignment for 8 total coins:
0 * 129c
0 * 12c
8 * 10c
0 * 3c
0 * 2c
0 * 1c
```

Note that this isn't proven to be the new optimum; it's just a witness that the greedy algorithm isn't always optimal. There's a well-known [dynamic programming][10] [solution][11] that always works.
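
The article doesn't show it, but for completeness, here's a sketch of that dynamic programming approach in plain D (`minCoins` is my own name for it):

```
int minCoins(const(int)[] denominations, int value)
{
    import std.algorithm.comparison : min;

    // best[v] = fewest coins that sum to v, or int.max if v is unreachable
    auto best = new int[](value + 1);
    best[] = int.max;
    best[0] = 0;
    foreach (v; 1 .. value + 1)
    {
        foreach (d; denominations)
        {
            if (d <= v && best[v - d] != int.max)
            {
                best[v] = min(best[v], best[v - d] + 1);
            }
        }
    }
    return best[value];
}
```

For the counterexample above, `minCoins([129, 12, 10, 3, 2, 1], 80)` returns 7 (five 12c coins plus two 10c coins), so even KLEE's 8-coin witness wasn't the true optimum; it was only required to beat the greedy answer.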

### What's next?

As I said, I'm interested in getting this to work with full D code. I'm also interested in trying [one of the floating point forks of KLEE][12] on some D code, because floating point is much harder to test thoroughly than integer and string code.

--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2019/05/28/d_and_klee.html

Author: [Simon Arneaud][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://klee.github.io/
[2]: https://www.uclibc.org/
[3]: https://github.com/ldc-developers/ldc/issues/3078
[4]: https://kcachegrind.github.io/html/Home.html
[5]: https://github.com/dlang/druntime/blob/4ad638f61a9b4a98d8ed6eb9f9429c0ef6afc8e3/src/core/internal/hash.d#L670
[6]: https://www.calhoun.io/lets-learn-algorithms-an-intro-to-binary-search/
[7]: https://github.com/COMSYS/SymbolicLivenessAnalysis
[8]: https://en.wikipedia.org/wiki/Greedy_algorithm
[9]: http://people.clarkson.edu/~alexis/PCMI/Notes/lectureB03.pdf
[10]: https://www.algorithmist.com/index.php/Dynamic_Programming
[11]: https://www.topcoder.com/community/competitive-programming/tutorials/dynamic-programming-from-novice-to-advanced/
[12]: https://github.com/srg-imperial/klee-float
@ -1,121 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managed WAN and the cloud-native SD-WAN)
[#]: via: (https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)

Managed WAN and the cloud-native SD-WAN
======
The motivation for WAN transformation is clear: today, organizations require improved internet access and last-mile connectivity, additional bandwidth, and a reduction in WAN costs.
![Gerd Altmann \(CC0\)][1]

In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud presence across on-premise data centers and remote site locations.

The vast majority of organizations that I have consulted with have over 10 locations. It is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.

A WAN transformation project requires this diversity to be taken into consideration when choosing an SD-WAN vendor that can satisfy both networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity; there are many more related aspects.

**[ Related: [MPLS explained – What you need to know about multi-protocol label switching][2] ]**

### Motivations for transforming the WAN

The motivation for WAN transformation is clear: today, organizations want improved internet access and last-mile connectivity, additional bandwidth, and a reduction in WAN costs. Replacing Multiprotocol Label Switching (MPLS) with SD-WAN has of course been the main driver of the SD-WAN evolution, but it is only a single piece of the jigsaw puzzle.

Many SD-WAN vendors are quickly brought to their knees when they try to address security and provide direct internet access from remote site locations. The problem is how to ensure optimized cloud access that is secure, offers improved visibility, and delivers predictable performance without the high costs associated with MPLS. SD-WAN is not just about connecting locations; primarily, it needs to combine many other important network and security elements into one seamless worldwide experience.

According to a recent report from [Cato Networks][3] surveying enterprise IT managers, a staggering 85% will confront use cases in 2019 that are poorly addressed or outright ignored by SD-WAN. Examples include providing secure internet access from any location (50%) and improving visibility into, and control over, mobile access to cloud applications such as Office 365 (46%).

### Issues with traditional SD-WAN vendors

First and foremost, SD-WAN alone is unable to address the security challenges that arise during WAN transformation. Such challenges include protection against malware and ransomware and implementing the necessary security policies. There is also a lack of the visibility required to police mobile users and remote site locations accessing resources in the public cloud.

To combat this, organizations have to purchase additional equipment. There has always been, and will always be, a high cost associated with buying such security appliances. Furthermore, the additional tools needed to protect remote site locations increase network complexity and reduce visibility. Let's not forget that this variety of physical appliances requires talented engineers for design, deployment and maintenance.

There will often be a single network cowboy. This means the network and security configuration, along with the design essentials, are stored in the mind of the engineer, not in a central database from which the knowledge can be accessed if the engineer leaves his or her employment.

The physical appliance approach to SD-WAN makes it hard, if not impossible, to accommodate the future. If the current SD-WAN vendors continue to focus just on connecting devices with physical appliances, they will have limited ability to accommodate, for example, the future of networked IoT devices. With these factors in mind, what options are available to overcome the SD-WAN shortcomings?

One can opt for a do-it-yourself (DIY) solution or a managed service. The latter has traditionally fallen into the category of telcos, with the improvements of either co-managed or self-managed service models.

### Option 1: The DIY solution

First, the DIY route. From the experience of trying to stitch together a global network, this is not only costly but also complex, and it is a very constrained approach to network transformation. We started with physical appliances decades ago, and that was sufficient to an extent. It worked because it suited the requirements of the time, but our environment has changed since then, and we need to accommodate the current requirements.

Even back in those days, we always had a breachable perimeter. The perimeter approach to networking and security never really worked, and it was just a matter of time before a bad actor penetrated the guarded walls.

Securing a global network involves more than just firewalling the devices. A solid security perimeter requires URL filtering, anti-malware and IPS to secure the internet traffic. If you try to deploy all these functions in a single device, such as a unified threat management (UTM) appliance, you will hit scaling problems. As a result, you will be left with appliance sprawl.

Back in my early days as an engineer, I recall stitching together a global network with a mixture of security and network appliances from a variety of vendors. It was me and just two others who used to get the job done on time, and for a production network, our uptime levels were superior to most.

However, it involved too many late nights and daily flights to our PoPs, and of course the major changes required a forklift. A lot of work had to be done at that time, which made me want to push some or most of the work to a third party.

### Option 2: The managed service solution

Today, there is a growing need for the managed service approach to SD-WAN. Notably, it simplifies network design, deployment and maintenance activities while offloading the complexity, in line with what most CIOs are talking about today.

A managed service provides a number of benefits, such as the elimination of backhauling to centralized cloud connectors or VPN concentrators. Evidently, backhauling is never favored by a network architect. More often than not, it results in increased latency, congested links, internet chokepoints, and last-mile outages.

A managed service can also authenticate mobile users at the local communication hub rather than at a centralized point, which would increase latency. So what options are available when considering a managed service?

### Telcos: An average service level

Let's be honest, telcos have a mixed track record, and enterprises rely on them with caution. Essentially, you are building a network with third-party appliances and services that put the technical expertise outside of the organization.

Secondly, the telco must orchestrate, monitor and manage numerous technical domains, which is likely to introduce further complexity. As a result, troubleshooting requires close coordination with the suppliers, which has an impact on the customer experience.

### Time equals money

Resolving a query can easily take two or three attempts; it's rare that you get to the right person straight away. This increases the time to resolve problems. Even for a minor feature change, you have to open tickets. Hence, with telcos, everything takes longer to solve.

In addition, it takes time to make major network changes such as opening new locations, which can take up to 45 days. In the same report mentioned above, 71% of the respondents were frustrated with the telco customer service time to resolve problems, 73% indicated that deploying new locations requires at least 15 days, and 47% claimed that "high bandwidth costs" is their biggest frustration while working with telcos.

When it comes to lead times for projects, an engineer does not care. Does a project manager care if you have an optimum network design? No, many don't; most just care about the timeframes. During my career, now spanning 18 years, I have never seen comments from any of my contacts saying "you must adhere to your project manager's timelines".

However, in practice the project managers have their ways, and lead times do become a big part of your daily job. So as an engineer, a 45-day lead time will certainly hit your brand hard, especially if you are an external consultant.

There is also a problem with bandwidth costs. Telcos need to charge heavily due to their complexity. There is always going to be a series of problems when working with them. Let's face it: they offer an average service level.

### Co-management and self-service management

What is needed is a service that brings the visibility and control of DIY to managed services. This, ultimately, opens the door to co-management and self-service management.

Co-management allows both the telco and the enterprise to make changes to the WAN. Then we have self-service management of the WAN, which gives enterprises sole control over aspects of their network.

However, these are just sticking plasters covering up the flaws. We need a managed service that not only connects locations but also synthesizes site connectivity with security, mobile access, and cloud access.

### Introducing the cloud-native approach to SD-WAN

There should be a new style of managed service that combines the best of both worlds. It should offer the uptime, predictability and reach of the best telcos along with the cost structure and versatility of cloud providers. All such requirements can be met by what is known as the cloud-native carrier.

Therefore, we should be looking for a platform that can connect and secure all users and resources at scale, no matter where they are positioned. Such a platform will limit costs while increasing velocity and agility.

This is what a cloud-native carrier can offer you. You could say it's a new kind of managed service, which is what enterprises are now looking for. A cloud-native carrier service brings the best of cloud services to the world of networking. This new style of managed service brings to SD-WAN the global reach, self-service, and agility of the cloud, with the ability to easily migrate from MPLS.

In summary, a cloud-native carrier service will improve global connectivity to on-premises and cloud applications, enable secure branch-to-internet access, and integrate cloud data centers both securely and optimally.

**This article is published as part of the IDG Contributor Network. [Want to Join?][4]**

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html

Author: [Matt Conran][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/network-wan-100713693-large.jpg
[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
[3]: https://www.catonetworks.com/news/digital-transformation-survey
[4]: /contributor-network/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,69 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Moving to the Cloud? SD-WAN Matters!)
[#]: via: (https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)

Moving to the Cloud? SD-WAN Matters!
======

![istock][1]

This is the first in a two-part blog series that explores how enterprises can realize the full transformation promise of the cloud by shifting to a business-first networking model powered by a business-driven [SD-WAN][2]. The focus for this installment is on automating secure IPsec connectivity and intelligently steering traffic to cloud providers.

Over the past several years we've seen a major shift in data center strategies, with enterprise IT organizations moving applications and workloads to the cloud, whether private or public. More and more, enterprises are leveraging software-as-a-service (SaaS) applications and infrastructure-as-a-service (IaaS) cloud services from leading providers like [Amazon AWS][3], [Google Cloud][4], [Microsoft Azure][5] and [Oracle Cloud Infrastructure][6]. This represents a dramatic shift in enterprise data traffic patterns, as fewer and fewer applications are hosted within the walls of the traditional corporate data center.

There are several drivers for the shift to IaaS cloud services and SaaS apps, but business agility tops the list for most enterprises. The traditional IT model for provisioning and deprovisioning applications is rigid and inflexible and is no longer able to keep pace with changing business needs.

According to [LogicMonitor's Cloud Vision 2020][7] study, more than 80 percent of enterprise workloads will run in the cloud by 2020, with more than 40 percent running on public cloud platforms. This major shift in the application consumption model is having a huge [impact on organizations and infrastructure][8]. A recent CNBC article, "[How Amazon Web Services is luring banks to the cloud][9]," reported that some companies have already migrated all of their applications and IT workloads to public cloud infrastructures. Interestingly, many enterprises that must comply with stringent regulatory mandates such as PCI-DSS or HIPAA have still made the move to the cloud. This tells us two things: the maturity of public cloud services, and the trust organizations place in them, are at an all-time high. Again, it is all about speed and agility, without compromising performance, security and reliability.

### **Is there a direct correlation between moving to the cloud and adopting SD-WAN?**

As the cloud enables businesses to move faster, an SD-WAN architecture where top-down business intent is the driver is critical to ensuring success, especially when branch offices are geographically distributed across the globe. Traditional router-centric WAN architectures were never designed to support today's cloud consumption model for applications in the most efficient way. With a conventional router-centric WAN approach, access to applications residing in the cloud means traversing unnecessary hops, resulting in wasted bandwidth, additional cost, added latency and potentially higher packet loss. In addition, under the existing, traditional WAN model, where management tends to be rigid, complex network changes can be lengthy, whether setting up new branches or troubleshooting performance issues. This leads to inefficiencies and a costly operational model. Therefore, enterprises greatly benefit from taking a business-first WAN approach toward achieving greater agility, in addition to realizing substantial CAPEX and OPEX savings.

A business-driven SD-WAN platform is purpose-built to tackle the challenges inherent in the traditional router-centric model and to more aptly support today's cloud consumption model. This means application policies are defined based on business intent, connecting users securely and directly to applications wherever they reside, without unnecessary extra hops or security compromises. For example, if an application is hosted in the cloud and is trusted, a business-driven SD-WAN can automatically connect users to it without backhauling traffic to a POP or HQ data center. In general, this traffic goes across an internet link which, on its own, may not be secure. However, the right SD-WAN platform will have a unified stateful firewall built in for local internet breakout, allowing only branch-initiated sessions to enter the branch and providing the ability to service-chain traffic to a cloud-based security service if necessary, before forwarding it to its final destination. If the application moves and becomes hosted by another provider, or perhaps back in the company's own data center, traffic must be intelligently redirected to wherever the application is being hosted. Without automation and embedded machine learning, dynamic and intelligent traffic steering is impossible.

### **A closer look at how the Silver Peak EdgeConnect™ SD-WAN edge platform addresses these challenges:**

**Automate traffic steering and connectivity to cloud providers**

An [EdgeConnect][10] virtual instance is easily spun up in any of the [leading cloud providers][11] through their respective marketplaces. For an SD-WAN to intelligently steer traffic to its destination, it requires insights into both HTTP and HTTPS traffic; it must be able to identify apps on the first packet received in order to steer traffic to the right destination in accordance with business intent. This is a critical capability, because once a TCP connection is NAT'd with a public IP address, it cannot be switched, so it can't be re-routed once the connection is established. The ability of EdgeConnect to identify, classify and automatically steer traffic to the correct destination based on the first packet, and not the second or tenth packet, will assure application SLAs, minimize the waste of expensive bandwidth and deliver the highest quality of experience.

Another critical capability is automatic performance optimization. Irrespective of which link the traffic ends up traversing, based on business intent and the unique requirements of the application, EdgeConnect automatically optimizes application performance without human intervention. It corrects for out-of-order packets using Packet Order Correction (POC), and it handles high-latency conditions related to distance or other issues using adaptive Forward Error Correction (FEC) and tunnel bonding, where a virtual tunnel is created as a single logical overlay across which traffic can be dynamically moved between the underlying paths as conditions change on each underlay WAN service. In this [lightboard video][12], Dinesh Fernando, a technical marketing engineer at Silver Peak, explains how EdgeConnect automates tunnel creation between sites and cloud providers, how it simplifies data transfers between multi-clouds, and how it improves application performance.

If your business is global and increasingly dependent on the cloud, the business-driven EdgeConnect SD-WAN edge platform enables seamless multi-cloud connectivity, turning the network into a business accelerant. EdgeConnect delivers:

1. A consistent deployment from the branch to the cloud, extending the reach of the SD-WAN into virtual private cloud environments
2. Multi-cloud flexibility, making it easier to initiate and distribute resources across multiple cloud providers
3. Investment protection by confidently migrating on-premise IT resources to any combination of the leading public cloud platforms, knowing their cloud-hosted instances will be fully supported by EdgeConnect

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html

Author: [Rami Rammaha][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/istock-899678028-100797709-large.jpg
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[3]: https://www.silver-peak.com/company/tech-partners/cloud/aws
[4]: https://www.silver-peak.com/company/tech-partners/cloud/google-cloud
[5]: https://www.silver-peak.com/company/tech-partners/cloud/microsoft-azure
[6]: https://www.silver-peak.com/company/tech-partners/cloud/oracle-cloud
[7]: https://www.logicmonitor.com/resource/the-future-of-the-cloud-a-cloud-influencers-survey/?utm_medium=pr&utm_source=businesswire&utm_campaign=cloudsurvey
[8]: http://www.networkworld.com/article/3152024/lan-wan/in-the-age-of-digital-transformation-why-sd-wan-plays-a-key-role-in-the-transition.html
[9]: http://www.cnbc.com/2016/11/30/how-amazon-web-services-is-luring-banks-to-the-cloud.html?__source=yahoo%257cfinance%257cheadline%257cheadline%257cstory&par=yahoo&doc=104135637
[10]: https://www.silver-peak.com/products/unity-edge-connect
[11]: https://www.silver-peak.com/company/tech-partners?strategic_partner_type=69
[12]: https://www.silver-peak.com/resource-center/automate-connectivity-to-cloud-networking-with-sd-wan
@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Satellite-based internet possible by year-end, says SpaceX)
[#]: via: (https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Satellite-based internet possible by year-end, says SpaceX
======
Amazon, Tesla-associated SpaceX and OneWeb are emerging as just some of the potential suppliers of a new kind of data-friendly satellite internet service that could bring broadband IoT connectivity to most places on Earth.
![Getty Images][1]

With SpaceX's successful launch of an initial array of broadband-internet-carrying satellites last week, and Amazon's surprising posting of numerous satellite-engineering job openings on its [job board][2] this month, one might well ask whether the next-generation internet space race is finally getting going. (I first wrote about [OneWeb's satellite internet plans][3], which it was concocting with Airbus, four years ago.)

This new batch of satellite-driven internet systems, if they work and are eventually switched on, could provide broadband to most places, including previously internet-barren locations such as rural areas. That would be good for high-bandwidth, low-latency remote internet of things (IoT) applications and for the increasingly important edge-server connections in verticals like oil and gas and maritime. [Data could even end up getting stored in compliance-friendly outer space, too][4]. Leaky ground-based connections could also become a thing of the past.

Of the principal new internet suppliers, SpaceX has gotten farthest along, in part because it has a commercial impetus: it needed to create payloads for its numerous rocket projects. The Tesla-associated company (the two firms share materials science) has not only launched its first tranche of 60 satellites for its own internet constellation, called Starlink, but has also successfully launched numerous batches (making up the full constellation of 75 satellites) for Iridium's replacement, an upgraded constellation called Iridium NEXT.

[The time of 5G is almost here][5]

Potential competitor OneWeb launched its first six Airbus-built satellites in February, and [it has plans for 900 more][6]. SpaceX has been approved for 4,365 more by the FCC, and Project Kuiper, as Amazon's space internet project is known, wants to place 3,236 satellites in orbit, according to International Telecommunication Union filings [discovered by _GeekWire_][7] earlier this year. [Startup LeoSat, which I wrote about last year, aims to build an internet backbone constellation][8]. Facebook, too, is exploring [space-delivered internet][9].

### Why the move to space?

Progress in laser technology, where data is sent through open, free space rather than via restrictive land-based cables or traditional radio paths, is partly behind this space-internet rush. "Bits travel faster in free space than in glass-fiber cable," LeoSat explained last year. Improving microprocessor technology is also part of the mix.

One important difference from older-generation satellite constellations is that this new generation of internet satellites will be located in low Earth orbit (LEO). Initial Starlink satellites will be placed about 350 miles above Earth, with later launches deployed at 710 miles.

There's an advantage to that. Traditional satellites in geostationary orbit (GSO) have been deployed about 22,000 miles up. That extra distance versus LEO introduces latency and is one reason earlier generations of internet satellites have been plagued by slow round-trip times. Latency didn't matter when GSO was introduced in 1964, and commercial satellites have traditionally been pitched as one-way video links, such as those used to broadcast sporting events, not as data links.

And when will we get to experience these new ISPs? "Starlink is targeted to offer service in the Northern U.S. and Canadian latitudes after six launches," [SpaceX says on its website][10]. Each launch would deliver about 60 satellites. "SpaceX is targeting two to six launches by the end of this year."

Global penetration of the "populated world" could be achieved after 24 launches, it thinks.

Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html

Author: [Patrick Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/network_iot_world-map_us_globe_nodes_global-100777483-large.jpg
[2]: https://www.amazon.jobs/en/teams/projectkuiper
[3]: https://www.itworld.com/article/2938652/space-based-internet-starts-to-get-serious.html
[4]: https://www.networkworld.com/article/3200242/data-should-be-stored-data-in-space-firm-says.html
[5]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[6]: https://www.airbus.com/space/telecommunications-satellites/oneweb-satellites-connection-for-people-all-over-the-globe.html
[7]: https://www.geekwire.com/2019/amazon-lists-scores-jobs-bellevue-project-kuiper-broadband-satellite-operation/
[8]: https://www.networkworld.com/article/3328645/space-data-backbone-gets-us-approval.html
[9]: https://www.networkworld.com/article/3338081/light-based-computers-to-be-5000-times-faster.html
[10]: https://www.starlink.com/
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world
@ -1,69 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Survey finds SD-WANs are hot, but satisfaction with telcos is not)
[#]: via: (https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

Survey finds SD-WANs are hot, but satisfaction with telcos is not
======
A recent survey of over 400 IT executives by Cato Networks found that legacy telcos might be on the outside looking in for SD-WANs.
![istock][1]

This week SD-WAN vendor Cato Networks announced the results of its [Telcos and the Future of the WAN in 2019 survey][2]. The study covered companies of all sizes, with 42% being enterprise-class (over 2,500 employees). More than 70% had a network with more than 10 locations, and almost a quarter (24%) had over 100 sites. All of the respondents have a cloud presence, and almost 80% have at least two data centers. The survey had good geographic diversity, with 57% of respondents coming from the U.S. and 24% from Europe.

Highlights of the survey include the following key findings:

## **SD-WANs are hot but not a panacea for all networking challenges**

The survey found that 44% of respondents have already deployed or will deploy an SD-WAN within the next 12 months. This number is up sharply from 25% when Cato ran the survey a year ago. Another 33% are considering SD-WAN but have no immediate plans to deploy. The primary drivers for the evolution of the WAN are improved internet access (46%), increased bandwidth (39%), improved last-mile availability (38%) and reduced WAN costs (37%). It's good to see cost savings drop to fourth place among motivations, since there is so much more to SD-WAN.

[The time of 5G is almost here][3]

It's interesting that the majority of respondents believe SD-WAN alone can't address all the challenges facing the WAN. A whopping 85% stated they would be confronting issues not addressed by SD-WAN alone, including secure local internet breakout, improved visibility, and control over mobile access to cloud apps. This indicates that customers are looking to SD-WAN as the foundation of the WAN but understand that other technologies need to be deployed as well.

## **Telco dissatisfaction is high**

The traditional telco has been a point of frustration for network professionals for years, and the survey spelled that out loud and clear. Prior to becoming an analyst, I held a number of corporate IT positions and found telcos to be the single most frustrating group of companies to deal with. The problem was, there was no choice: if you needed MPLS services, you needed a telco. The same can't be said for SD-WANs, though; businesses have more choices.

Respondents to the survey ranked telco service as "average." It's been well documented that we are now in the customer-experience era, and "good enough" service is no longer good enough. Regarding pricing, 54% gave telcos a failing grade. Although price isn't everything, this will certainly open the door to competitive SD-WAN vendors. Respondents gave the highest marks for overall experience to SaaS providers, followed by cloud computing suppliers. Global telcos scored the lowest of all vendor types.

A deeper look explains the frustration. The network is now mission-critical for companies, but 48% stated they are able to reach support personnel with the right expertise to solve a problem only on a second attempt. No retailer, airline, hotel or other type of company could survive this, but telco customers had no other options for years.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]**

Another interesting set of data points is the speed at which telcos address customer needs. Digital businesses compete on speed, but telco processes are the antithesis of fast. Moves, adds and changes take at least one business day for half of the respondents. Also, 70% indicated that opening a new location takes 15 days, and 38% stated it requires 45 days or more.

## **Security is now part of SD-WAN**

The use of broadband, cloud access and other trends raises the bar on security for SD-WAN, and the survey confirmed that respondents are skeptical that SD-WANs can address these issues. Seventy percent believe SD-WANs can't address malware/ransomware, and 49% don't think SD-WAN helps with enforcing company policies on mobile users. Because of this, network professionals are forced to buy additional security tools from other vendors, which can drive up complexity. SD-WAN vendors that have intrinsic security capabilities can use that as a point of differentiation.

## **Managed services are critical to the growth of SD-WANs**

The survey found that 75% of respondents are using some kind of managed service provider, versus only 25% using an appliance vendor. The latter number was 32% last year. I'm not surprised by this shift and expect it to continue. Legacy WANs were inefficient but straightforward to deploy. SD-WANs are highly agile and more cost-effective, but complexity has gone through the roof. Network engineers need to factor in cloud connectivity, distributed security, application performance, broadband connectivity and other issues. Managed services can help businesses enjoy the benefits of SD-WAN while masking the complexity.

Despite the desire to use an MSP, respondents don't want to give up total control. Eighty percent stated they preferred self-service or co-managed models. This further explains the shift away from telcos, since they typically work with fully managed models.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html

Author: [Zeus Kerravala][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/istock-465661573-100750447-large.jpg
[2]: https://www.catonetworks.com/news/digital-transformation-survey/
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,28 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric)
[#]: via: (https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html)
[#]: author: (HPE https://www.networkworld.com/author/Michael-Cooney/)

True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric
======

Many hyperconverged solutions focus only on software-defined storage, yet plenty of networking functions and technologies can also be consolidated for simplicity and scale in the data center. This video describes how HPE SimpliVity with Composable Fabric gives organizations the power to run any virtual machine anywhere, anytime. Read more about HPE SimpliVity [here][1].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html

作者:[HPE][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://hpe.com/info/simplivity
@ -1,71 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT Roundup: New research on IoT security, Microsoft leans into IoT)
[#]: via: (https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

IoT Roundup: New research on IoT security, Microsoft leans into IoT
======

Verizon sets up a widely available narrowband IoT service, while most Americans think IoT manufacturers should ensure their products protect personal information.

As with any technology whose use is expanding at such speed, it can be tough to track exactly what’s going on in the [IoT][1] world – everything from basic usage numbers to customer attitudes to more in-depth slices of the market is constantly changing. Fortunately, the month of May brought several new pieces of research to light, which should help provide at least a partial outline of what’s really happening in IoT.

### Internet of things polls

Not all of the news is good. An Ipsos MORI poll performed on behalf of the Internet Society and Consumers International (respectively, an umbrella organization for open development and internet use, and a broad-based consumer advocacy group) found that, despite the skyrocketing number of smart devices in circulation around the world, more than half of users in large parts of the western world don’t trust those devices to safeguard their privacy.

**More on IoT:**

* [What is the IoT? How the internet of things works][2]
* [What is edge computing and how it’s changing the network][3]
* [Most powerful Internet of Things companies][4]
* [10 Hot IoT startups to watch][5]
* [The 6 ways to make money in IoT][6]
* [What is digital twin technology? [and why it matters]][7]
* [Blockchain, service-centric networking key to IoT success][8]
* [Getting grounded in IoT networking and security][9]
* [Building IoT-ready networks must become a priority][10]
* [What is the Industrial IoT? [And why the stakes are so high]][11]

While almost 70 percent of respondents owned connected devices, 55 percent said they didn’t feel their personal information was adequately protected by manufacturers. A further 28 percent said they had avoided using connected devices – smart home, fitness tracking and similar consumer gadgetry – primarily because of privacy concerns, and a whopping 85 percent of Americans agreed that manufacturers have a responsibility to produce devices that protect personal information.

Those concerns are understandable, according to data from the Ponemon Institute, a tech-research organization. Its survey of corporate risk and security personnel, released in early May, found that there have been few concerted efforts to limit exposure to IoT-based security threats, and that those threats are sharply on the rise compared to past years: the percentage of organizations that had experienced a data breach related to unsecured IoT devices rose from 15 percent in fiscal 2017 to 26 percent in fiscal 2019.

Beyond a lack of organizational wherewithal to address those threats, part of the problem in some verticals is technical. Security vendor Forescout said earlier this month that its research showed 40 percent of all healthcare IT environments had more than 20 different operating systems, and more than 30 percent had more than 100 – hardly an ideal situation for smooth patching and updating.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[2]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[5]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[6]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[7]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[8]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[9]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[10]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[11]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
@ -1,102 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (It’s time for the IoT to 'optimize for trust')
[#]: via: (https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

It’s time for the IoT to 'optimize for trust'
======

If we can't trust the internet of things (IoT) to gather accurate data and use it appropriately, IoT adoption and innovation are likely to suffer.

![Bose][1]

One of the strengths of internet of things (IoT) technology is that it can do so many things well. From smart toothbrushes to predictive maintenance on jetliners, the IoT has more use cases than you can count. The result is that various IoT use cases require optimization for particular characteristics, from cost to speed to long life, as well as myriad others.

But in a recent post, "[How the internet of things will change advertising][2]" (which you should definitely read), the always-insightful Stacey Higginbotham tossed in a line that I can’t stop thinking about: “It's crucial that the IoT optimizes for trust."

**[ Read also: Network World's [corporate guide to addressing IoT security][3] ]**

### Trust is the IoT's most important attribute

Higginbotham was talking about optimizing for trust as opposed to clicks, but really, trust is more important than just about any other value in the IoT. It’s more important than bandwidth usage, more important than power usage, more important than cost, more important than reliability, and even more important than security and privacy (though they are obviously related). In fact, trust is the critical factor in almost every aspect of the IoT.

Don’t believe me? Let’s take a quick look at some recent developments in the field.

For one thing, IoT devices often don’t take good care of the data they collect from you. Over 90% of data transactions on IoT devices are not fully encrypted, according to a new [study from security company Zscaler][4]. The [problem][5], apparently, is that many companies have large numbers of consumer-grade IoT devices on their networks. In addition, many IoT devices are attached to the companies’ general networks, and if that network is breached, the IoT devices and data may also be compromised.

In some cases, ownership of IoT data can raise surprisingly serious trust concerns. According to [Kaiser Health News][6], smartphone sleep apps, as well as smart beds and smart mattress pads, gather amazingly personal information: “It knows when you go to sleep. It knows when you toss and turn. It may even be able to tell when you’re having sex.” And while companies such as Sleep Number say they don’t share the data they gather, their written privacy policies clearly state that they _can_.

### **Lack of trust may lead to new laws**

In California, meanwhile, "lawmakers are pushing for new privacy rules affecting smart speakers” such as the Amazon Echo. According to the _[LA Times][7]_, the idea is “to ensure that the devices don’t record private conversations without permission,” requiring a specific opt-in process. Why is this an issue? Because consumers—and their elected representatives—don’t trust that Amazon, or any IoT vendor, will do the right thing with the data it collects from the IoT devices it sells—perhaps because it turns out that thousands of [Amazon employees have been listening in on what Alexa users are][8] saying to their Echo devices.

The trust issues get even trickier when you consider that Amazon reportedly considered letting Alexa listen to users even without a wake word like “Alexa” or “computer,” and is reportedly working on [wearable devices designed to read human emotions][9] from listening to your voice.

“The trust has been breached,” said California Assemblyman Jordan Cunningham (R-Templeton) to the _LA Times_.

As critics of the bill ([AB 1395][10]) point out, the restrictions matter because voice assistants require this data to improve their ability to correctly understand and respond to requests.

### **Some first steps toward increasing trust**

Perhaps recognizing that the IoT needs to be optimized for trust so that we are comfortable letting it do its job, Amazon recently introduced a new Alexa voice command: “[Delete what I said today][11].”

Moves like that, while welcome, will likely not be enough.

For example, a [new United Nations report][12] suggests that “voice assistants reinforce harmful gender stereotypes” when using female-sounding voices and names like Alexa and Siri. Put simply, “Siri’s ‘female’ obsequiousness—and the servility expressed by so many other digital assistants projected as young women—provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education.” I'm not sure IoT vendors are eager—or equipped—to tackle issues like that.

**More on IoT:**

* [What is the IoT? How the internet of things works][13]
* [What is edge computing and how it’s changing the network][14]
* [Most powerful Internet of Things companies][15]
* [10 Hot IoT startups to watch][16]
* [The 6 ways to make money in IoT][17]
* [What is digital twin technology? [and why it matters]][18]
* [Blockchain, service-centric networking key to IoT success][19]
* [Getting grounded in IoT networking and security][20]
* [Building IoT-ready networks must become a priority][21]
* [What is the Industrial IoT? [And why the stakes are so high]][22]

Join the Network World communities on [Facebook][23] and [LinkedIn][24] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/09/bose-sleepbuds-2-100771579-large.jpg
[2]: https://mailchi.mp/iotpodcast/stacey-on-iot-how-iot-changes-advertising?e=6bf9beb394
[3]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
[4]: https://www.zscaler.com/blogs/research/iot-traffic-enterprise-rising-so-are-threats
[5]: https://www.csoonline.com/article/3397044/over-90-of-data-transactions-on-iot-devices-are-unencrypted.html
[6]: https://khn.org/news/a-wake-up-call-on-data-collecting-smart-beds-and-sleep-apps/
[7]: https://www.latimes.com/politics/la-pol-ca-alexa-google-home-privacy-rules-california-20190528-story.html
[8]: https://www.usatoday.com/story/tech/2019/04/11/amazon-employees-listening-alexa-customers/3434732002/
[9]: https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions
[10]: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1395
[11]: https://venturebeat.com/2019/05/29/amazon-launches-alexa-delete-what-i-said-today-voice-command/
[12]: https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1
[13]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[14]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[15]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[16]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[17]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[18]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[19]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[20]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[21]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[22]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[23]: https://www.facebook.com/NetworkWorld/
[24]: https://www.linkedin.com/company/network-world
@ -1,64 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data center workloads become more complex despite promises to the contrary)
[#]: via: (https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Data center workloads become more complex despite promises to the contrary
======

The data center is shouldering a greater burden than ever, despite promises of ease and the cloud.

![gorodenkoff / Getty Images][1]

Data centers are becoming more complex and still run the majority of workloads, despite the promised simplicity of deployment through automation and hyperconverged infrastructure (HCI) – not to mention the way the cloud was supposed to take over workloads.

That’s the finding of the Uptime Institute's latest [annual global data center survey][2] (registration required). The majority of IT loads still run in enterprise data centers even in the face of cloud adoption, putting pressure on administrators to manage workloads across a hybrid infrastructure.

**[ Learn [how server disaggregation can boost data center efficiency][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

With workloads like artificial intelligence (AI) and machine learning coming to the forefront, facilities face greater power and cooling challenges, since AI is extremely processor-intensive. That puts strain on data center administrators and power and cooling vendors alike to keep up with the growth in demand.

On top of it all, everyone is struggling to get enough staff with the right skills.

### Outages, staffing problems, lack of public cloud visibility among top concerns

Among the key findings of Uptime's report:

* The large, privately owned enterprise data center facility still forms the bedrock of corporate IT and is expected to be running half of all workloads in 2021.
* The staffing problem affecting most of the data center sector has only worsened. Sixty-one percent of respondents said they had difficulty retaining or recruiting staff, up from 55% a year earlier.
* Outages continue to cause significant problems for operators. Just over a third (34%) of all respondents had an outage or severe IT service degradation in the past year, while half (50%) had an outage or severe IT service degradation in the past three years.
* Ten percent of all respondents said their most recent significant outage cost more than $1 million.
* A lack of visibility, transparency, and accountability of public cloud services is a major concern for enterprises that have mission-critical applications. A fifth of operators surveyed said they would be more likely to put workloads in a public cloud if there were more visibility. Half of those using public cloud for mission-critical applications also said they do not have adequate visibility.
* Improvements in data center facility energy efficiency have flattened out and even deteriorated slightly in the past two years. The average PUE for 2019 is 1.67 (see the quick refresher on PUE after this list).
* Rack power density is rising after a long period of flat or minor increases, causing many to rethink cooling strategies.
* Power loss was the single biggest cause of outages, accounting for one-third of outages. Sixty percent of respondents said their data center’s outage could have been prevented with better management/processes or configuration.

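For readers who don’t track the metric, PUE (power usage effectiveness) is simply the ratio of total facility energy to the energy consumed by the IT equipment itself, so lower is better and 1.0 is the theoretical ideal. Here is the arithmetic behind a 1.67 figure; the facility numbers are illustrative assumptions, not data from the Uptime report:

```latex
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}},
\qquad \text{e.g.}\quad \frac{10\ \mathrm{MW}}{6\ \mathrm{MW}} \approx 1.67
```

In other words, at a PUE of 1.67, roughly two watts of overhead (cooling, power distribution, lighting) accompany every three watts of useful IT load.
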
Traditionally, data centers have improved their reliability through "rigorous attention to power, infrastructure, connectivity and on-site IT replication," the Uptime report says. The solution, though, is pricey. Data center operators are achieving distributed resiliency through active-active data centers, where at least two active data centers replicate data to each other. Uptime found up to 40% of those surveyed were using this method.

The Uptime survey was conducted in March and April of this year, surveying 1,100 end users in more than 50 countries and dividing them into two groups: the IT managers, owners, and operators of data centers, and the suppliers, designers, and consultants that service the industry.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/cso_cloud_computing_backups_it_engineer_data_center_server_racks_connections_by_gorodenkoff_gettyimages-943065400_3x2_2400x1600-100796535-large.jpg
[2]: https://uptimeinstitute.com/2019-data-center-industry-survey-results
[3]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Moving to the Cloud? SD-WAN Matters! Part 2)
[#]: via: (https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)

Moving to the Cloud? SD-WAN Matters! Part 2
======

![istock][1]

This is the second installment of the blog series exploring how enterprises can realize the full transformation promise of the cloud by shifting to a business-first networking model powered by a business-driven [SD-WAN][2]. The first installment explored automating secure IPsec connectivity and intelligently steering traffic to cloud providers. We also framed the direct correlation between moving to the cloud and adopting an SD-WAN. In this blog, we will expand upon several additional challenges that can be addressed with a business-driven SD-WAN when embracing the cloud:

### Simplifying and automating security zone-based segmentation

Securing cloud-first branches requires a robust, multi-level approach that addresses the following considerations:

* Restricting outside traffic coming into the branch to sessions exclusively initiated by internal users with a built-in stateful firewall, avoiding appliance sprawl and lowering operational costs; this is referred to as the app whitelist model (a minimal sketch of the idea follows this list)
* Encrypting communications between endpoints within the SD-WAN fabric and between branch locations and public cloud instances
* Service chaining traffic to a cloud-hosted security service like [Zscaler][3] for Layer 7 inspection and analytics of internet-bound traffic
* Segmenting traffic spanning the branch, WAN and data center/cloud
* Centralizing policy orchestration and automation of zone-based firewall, VLAN and WAN overlays

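To make the whitelist model concrete, here is a minimal, vendor-neutral sketch in Python of the session-tracking logic a stateful branch firewall applies: traffic initiated from inside the branch creates state, and inbound traffic is admitted only when it matches that state. The class and field names are illustrative assumptions, not any product’s API.

```python
# Toy sketch of the "app whitelist" model: a stateful firewall admits
# inbound traffic only for sessions that internal users initiated.
# Names and structure are illustrative, not a vendor implementation.

class StatefulWhitelist:
    def __init__(self):
        # Sessions opened from inside the branch: (inside_ip, outside_ip)
        self.sessions = set()

    def handle_outbound(self, inside_ip, outside_ip):
        self.sessions.add((inside_ip, outside_ip))  # remember the session
        return True  # internally initiated traffic is always allowed

    def handle_inbound(self, outside_ip, inside_ip):
        # Admit only replies to sessions an internal user opened.
        return (inside_ip, outside_ip) in self.sessions


fw = StatefulWhitelist()
fw.handle_outbound("10.1.0.5", "52.10.20.30")             # user opens a session
assert fw.handle_inbound("52.10.20.30", "10.1.0.5")       # reply is admitted
assert not fw.handle_inbound("198.51.100.9", "10.1.0.5")  # unsolicited: dropped
```

A real implementation would also track ports, protocols and session expiry, but the asymmetry is the heart of the model: outbound traffic creates state, and inbound traffic must match it.
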
A traditional device-centric WAN approach to security segmentation requires the time-consuming manual configuration of routers and/or firewalls on a device-by-device and site-by-site basis. This is not only complex and cumbersome, but it simply can’t scale to hundreds or thousands of sites. Anusha Vaidyanathan, director of product management at Silver Peak, explains how to automate end-to-end zone-based segmentation, emphasizing the advantages of a business-driven approach, in this [lightboard video][4].

### Delivering the highest quality of experience to IT teams

The goal for enterprise IT is enabling business agility and increasing operational efficiency. The traditional router-centric WAN approach doesn’t provide the best quality of experience for IT, as management and ongoing network operations are manual, time-consuming, device-centric, cumbersome, error-prone and inefficient.

A business-driven SD-WAN such as the Silver Peak [Unity EdgeConnect™][5] unified SD-WAN edge platform centralizes the orchestration of business-driven policies. EdgeConnect automation, machine learning and open APIs integrate easily with third-party management and real-time visibility tools to deliver the highest quality of experience for IT, enabling them to reclaim nights and weekends. Manav Mishra, vice president of product management at Silver Peak, explains the latest Silver Peak innovations in this [lightboard video][6].

As enterprises become increasingly dependent on the cloud and embrace a multi-cloud strategy, they must address a number of new challenges:

* Embracing the cloud and the internet securely through a centralized approach
* Extending the on-premises data center to a public cloud and migrating workloads between private and public clouds, taking application portability into account
* Delivering consistently high application performance and availability, whether applications reside in the data center, in private or public clouds, or are consumed as SaaS services
* Resolving complex issues that span the data center, the cloud and multiple WAN transport services proactively and quickly, by harnessing the power of advanced visibility and analytics tools

The business-driven EdgeConnect SD-WAN edge platform enables enterprise IT organizations to embrace the public cloud easily and consistently. Unified security and performance capabilities, combined with automation, deliver the highest quality of experience for both users and IT while lowering overall WAN expenditures.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html

作者:[Rami Rammaha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/istock-909772962-100797711-large.jpg
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[3]: https://www.silver-peak.com/company/tech-partners/zscaler
[4]: https://www.silver-peak.com/resource-center/how-to-create-sd-wan-security-zones-in-edgeconnect
[5]: https://www.silver-peak.com/products/unity-edge-connect
[6]: https://www.silver-peak.com/resource-center/how-to-optimize-quality-of-experience-for-it-using-sd-wan
@ -1,94 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why Emacs)
[#]: via: (https://saurabhkukade.github.io/Why-Emacs/)
[#]: author: (Saurabh Kukade http://saurabhkukade.github.io/)

Why Emacs
======

![Image of Emacs][1]

> “Emacs outshines all other editing software in approximately the same way that the noonday sun does the stars. It is not just bigger and brighter; it simply makes everything else vanish.”

> -Neal Stephenson, “In the Beginning was the Command Line”

### Introduction

This is my first blog post about Emacs. I want to walk through the customization of Emacs step by step, for beginners. If you’re new to Emacs, you are in the right place; if you’re already familiar with Emacs, that is even better. I assure you that we will get to know many new things here.

Before getting into how to customize Emacs and what its exciting features are, I want to write about “why Emacs.”

### Why Emacs?

This was the first question that crossed my mind when a wise man asked me to try Emacs instead of Vim. Well, I am not writing this article to stage a battle between the two editors; that is another story for another day. But why Emacs? Here are some things that show how powerful and highly customizable Emacs is.

### 41 Years!

Emacs was initially released in 1976, which means it has been standing, and adapting to change, for 41 years.

Forty-one years is a huge lifespan for a piece of software, and it makes Emacs one of the finest software engineering products around.

### Lisp (Emacs Lisp)

If you are a Lisp programmer (a lisper), I don’t need to explain this to you. For those who don’t know Lisp or its dialects, such as Scheme and Clojure: Lisp (and all of its dialects) is a powerful programming language that stands apart from other languages because of its unique property of homoiconicity.

Emacs is implemented in C and Emacs Lisp (a dialect of the Lisp programming language), and that is what makes Emacs what it is, because:

* The simple syntax of Lisp, together with the powerful editing features made possible by that simple syntax, adds up to a more convenient programming system than is practical with other languages. Lisp and extensible editors are made for each other.

* The simplicity of Lisp syntax makes intelligent editing operations easier to implement, while the complexity of other languages discourages their users from implementing similar operations for them.

### Highly Customizable

For any programmer, tools provide the power and convenience for reading, writing and managing code.

Hence, a tool that is programmatically customizable is even more powerful.

Emacs has this property and is, in fact, one of the tools best known for its flexibility and easy customization. It provides basic commands and key bindings for editing text, and these commands and key bindings are editable and extensible.

Beyond the basic configuration, Emacs is not biased toward any specific language for customization. You can customize Emacs for any programming language, or easily extend an existing customization.

Emacs provides a consistent environment for multiple programming languages, email, an organizer (via org-mode), a shell/interpreter, note taking, and document writing.

To customize Emacs, you don’t need to learn Emacs Lisp from scratch; you can use the packages that are already available, and that’s it. Installing and managing packages is easy, since Emacs has a built-in package manager.

Customization is also very portable: just place the file or directory containing your personal customization file(s) in the right place, and your setup follows you to a new machine.

### Huge Platform Support

Emacs supports Lisp, Ruby, Python, PHP, Java, Erlang, JavaScript, C, C++, Prolog, Tcl, AWK, PostScript, Clojure, Scala, Perl, Haskell, Elixir and more, including tooling for databases such as MySQL and PostgreSQL. Because of the powerful Lisp core, Emacs is easy to extend with support for new languages if needed.

You can also use the built-in IRC client ERC along with BitlBee to connect to your favorite chat services, or use the Jabber package to hop on any XMPP service.

### Org-mode

No matter whether you are a programmer or not, Org mode is for everyone. It lets you plan projects and organize your schedule, and it can also publish notes and documents to different formats, such as LaTeX-to-PDF, HTML and Markdown.

In fact, Org mode is awesome enough that many non-Emacs users have started learning Emacs because of it.

### Final note

There are a number of reasons to argue that Emacs is cool and awesome to use, but I just wanted to give you a glimpse of why you should try it. In the upcoming posts I will be writing step-by-step instructions for customizing Emacs from scratch into an awesome IDE.

Thank you!

Please don’t forget to leave your thoughts and suggestions in the comments below.

--------------------------------------------------------------------------------

via: https://saurabhkukade.github.io/Why-Emacs/

作者:[Saurabh Kukade][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://saurabhkukade.github.io/
[b]: https://github.com/lujun9972
[1]: https://saurabhkukade.github.io/img/emacs.jpeg
@ -1,67 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloud adoption drives the evolution of application delivery controllers)
[#]: via: (https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

Cloud adoption drives the evolution of application delivery controllers
======

Application delivery controllers (ADCs) are on the precipice of shifting from traditional hardware appliances to software form factors.

![Aramyan / Getty Images / Microsoft][1]

Migrating to a cloud computing model will obviously have an impact on the infrastructure that’s deployed. This shift has already been seen in the areas of servers, storage and networking, as those technologies have evolved to a “software-defined” model. And it appears that application delivery controllers (ADCs) are on the precipice of a similar shift.

In fact, a new ZK Research [study about cloud computing adoption and the impact on ADCs][2] found that, when looking at the deployment model, hardware appliances are the most widely deployed — with 55% having fully deployed them or currently testing them, and only 15% currently researching hardware. (Note: I am an employee of ZK Research.)

Juxtapose this with containerized ADCs, where only 34% have deployed or are testing them but 24% are currently researching them, and it shows that software in containers will outpace hardware for growth. Not surprisingly, software on bare metal and in virtual machines showed similar, although lower, “researching” numbers that support the thesis that the market is undergoing a shift from hardware to software.

**[ Read also: [How to make hybrid cloud work][3] ]**

The study, conducted in collaboration with Kemp Technologies, surveyed 203 respondents from the U.K. and U.S. The demographic split was done to understand regional differences. An equal number of midsize and large enterprises were surveyed, with 44% of respondents coming from companies with more than 5,000 employees and the other 56% from companies with 300 to 5,000 people.

### Incumbency helps but isn’t a fait accompli for future ADC purchases

The primary tenet of my research has always been that incumbents are threatened when markets transition, and this is something I wanted to investigate in the study. The survey asked whether buyers would consider an alternative as they evolve their applications from legacy (mode 1) to cloud-native (mode 2). The results offer a bit of good news and bad news for the incumbent providers. Only 8% said they would definitely select a new vendor, but 35% said they would not change. That means the other 57% will look at alternatives. This is sensible, as the requirements for cloud ADCs differ from those that support traditional applications.

### IT pros want better automation capabilities

This begs the question as to which features ADC buyers want for a cloud environment versus traditional ones. The survey asked specifically which features would be most appealing in future purchases, and the top response was automation, followed by central management, application analytics, on-demand scaling (which is a form of automation) and visibility.

The desire to automate was a positive sign for the evolution of the buyer mindset. Just a few years ago, the mere mention of automation would have sent IT pros into a panic. The reality is that IT can’t operate effectively without automation, and technology professionals are starting to understand that.

The reason automation is needed is that manual changes are holding businesses back. The survey asked how the speed of ADC changes impacts the speed at which applications are rolled out, and a whopping 60% said it creates significant or minor delays. In an era of DevOps and continuous innovation, multiple minor delays create a drag on the business and can cause it to fall behind its more agile competitors.

![][4]

### ADC upgrades and service provisioning benefit most from automation

The survey also drilled down on specific ADC tasks to see where automation would have the most impact. Respondents were asked how long certain tasks took, answering in minutes, days, weeks or months. Shockingly, there wasn’t a single task where the majority said it could be done in minutes. The closest was adding DNS entries for new virtual IP addresses (VIPs), where 46% said they could do that in minutes.

Upgrading, provisioning new load balancers, and provisioning new VIPs took the longest. Looking ahead, this foreshadows big problems. As the data center gets more disaggregated and distributed, IT will deploy more software-based ADCs in more places. Taking days, weeks or months to perform these functions will cause the organization to fall behind.

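To give a sense of what “minutes, not days” looks like in practice, here is a minimal Python sketch of VIP provisioning driven through a software ADC’s management API. The endpoint URL, payload fields and credential are hypothetical placeholders rather than any specific vendor’s schema; the point is that once an ADC exposes an API, a provisioning task collapses into a short, repeatable script.

```python
# Hypothetical sketch: provisioning a new virtual IP (VIP) on a software
# ADC through its REST management API. The URL, fields and token below
# are illustrative placeholders, not a real vendor's schema.
import json
import urllib.request

ADC_API = "https://adc.example.com/api/v1/vips"  # placeholder endpoint
API_TOKEN = "example-token"                      # placeholder credential

def provision_vip(address, port, pool_members):
    payload = json.dumps({
        "address": address,             # the new VIP
        "port": port,                   # listening port
        "pool_members": pool_members,   # back-end servers to balance
    }).encode("utf-8")
    request = urllib.request.Request(
        ADC_API,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # e.g., 201 Created

if __name__ == "__main__":
    print(provision_vip("203.0.113.10", 443, ["10.0.0.11", "10.0.0.12"]))
```
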
The study clearly shows changes are in the air for the ADC market. For IT pros, I strongly recommend that, as the environment shifts to the cloud, it’s prudent to evaluate new vendors. By all means, see what your incumbent vendor has, but look at at least two others that offer software-based solutions. Also, there should be a focus on automating as much as possible, so the primary evaluation criterion for ADCs should be how easy it is to implement automation.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html

作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/cw_microsoft_sharepoint_vs_onedrive_clouds_and_hands_by_aramyan_gettyimages-909772962_2400x1600-100796932-large.jpg
[2]: https://kemptechnologies.com/research-papers/adc-market-research-study-zeus-kerravala/?utm_source=zkresearch&utm_medium=referral&utm_campaign=zkresearch&utm_term=zkresearch&utm_content=zkresearch
[3]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
[4]: https://images.idgesg.net/images/article/2019/06/adc-survey-zk-research-100798593-large.jpg
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,118 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (For enterprise storage, persistent memory is here to stay)
[#]: via: (https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html)
[#]: author: (John Edwards )

For enterprise storage, persistent memory is here to stay
======

Persistent memory – also known as storage class memory – has tantalized data center operators for many years. Newly arriving technology may finally hold the key to its success.

![Thinkstock][1]

It's hard to remember a time when semiconductor vendors haven't promised a fast, cost-effective and reliable persistent memory technology to anxious [data center][2] operators. Now, after many years of waiting and disappointment, technology may have finally caught up with the hype to make persistent memory a practical proposition.

High-capacity persistent memory, also known as storage class memory ([SCM][3]), is fast and directly addressable like dynamic random-access memory (DRAM), yet is able to retain stored data even after its power has been switched off—intentionally or unintentionally. The technology can be used in data centers to replace cheaper, yet far slower, traditional persistent storage components, such as [hard disk drives][4] (HDD) and [solid-state drives][5] (SSD).

**Learn more about enterprise storage**

* [Why NVMe over Fabric matters][6]
* [What is hyperconvergence?][7]
* [How NVMe is changing enterprise storage][8]
* [Making the right hyperconvergence choice: HCI hardware or software?][9]

Persistent memory can also be used to replace DRAM itself in some situations without imposing a significant speed penalty. In this role, persistent memory can deliver crucial operational benefits, such as lightning-fast database-server restarts during maintenance, power emergencies and other expected and unanticipated reboot situations.

Many different types of strategic operational applications and databases, particularly those that require low latency, high durability and strong data consistency, can benefit from persistent memory. The technology also has the potential to accelerate virtual machine (VM) storage and deliver higher performance to multi-node, distributed-cloud applications.

In a sense, persistent memory marks a rebirth of core memory. "Computers in the ’50s to ’70s used magnetic core memory, which was direct access, non-volatile memory," says Doug Wong, a senior member of [Toshiba Memory America's][10] technical staff. "Magnetic core memory was displaced by SRAM and DRAM, which are both volatile semiconductor memories."

One of the first persistent memory devices to come to market is [Intel’s Optane DC][11]. Other vendors that have released persistent memory products or are planning to do so include [Samsung][12], Toshiba Memory America and [SK Hynix][13].

### Persistent memory: performance + reliability

With persistent memory, data centers have a unique opportunity to gain faster performance and lower latency without enduring massive technology disruption. "It's faster than regular solid-state NAND flash-type storage, but you're also getting the benefit that it’s persistent," says Greg Schulz, a senior advisory analyst at vendor-independent storage advisory firm [StorageIO][14]. "It's the best of both worlds."

Yet persistent memory offers adopters much more than speedy, reliable storage. In an ideal IT world, all of the data associated with an application would reside within DRAM to achieve maximum performance. "This is currently not practical due to limited DRAM and the fact that DRAM is volatile—data is lost when power fails," observes Scott Nelson, senior vice president and general manager of Toshiba Memory America's memory business unit.

Persistent memory transports compatible applications to an "always on" status, providing continuous access to large datasets through increased system memory capacity, says Kristie Mann, [Intel's][15] director of marketing for data center memory and storage. She notes that Optane DC can supply data centers with up to three times more system memory capacity (as much as 36TB), system restarts in seconds versus minutes, 36% more virtual machines per node, and up to eight times better performance on [Apache Spark][16], a widely used open-source distributed general-purpose cluster-computing framework.

System memory currently represents 60% of total platform costs, Mann says. She observes that Optane DC persistent memory provides significant customer value by delivering 1.2x performance/dollar on key customer workloads. "This value will dramatically change memory/storage economics and accelerate the data-centric era," she predicts.

### Where will persistent memory infiltrate enterprise storage?

Persistent memory is likely to first enter the IT mainstream with minimal fanfare, serving as a caching layer for high-performance SSDs. "This could be adopted relatively quickly," Nelson observes. Yet this intermediary role promises to be merely a stepping-stone to increasingly crucial applications.

Over the next few years, persistent memory technology will impact data centers serving enterprises across an array of sectors. "Anywhere time is money," Schulz says. "It could be financial services, but it could also be consumer-facing or sales-facing operations."

Persistent memory supercharges anything data-related that requires extreme speed at extreme scale, observes Andrew Gooding, vice president of engineering at [Aerospike][17], which delivered the first commercially available open database optimized for use with Intel Optane DC.

Machine learning is just one of many applications that stand to benefit from persistent memory. Gooding notes that ad tech firms, which rely on machine learning to understand consumers' reactions to online advertising campaigns, should find their work made much easier and more effective by persistent memory. "They’re collecting information as users within an ad campaign browse the web," he says. "If they can read and write all that data quickly, they can then apply machine-learning algorithms and tailor specific ads for users in real time."

Meanwhile, as automakers become increasingly reliant on data insights, persistent memory promises to help them crunch numbers and refine sophisticated new technologies at breakneck speeds. "In the auto industry, manufacturers face massive data challenges in autonomous vehicles, where 20 exabytes of data needs to be processed in real time, and they're using self-training machine-learning algorithms to help with that," Gooding explains. "There are so many fields where huge amounts of data need to be processed quickly with machine-learning techniques—fraud detection, astronomy... the list goes on."

Intel, like other persistent memory vendors, expects cloud service providers to be eager adopters, targeting various types of in-memory database services. Google, for example, is applying persistent memory to big data workloads on non-relational databases from vendors such as Aerospike and [Redis Labs][18], Mann says.

High-performance computing (HPC) is yet another area where persistent memory promises to make a tremendous impact. [CERN][19], the European Organization for Nuclear Research, is using Intel's Optane DC to significantly reduce wait times for scientific computing. "The efficiency of their algorithms depends on ... persistent memory, and CERN considers it a major breakthrough that is necessary to the work they are doing," Mann observes.

### How to prepare storage infrastructure for persistent memory

Before jumping onto the persistent memory bandwagon, organizations need to carefully scrutinize their IT infrastructure to determine the precise locations of any existing data bottlenecks. This task will be primarily application-dependent, Wong notes. "If there is significant performance degradation due to delays associated with access to data stored in non-volatile storage—SSD or HDD—then an SCM tier will improve performance," he explains. Yet some applications will probably not benefit from persistent memory, such as compute-bound applications where CPU performance is the bottleneck.

Developers may need to reevaluate fundamental parts of their storage and application architectures, Gooding says. "They will need to know how to program with persistent memory," he notes. "How, for example, to make sure writes are flushed to the actual persistent memory device when necessary, as opposed to just sitting in the CPU cache."

To leverage all of persistent memory's potential benefits, significant changes may also be required in how code is designed. When moving applications from DRAM and flash to persistent memory, developers will need to consider, for instance, what happens when a program crashes and restarts. "Right now, if they write code that leaks memory, that leaked memory is recovered on restart," Gooding explains. With persistent memory, that isn't necessarily the case. "Developers need to make sure the code is designed to reconstruct a consistent state when a program restarts," he notes. "You may not realize how much your designs rely on the traditional combination of fast volatile DRAM and block storage, so it can be tricky to change your code designs for something completely new like persistent memory."

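To illustrate the flushing discipline Gooding describes, here is a minimal Python sketch that uses an ordinary memory-mapped file as a stand-in for a DAX-mapped persistent memory region. Production persistent-memory code would typically use a dedicated library (such as PMDK) and CPU cache-flush instructions; the `flush()` call below, which issues an msync, plays that role in this toy example, and the file path is an assumption.

```python
# Toy illustration of "write, then explicitly flush to durable media."
# An ordinary memory-mapped file stands in for a DAX-mapped persistent
# memory region; mmap.flush() (an msync) stands in for the cache-line
# flush instructions real persistent-memory code would use.
import mmap
import os

PATH = "/tmp/pmem-demo.bin"  # assumed path, not a real pmem device

with open(PATH, "wb") as f:
    f.truncate(4096)         # size the backing region

fd = os.open(PATH, os.O_RDWR)
region = mmap.mmap(fd, 4096)

record = b"balance=100;committed=1"
region[:len(record)] = record  # the store may still sit in volatile buffers
region.flush()                 # explicitly push the update to durable media

region.close()
os.close(fd)
```

Until the flush completes, a crash can leave the "persistent" structure in a state the restart code never anticipated, which is exactly why Gooding advises designing code to reconstruct a consistent state on restart.
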
Older versions of operating systems may also need to be updated to accommodate the new technology, although newer OSes are gradually becoming persistent memory aware, Schulz says. "In other words, if they detect that persistent memory is available, then they know how to utilize it, either as a cache or as some other memory."

Hypervisors, such as [Hyper-V][20] and [VMware][21], now know how to leverage persistent memory to support productivity, performance and rapid restarts. By utilizing persistent memory along with the latest versions of VMware, a whole system can see an uplift in speed and also maximize the number of VMs that fit on a single host, says Ian McClarty, CEO and president of data center operator [PhoenixNAP Global IT Services][22]. "This is a great use case for companies that want to own less hardware, or for service providers that want to maximize hardware-to-virtual-machine deployments."

Many key enterprise applications, particularly databases, are also becoming persistent memory aware. SQL Server and [SAP’s][23] flagship [HANA][24] database management platform have both embraced persistent memory. "The SAP HANA platform is commonly used across multiple industries to process data and transactions, and then run advanced analytics ... to deliver real-time insights," Mann observes.

In terms of timing, enterprises and IT organizations should begin persistent memory planning immediately, Schulz recommends. "You should be talking with your vendors and understanding their roadmap, their plans, for not only supporting this technology, but also in what mode: as storage, as memory."

Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html

作者:[John Edwards][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/file_folder_storage_sharing_thinkstock_477492571_3x2-100732889-large.jpg
[2]: https://www.networkworld.com/article/3353637/the-data-center-is-being-reimagined-not-disappearing.html
[3]: https://www.networkworld.com/article/3026720/the-next-generation-of-storage-disruption-storage-class-memory.html
[4]: https://www.networkworld.com/article/2159948/hard-disk-drives-vs--solid-state-drives--are-ssds-finally-worth-the-money-.html
[5]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[6]: https://www.networkworld.com/article/3273583/why-nvme-over-fabric-matters.html
[7]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence
[8]: https://www.networkworld.com/article/3280991/what-is-nvme-and-how-is-it-changing-enterprise-storage.html
[9]: https://www.networkworld.com/article/3318683/making-the-right-hyperconvergence-choice-hci-hardware-or-software
[10]: https://business.toshiba-memory.com/en-us/top.html
[11]: https://www.intel.com/content/www/us/en/architecture-and-technology/optane-dc-persistent-memory.html
[12]: https://www.samsung.com/semiconductor/
[13]: https://www.skhynix.com/eng/index.jsp
[14]: https://storageio.com/
[15]: https://www.intel.com/content/www/us/en/homepage.html
[16]: https://spark.apache.org/
[17]: https://www.aerospike.com/
[18]: https://redislabs.com/
[19]: https://home.cern/
[20]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/
[21]: https://www.vmware.com/
[22]: https://phoenixnap.com/
[23]: https://www.sap.com/index.html
[24]: https://www.sap.com/products/hana.html
[25]: https://www.facebook.com/NetworkWorld/
[26]: https://www.linkedin.com/company/network-world
@ -1,82 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Self-learning sensor chips won’t need networks)
[#]: via: (https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Self-learning sensor chips won’t need networks
======

Scientists working on new, machine-learning networks aim to embed everything needed for artificial intelligence (AI) onto a processor, eliminating the need to transfer data to the cloud or computers.

![Jiraroj Praditcharoenkul / Getty Images][1]

Tiny, intelligent microelectronics should be used to perform as much sensor processing as possible on-chip, rather than wasting resources by sending often unneeded, duplicated raw data to the cloud or computers. So say scientists behind new, machine-learning networks that aim to embed everything needed for artificial intelligence (AI) onto a processor.

“This opens the door for many new applications, starting from real-time evaluation of sensor data,” says [Fraunhofer Institute for Microelectronic Circuits and Systems][2] on its website. No delays sending unnecessary data onwards, along with speedy processing, means theoretically there is zero latency.

Plus, on-microprocessor, self-learning means the embedded, or sensor, devices can self-calibrate. They can even be “completely reconfigured to perform a totally different task afterwards,” the institute says. “An embedded system with different tasks is possible.”

**[ Also read: [What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]**

Much internet of things (IoT) data sent through networks is redundant and wastes resources: a temperature reading taken every 10 minutes, say, when the ambient temperature hasn’t changed, is one example. In fact, one only needs to know when the temperature has changed, and maybe then only when thresholds have been met.
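
A sketch of that report-on-change idea, with the threshold and sample values purely illustrative (in a real node, the `yield` would be the only network transmission):

```python
# Report-on-change filtering: transmit a reading only when it differs from
# the last reported value by more than a threshold.
THRESHOLD = 0.5  # degrees; anything smaller is treated as noise

def filter_readings(readings, threshold=THRESHOLD):
    """Yield only the readings worth transmitting."""
    last_reported = None
    for value in readings:
        if last_reported is None or abs(value - last_reported) >= threshold:
            last_reported = value
            yield value

# Ten raw samples collapse to three messages.
samples = [21.0, 21.1, 21.0, 21.2, 23.5, 23.6, 23.5, 21.0, 21.0, 21.1]
print(list(filter_readings(samples)))  # -> [21.0, 23.5, 21.0]
```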

### Neural network-on-sensor chip

The commercial German research organization says it’s developing a specific RISC-V microprocessor with a special hardware accelerator designed for a [brain-copying, artificial neural network (ANN) it has developed][5]. The architecture could ultimately be suitable for the condition-monitoring or predictive sensors of the kind we will likely see more of in the industrial internet of things (IIoT).

Key to Fraunhofer IMS’s [Artificial Intelligence for Embedded Systems (AIfES)][6] is that the self-learning takes place at chip level rather than in the cloud or on a computer, and that it is independent of “connectivity towards a cloud or a powerful and resource-hungry processing entity.” But it still offers a “full AI mechanism, like independent learning.”

It’s “decentralized AI,” says Fraunhofer IMS. “It’s not focused towards big-data processing.”

Indeed, with these kinds of systems, no connection is actually required for the raw data, just for the post-analytical results, if indeed needed. Swarming can even replace that. Swarming lets sensors talk to one another, sharing relevant information without even getting a host network involved.

“It is possible to build a network from small and adaptive systems that share tasks among themselves,” Fraunhofer IMS says.

Other benefits in decentralized neural networks include that they can be more secure than the cloud. Because all processing takes place on the microprocessor, “no sensitive data needs to be transferred,” Fraunhofer IMS explains.

### Other edge computing research

The Fraunhofer researchers aren’t the only academics who believe entire networks become redundant with neuristor, brain-like AI chips. Binghamton University and Georgia Tech are working together on similar edge-oriented tech.

“The idea is we want to have these chips that can do all the functioning in the chip, rather than messages back and forth with some sort of large server,” Binghamton said on its website when [I wrote about the university's work last year][7].

One advantage of forgoing a major communications link: not only do you not have to worry about internet resilience, but you also save the energy that would otherwise be spent creating and maintaining the link. Energy efficiency is an ambition in the sensor world — replacing batteries is time consuming, expensive, and sometimes, in the case of remote locations, extremely difficult.

Memory or storage for swaths of raw data awaiting transfer to be processed at a data center, or similar, doesn’t have to be provided either — it’s been processed at the source, so it can be discarded.

**More about edge networking:**

  * [How edge networking and IoT will reshape data centers][4]
  * [Edge computing best practices][8]
  * [How edge computing can help secure the IoT][9]

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_by_jiraroj_praditcharoenkul_gettyimages-902668940_2400x1600-100788458-large.jpg
[2]: https://www.ims.fraunhofer.de/en.html
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/News/AIfES-Artificial_Intelligence_for_Embedded_Systems.html
[6]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/technologies/Artificial-Intelligence-for-Embedded-Systems-AIfES.html
[7]: https://www.networkworld.com/article/3326557/edge-chips-could-render-some-networks-useless.html
[8]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[9]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

@ -1,53 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What to do when yesterday’s technology won’t meet today’s support needs)
[#]: via: (https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html)
[#]: author: (Anand Rajaram )

What to do when yesterday’s technology won’t meet today’s support needs
======

![iStock][1]

You probably already know that end user technology is exploding and are feeling the effects of it in your support organization every day. Remember when IT sanctioned and standardized every hardware and software instance in the workplace? Those days are long gone. Today, it’s the driving force of productivity that dictates what will or won’t be used – and that can be hard on a support organization.

Whatever users need to do their jobs better, faster, more efficiently is what you are seeing come into the workplace. So naturally, that’s what comes into your service desk too. Support organizations see all kinds of [devices, applications, systems, and equipment][2], and it’s adding a great deal of complexity and demand to keep up with. In fact, four of the top five factors causing support ticket volumes to rise are attributed to new and current technology.

To keep up with the steady [rise of tickets][3] and stay out in front of this surge, support organizations need to take a good, hard look at the processes and technologies they use. Yesterday’s methods won’t cut it. The landscape is simply changing too fast. Supporting today’s users and getting them back to work fast requires an expanding set of skills and tools.

So where do you start with a new technology project? Just because a technology is new or hyped doesn’t mean it’s right for your organization. It’s important to understand your project goals and the experience you really want to create and match your technology choices to those goals. But don’t go it alone. Talk to your teams. Get intimately familiar with how your support organization works today. Understand your customers’ needs at a deep level. And bring the right people to the table to cover:

  * Business problem analysis: What existing business issue are stakeholders unhappy with?
  * The impact of that problem: How does that issue justify making a change?
  * Process automation analysis: What area(s) can technology help automate?
  * Other solutions: Have you considered any other options besides technology?

With these questions answered, you’re ready to entertain your technology options. Put together your “must-haves” in a requirements document and reach out to potential suppliers. During the initial information-gathering stage, assess if the supplier understands your goals and how their technology helps you meet them. To narrow the field, compare solutions side by side against your goals. Select the top two or three for more in-depth product demos before moving into product evaluations. By the time you’re ready for implementation, you have empirical, practical knowledge of how the solution will perform against your business goals.

The key takeaway is this: Technology for technology’s sake is just technology. But technology that drives business value is a solution. If you want a solution that drives results for your organization and your customers, it’s worth following a strategic selection process to match your goals with the best technology for the job.

For more insight, check out the [LogMeIn Rescue][4] and HDI webinar “[Technology and the Service Desk: Expanding Mission, Expanding Skills”][5].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html

作者:[Anand Rajaram][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/istock-1019006240-100798168-large.jpg
[2]: https://www.logmeinrescue.com/resources/datasheets/infographic-mobile-support-are-your-employees-getting-what-they-need?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[3]: https://www.logmeinrescue.com/resources/analyst-reports/the-importance-of-remote-support-in-a-shift-left-world?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[4]: https://www.logmeinrescue.com/?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[5]: https://www.brighttalk.com/webcast/8855/312289?utm_source=LogMeIn7&utm_medium=brighttalk&utm_campaign=312289

@ -1,89 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 ways to make enterprise IoT cost effective)
[#]: via: (https://www.networkworld.com/article/3401082/6-ways-to-make-enterprise-iot-cost-effective.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

6 ways to make enterprise IoT cost effective
======

Rob Mesirow, a principal at PwC’s Connected Solutions unit, offers tips for successfully implementing internet of things (IoT) projects without breaking the bank.

![DavidLeshem / Getty][1]

There’s little question that the internet of things (IoT) holds enormous potential for the enterprise, in everything from asset tracking to compliance.

But enterprise uses of IoT technology are still evolving, and it’s not yet entirely clear which use cases and practices currently make economic and business sense. So, I was thrilled to trade emails recently with [Rob Mesirow][2], a principal at [PwC’s Connected Solutions][3] unit, about how to make enterprise IoT implementations as cost effective as possible.

“The IoT isn’t just about technology (hardware, sensors, software, networks, communications, the cloud, analytics, APIs),” Mesirow said, “though tech is obviously essential. It also includes ensuring cybersecurity, managing data governance, upskilling the workforce and creating a receptive workplace culture, building trust in the IoT, developing interoperability, and creating business partnerships and ecosystems—all part of a foundation that’s vital to a successful IoT implementation.”

**[ Also read: [Enterprise IoT: Companies want solutions in these 4 areas][4] ]**

Yes, that sounds complicated—and a lot of work for a still-hard-to-quantify return. Fortunately, though, Mesirow offered up some tips on how companies can make their IoT implementations as cost effective as possible.

### 1\. Don’t wait for better technology

Mesirow advised against waiting to implement IoT projects until you can deploy emerging technology such as [5G networks][5]. That makes sense, as long as your implementation doesn’t specifically require capabilities available only in the new technology.

### 2\. Start with the basics, and scale up as needed

“Companies need to start with the basics—building one app/task at a time—instead of jumping ahead with enterprise-wide implementations and ecosystems,” Mesirow said.

“There’s no need to start an IoT initiative by tackling a huge, expensive ecosystem. Instead, begin with one manageable use case, and build up and out from there. The IoT can inexpensively automate many everyday tasks to increase effectiveness, employee productivity, and revenue.”

After you pick the low-hanging fruit, it’s time to become more ambitious.

“After getting a few successful pilots established, businesses can then scale up as needed, building on the established foundation of business processes, people experience, and technology,” Mesirow said.

### 3\. Make dumb things smart

Of course, identifying the ripest low-hanging fruit isn’t always easy.

“Companies need to focus on making dumb things smart, deploying infrastructure that’s not going to break the bank, and providing enterprise customers the opportunity to experience what data intelligence can do for their business,” Mesirow said. “Once they do that, things will take off.”

### 4\. Leverage lower-cost networks

“One key to building an IoT inexpensively is to use low-power, low-cost networks (Low-Power Wide-Area Networks (LPWAN)) to provide IoT services, which reduces costs significantly,” Mesirow said.

Naturally, he mentioned that PwC has three separate platforms with some 80 products that hang off those platforms, which he said cost “a fraction of traditional IoT offerings, with security and privacy built in.”

Despite the product pitch, though, Mesirow is right to call out the efficiencies involved in using low-cost, low-power networks instead of more expensive existing cellular.

### 5\. Balance security vs. cost

Companies need to plan their IoT network with costs vs. security in mind, Mesirow said. “Open-source networks will be less expensive, but there may be security concerns,” he said.

That’s true, of course, but there may be security concerns in _any_ network, not just open-source solutions. Still, Mesirow’s overall point remains valid: Enterprises need to carefully consider all the trade-offs they’re making in their IoT efforts.

### 6\. Account for _all_ the value IoT provides

Finally, Mesirow pointed out that “much of the cost-effectiveness comes from the _value_ the IoT provides,” and it’s important to consider the return, not just the investment.

“For example,” Mesirow said, the IoT “increases productivity by enabling the remote monitoring and control of business operations. It saves on energy costs by automatically turning off lights and HVAC when spaces are vacant, and predictive maintenance alerts lead to fewer machine repairs. And geolocation can lead to personalized marketing to customer smartphones, which can increase sales to nearby stores.”

**[ Now read this: [5 reasons the IoT needs its own networks][6] ]**

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3401082/6-ways-to-make-enterprise-iot-cost-effective.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/money_financial_salary_growth_currency_by-davidleshem-100787975-large.jpg
[2]: https://twitter.com/robmesirow
[3]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html
[4]: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
[5]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[6]: https://www.networkworld.com/article/3284506/5-reasons-the-iot-needs-its-own-networks.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

@ -1,68 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The carbon footprints of IT shops that train AI models are huge)
[#]: via: (https://www.networkworld.com/article/3401919/the-carbon-footprints-of-it-shops-that-train-ai-models-are-huge.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

The carbon footprints of IT shops that train AI models are huge
======

Artificial intelligence (AI) model training can generate five times more carbon dioxide than a car does in a lifetime, researchers at the University of Massachusetts, Amherst find.

![ipopba / Getty Images][1]

A new research paper from the University of Massachusetts, Amherst looked at the carbon dioxide (CO2) generated over the course of training several common large artificial intelligence (AI) models and found that the process can generate nearly five times the CO2 emitted by an average American car over its lifetime, including the manufacture of the car itself.

The [paper][2] specifically examined the model training process for natural-language processing (NLP), which is how AI handles natural language interactions. The study found that during the training process, more than 626,000 pounds of carbon dioxide is generated.

This is significant, since AI training is one IT process that has remained firmly on-premises and not moved to the cloud. Very expensive equipment is needed, as are large volumes of data, so the cloud isn’t the right fit for most AI training, as the report notes. Plus, IT shops want to keep that kind of IP in house. So, if you are experimenting with AI, that power bill is going to go up.

**[ Read also: [How to plan a software-defined data-center network][3] ]**

While the report used carbon dioxide as a measure, that’s still the product of electricity generation. Training involves the use of the most powerful processors, typically Nvidia GPUs, and they are not known for being low-power draws. And as the paper notes, “model training also incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time.”

Training is the most processor-intensive portion of AI. It can take days, weeks, or even months to “learn” what the model needs to know. That means power-hungry Nvidia GPUs running at full utilization for the entire time. In this case, the models were learning how to handle and process natural-language questions rather than the broken keyword strings of a typical Google search.

The report said training one model with a neural architecture generated 626,155 pounds of CO2. By contrast, one passenger flying round trip between New York and San Francisco would generate 1,984 pounds of CO2, an average American would generate 11,023 pounds in one year, and a car would generate 126,000 pounds over the course of its lifetime.

### How the researchers calculated the CO2 amounts

The researchers used four models in the NLP field that have been responsible for the biggest leaps in performance. They are Transformer, ELMo, BERT, and GPT-2. They trained all of the models on a single Nvidia Titan X GPU, with the exception of ELMo, which was trained on three Nvidia GTX 1080 Ti GPUs. Each model was trained for a maximum of one day.

**[ [Learn Java from beginning concepts to advanced design patterns in this comprehensive 12-part course!][4] ]**

They then used the number of training hours listed in the model’s original papers to calculate the total energy consumed over the complete training process. That number was converted into pounds of carbon dioxide equivalent based on the average energy mix in the U.S.
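
A back-of-the-envelope version of that conversion, with every number illustrative rather than taken from the paper (the PUE multiplier and the roughly 1 lb/kWh U.S. grid emission factor are assumptions here):

```python
# Sketch of the hours -> energy -> CO2 conversion described above.
def training_co2_lbs(avg_power_watts, hours, pue=1.6, lbs_co2_per_kwh=1.0):
    """Estimate pounds of CO2 for a training run.

    avg_power_watts: average combined draw of the training hardware
    pue: data-center power usage effectiveness (overhead multiplier)
    lbs_co2_per_kwh: grid emission factor (~1 lb/kWh is a rough U.S. average)
    """
    kwh = avg_power_watts / 1000.0 * hours * pue
    return kwh * lbs_co2_per_kwh

# Example: a multi-GPU job averaging 10 kW for 30 days.
print(round(training_co2_lbs(10_000, 24 * 30)))  # -> 11520 lbs of CO2
```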

The big takeaway is that computational costs start out relatively inexpensive, but they mushroom when additional tuning steps are used to increase the model’s final accuracy. A tuning process known as neural architecture search ([NAS][5]) is the worst offender because it does so much processing. NAS is an algorithm that searches for the best neural network architecture. It is seriously advanced AI and requires the most processing time and power.

The researchers suggest it would be beneficial to directly compare different models to perform a cost-benefit (accuracy) analysis.

“To address this, when proposing a model that is meant to be re-trained for downstream use, such as re-training on a new domain or fine-tuning on a new task, authors should report training time and computational resources required, as well as model sensitivity to hyperparameters. This will enable direct comparison across models, allowing subsequent consumers of these models to accurately assess whether the required computational resources,” the authors wrote.

They also say researchers who are cost-constrained should pool resources and avoid the cloud, as cloud compute time is more expensive. As an example, the paper says a GPU server with eight Nvidia 1080 Ti GPUs and supporting hardware is available for approximately $20,000. To develop the sample models used in their study, that hardware would cost $145,000, plus electricity to run the models, about half the estimated cost of using on-demand cloud GPUs.

“Unlike money spent on cloud compute, however, that invested in centralized resources would continue to pay off as resources are shared across many projects. A government-funded academic compute cloud would provide equitable access to all researchers,” they wrote.

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3401919/the-carbon-footprints-of-it-shops-that-train-ai-models-are-huge.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/ai-vendor-relationship-management_artificial-intelligence_hand-on-virtual-screen-100795246-large.jpg
[2]: https://arxiv.org/abs/1906.02243
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fjava
[5]: https://www.oreilly.com/ideas/what-is-neural-architecture-search
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

@ -1,95 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT security vs. privacy: Which is a bigger issue?)
[#]: via: (https://www.networkworld.com/article/3401522/iot-security-vs-privacy-which-is-a-bigger-issue.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

IoT security vs. privacy: Which is a bigger issue?
======

When it comes to the internet of things (IoT), security has long been a key concern. But privacy issues could be an even bigger threat.

![Ring][1]

If you follow the news surrounding the internet of things (IoT), you know that security issues have long been a key concern for IoT consumers, enterprises, and vendors. Those issues are very real, but I’m becoming increasingly convinced that related but fundamentally different _privacy_ vulnerabilities may well be an even bigger threat to the success of the IoT.

In June alone, we’ve seen a flood of IoT privacy issues inundate the news cycle, and observers are increasingly sounding the alarm that IoT users should be paying attention to what happens to the data collected by IoT devices.

**[ Also read: [It’s time for the IoT to 'optimize for trust'][2] and [A corporate guide to addressing IoT security][2] ]**

Predictably, most of the teeth-gnashing has come on the consumer side, but that doesn’t mean enterprise users are immune to the issue. On the one hand, just like consumers, companies are vulnerable to their proprietary information being improperly shared and misused. More immediately, companies may face backlash from their own customers if they are seen as not properly guarding the data they collect via the IoT. Too often, in fact, enterprises shoot themselves in the foot on privacy issues, with practices that range from tone-deaf to exploitative to downright illegal—leading almost [two-thirds (63%) of consumers to describe IoT data collection as “creepy,”][3] while more than half (53%) “distrust connected devices to protect their privacy and handle information in a responsible manner.”

### Ring becoming the poster child for IoT privacy issues

As a case in point, let’s look at the case of [Ring, the IoT doorbell company now owned by Amazon][4]. Ring is [reportedly working with police departments to build a video surveillance network in residential neighborhoods][5]. Police in more than 50 cities and towns across the country are apparently offering free or discounted Ring doorbells, and sometimes requiring the recipients to share footage for use in investigations. (While [Ring touts the security benefits][6] of working with law enforcement, it has asked police departments to end the practice of _requiring_ users to hand over footage, as it appears to violate the devices’ terms of service.)

Many privacy advocates are troubled by this degree of cooperation between police and Ring, but that’s only part of the problem. Last year, for example, [Ring workers in Ukraine reportedly watched customer feeds][7]. Amazingly, though, even that only scratches the surface of the privacy flaps surrounding Ring.

### Guilty by video?

According to [Motherboard][8], “Ring is using video captured by its doorbell cameras in Facebook advertisements that ask users to identify and call the cops on a woman whom local police say is a suspected thief.” While the police are apparently appreciative of the “additional eyes that may see this woman and recognize her,” the ad calls the woman a thief even though she has not been charged with a crime, much less convicted!

Ring may be today’s poster child for IoT privacy issues, but IoT privacy complaints are widespread. In many cases, it comes down to what IoT users—or others nearby—are getting in return for giving up their privacy. According to the [Guardian][9], for example, Google’s Sidewalk Labs smart city project is little more than “surveillance capitalism.” And while car owners may get a discount on auto insurance in return for sharing their driving data, that relationship is hardly set in stone. It may not be long before drivers have to give up their data just to get insurance at all.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][10] ]**

And as the recent [data breach at the U.S. Customs and Border Protection][11] once again demonstrates, private data is “[a genie without a bottle][12].” No matter what legal or technical protections are put in place, the data may always be revealed or used in unforeseen ways. Heck, when you put it all together, it’s enough to make you wonder [whether doorbells really need to be smart][13] at all.

**Read more about IoT:**

  * [Google’s biggest, craziest ‘moonshot’ yet][14]
  * [What is the IoT? How the internet of things works][15]
  * [What is edge computing and how it’s changing the network][16]
  * [Most powerful internet of things companies][17]
  * [10 Hot IoT startups to watch][18]
  * [The 6 ways to make money in IoT][19]
  * [What is digital twin technology? [and why it matters]][20]
  * [Blockchain, service-centric networking key to IoT success][21]
  * [Getting grounded in IoT networking and security][22]
  * [Building IoT-ready networks must become a priority][23]
  * [What is the Industrial IoT? [And why the stakes are so high]][24]

Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3401522/iot-security-vs-privacy-which-is-a-bigger-issue.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/ringvideodoorbellpro-100794084-large.jpg
[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
[3]: https://www.cpomagazine.com/data-privacy/consumers-still-concerned-about-iot-security-and-privacy-issues/
[4]: https://www.cnbc.com/2018/02/27/amazon-buys-ring-a-former-shark-tank-reject.html
[5]: https://www.cnet.com/features/amazons-helping-police-build-a-surveillance-network-with-ring-doorbells/
[6]: https://blog.ring.com/2019/02/14/how-rings-neighbors-creates-safer-more-connected-communities/
[7]: https://www.theinformation.com/go/b7668a689a
[8]: https://www.vice.com/en_us/article/pajm5z/amazon-home-surveillance-company-ring-law-enforcement-advertisements
[9]: https://www.theguardian.com/cities/2019/jun/06/toronto-smart-city-google-project-privacy-concerns
[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[11]: https://www.washingtonpost.com/technology/2019/06/10/us-customs-border-protection-says-photos-travelers-into-out-country-were-recently-taken-data-breach/?utm_term=.0f3a38aa40ca
[12]: https://smartbear.com/blog/test-and-monitor/data-scientists-are-sexy-and-7-more-surprises-from/
[13]: https://slate.com/tag/should-this-thing-be-smart
[14]: https://www.networkworld.com/article/3058036/google-s-biggest-craziest-moonshot-yet.html
[15]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[16]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[17]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[18]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[19]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[20]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[21]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[22]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[23]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[24]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[25]: https://www.facebook.com/NetworkWorld/
[26]: https://www.linkedin.com/company/network-world

@ -1,121 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Software Defined Perimeter (SDP): Creating a new network perimeter)
[#]: via: (https://www.networkworld.com/article/3402258/software-defined-perimeter-sdp-creating-a-new-network-perimeter.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)

Software Defined Perimeter (SDP): Creating a new network perimeter
======

Considering the way networks work today and the change in traffic patterns, both internal and to the cloud, the fixed perimeter is of limited effect.

![monsitj / Getty Images][1]

Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this is still the foundation for most networking professionals even though a lot has changed since the inception of the design.

More often than not, the fixed perimeter consists of a number of network and security appliances, thereby creating a service-chained stack, resulting in appliance sprawl. Typically, the appliances that a user may need to pass to get to the internal LAN may vary. But generally, the stack would consist of global load balancers, external firewall, DDoS appliance, VPN concentrator, internal firewall and eventually LAN segments.

The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it's just a matter of time. Someone with enough skill will eventually get through.

**[ Related: [MPLS explained – What you need to know about multi-protocol label switching][2] ]**

### Environmental changes – the cloud and mobile workforce

Considering the way networks work today and the change in traffic patterns, both internal and to the cloud, the fixed perimeter has limited effect. Nowadays, we have a very fluid network perimeter with many points of entry.

Imagine a castle with a portcullis used to control access. Gaining entry through the portcullis was easy: you only needed to pass one guard, and there was only one way in and one way out. But today, in this digital world, we have so many small doors and ways to enter, all of which need to be individually protected.

This boils down to the introduction of cloud-based application services and the changing location of the perimeter. As a result, the existing networking equipment used for the perimeter is topologically ill-located. Nowadays, everything that is important is outside the perimeter, such as remote-access workers and SaaS, IaaS and PaaS-based applications.

Users require access to the resources in various cloud services regardless of where the resources are located, resulting in complex-to-control multi-cloud environments. Objectively, the users do not and should not care where the applications are located. They just require access to the application. Also, an increasingly mobile workforce that demands anytime, anywhere access from a variety of devices has challenged enterprises to support this dynamic way of working.

There is also an increasing number of entities internal to the network, such as BYOD devices, on-site contractors, and partners, and their number will continue to grow. This ultimately leads to a world of over-connected networks.

### Over-connected networks

Over-connected networks result in complex configurations of network appliances, which in turn produce large and complex policies without any context.

They provide a level of coarse-grained access to a variety of services where the IP address does not correspond to the actual user. Traditional appliances that use static configurations to limit the incoming and outgoing traffic are commonly based on information in the IP packet and the port number.

Essentially, there is no notion of policy, nor any explanation of why a given source IP address is on the list. This approach fails to take into consideration any notion of trust and cannot dynamically adjust access in relation to device, user and application request events.

### Problems with IP addresses

Back in the early 1990s, RFC 1597 declared three IP ranges reserved for private use: 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. If an end host was configured with one of these addresses, it was considered more secure. However, this assumption of trust was shattered with the passage of time and it still haunts us today.
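
For illustration, here is that naive "trusted because private" check in miniature, using Python's standard `ipaddress` module and the ranges reserved by RFC 1597 (later RFC 1918); zero trust rejects exactly this assumption:

```python
# The flawed trust model in a nutshell: "private source address => trusted."
import ipaddress

PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def naively_trusted(addr: str) -> bool:
    """Return True if addr falls in a private range.

    An address says nothing about the user, device or application
    behind it, which is why this check conveys no real trust."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

print(naively_trusted("10.1.2.3"))     # True, yet the trust is unearned
print(naively_trusted("203.0.113.7"))  # False
```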

Network Address Translation (NAT) also changed things to a great extent. NAT allowed internal trusted hosts to communicate directly with external untrusted hosts. However, since Transmission Control Protocol (TCP) is bidirectional, it allows data to be injected by the external hosts on connections back to the internal hosts.

Also, there is no contextual information regarding the IP addresses, as their sole purpose revolved around connectivity. If you have the IP address of someone, you can connect to them. The authentication was handled higher up in the stack.

Not only do users’ IP addresses change regularly, but there’s also not a one-to-one correspondence between users and IP addresses. Anyone can communicate from any IP address they please and also insert themselves between you and the trusted resource.

Have you ever heard of the 20-year-old computer that responds to an Internet Control Message Protocol (ICMP) request, yet no one knows where it is? Such a machine would not exist on a zero trust network, as the network stays dark until the administrator turns the lights on with a whitelist policy rule set, in contrast to the legacy blacklist rule set. You can find more information on zero trust in my course: [Zero Trust Networking: The Big Picture][3].

Therefore, we can’t just rely on IP addresses and expect them to do much more than connect. As a result, we have to move away from IP addresses and network location as the proxy for access trust. The network location can no longer be the driver of network access levels. It is not fully equipped to decide the trust of a device, user or application.

### Visibility – a major gap

When we analyze networking and its flaws, visibility is a major gap in today’s hybrid environments. By and large, enterprise networks are complex beasts. More often than not, networking pros do not have accurate data or insight into who or what is accessing a network resource.

IT does not have the visibility in place to detect, for example, insecure devices, unauthorized users and potentially harmful connections that could propagate malware or perform data exfiltration.

Also, once you know how network elements connect, how do you ensure that they don’t reconnect through a broader definition of connectivity? For this, you need contextual visibility. You need full visibility into the network to see who, what, when, and how they are connecting with the device.

### What’s the workaround?

A new approach is needed that enables the application owners to protect the infrastructure located in a public or private cloud and on-premise data center. This new network architecture is known as [software-defined perimeter][4] (SDP). Back in 2013, Cloud Security Alliance (CSA) launched the SDP initiative, a project designed to develop the architecture for creating more robust networks.

The principles behind SDPs are not entirely new. Organizations within the DoD and Intelligence Communities (IC) have implemented a similar network architecture that is based on authentication and authorization prior to network access.

Typically, every internal resource is hidden behind an appliance. And a user must authenticate before visibility of the authorized services is made available and access is granted.

### Applying the zero trust framework

SDP is an extension to [zero trust][5], which removes the implicit trust from the network. The concept of SDP started with Google’s BeyondCorp, which is the general direction that the industry is heading right now.

Google’s BeyondCorp puts forward the idea that the corporate network does not have any meaning. Traditionally, the trust regarding access to an application is set by a static network perimeter containing a central appliance, which permits inbound and outbound access based on a very coarse policy.

However, access to the application should be based on other parameters, such as who the user is and the judgment of the security stance of the device, followed by some continuous assessment of the session. Rationally, only then should access be permitted.

Let’s face it, the assumption that internal traffic can be trusted is flawed, and zero trust assumes that all hosts internal to the network are internet facing, and therefore hostile.

### What is software-defined perimeter (SDP)?

SDP aims to deploy perimeter functionality for dynamically provisioned perimeters meant for clouds, hybrid environments, and on-premise data center infrastructures. There is often a dynamic tunnel that automatically gets created during the session: a one-to-one mapping between the requesting entity and the trusted resource. The important point to note here is that perimeters are formed dynamically, not solely to obey a fixed location already designed by the network team.

SDP relies on two major pillars: the authentication and authorization stages. SDPs require endpoints to authenticate and be authorized first, before obtaining network access to the protected entities. Then, encrypted connections are created in real time between the requesting systems and the application infrastructure; a conceptual sketch of this flow follows below.

Authenticating and authorizing the users and their devices before even allowing a single packet to reach the target service enforces what's known as least privilege at the network layer. Essentially, the concept of least privilege is for an entity to be granted only the minimum privileges that it needs to get its work done. Within a zero trust network, privilege is more dynamic than it would be in traditional networks since it uses many different attributes of activity to determine the trust score.
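
The sketch below condenses that authenticate-first flow; every name in it is hypothetical, mirroring the architecture rather than any particular SDP product's API:

```python
# Conceptual SDP flow: authenticate and authorize first, then (and only
# then) provision a one-to-one tunnel. Unauthorized requests get no
# response at all, keeping the protected service "dark".
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_healthy: bool  # stand-in for posture checks: patched OS, MFA, etc.
    app: str

# Least privilege: each identity is entitled to only the services it needs.
ENTITLEMENTS = {"alice": {"payroll-app"}, "bob": {"wiki"}}

def controller_decision(req: Request) -> bool:
    """Pillar 1 and 2: authenticate the user/device, then authorize the app."""
    authenticated = req.device_healthy
    authorized = req.app in ENTITLEMENTS.get(req.user, set())
    return authenticated and authorized

def connect(req: Request) -> str:
    if not controller_decision(req):
        return "no response (service stays invisible)"
    return f"encrypted tunnel established: {req.user} -> {req.app}"

print(connect(Request("alice", True, "payroll-app")))  # tunnel established
print(connect(Request("bob", True, "payroll-app")))    # no response
```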

### The dark network

Connectivity is based on a need-to-know model. Under this model, no DNS information, internal IP addresses or visible ports of internal network infrastructure are transmitted. This is why SDP assets are considered “dark.” As a result, SDP isolates any concerns about the network and application. The applications and users are treated abstractly; whether they are on-premise or in the cloud is irrelevant to the assigned policy.

Access is granted directly between the users and their devices to the application and resource, regardless of the underlying network infrastructure. There simply is no concept of inside and outside of the network. This ultimately removes the network location point as a position of advantage and also eliminates the excessive implicit trust that IP addresses offer.

**This article is published as part of the IDG Contributor Network. [Want to Join?][6]**

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402258/software-defined-perimeter-sdp-creating-a-new-network-perimeter.html

作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/sdn_software-defined-network_architecture-100791938-large.jpg
[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
[3]: http://pluralsight.com/courses/zero-trust-networking-big-picture
[4]: https://network-insight.net/2018/09/software-defined-perimeter-zero-trust/
[5]: https://network-insight.net/2018/10/zero-trust-networking-ztn-want-ghosted/
[6]: /contributor-network/signup.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

@ -1,59 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data centers should sell spare UPS capacity to the grid)
[#]: via: (https://www.networkworld.com/article/3402039/data-centers-should-sell-spare-ups-capacity-to-the-grid.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Data centers should sell spare UPS capacity to the grid
======

Distributed Energy is gaining traction, providing an opportunity for data centers to sell the excess power stored in their UPS batteries to the grid.

![Getty Images][1]

The energy storage capacity in uninterruptable power supply (UPS) batteries, often languishing dormant in data centers, could provide new revenue streams for those data centers, says Eaton, a major electrical power management company.

Excess, grid-generated power, created during times of low demand, should be stored on the now-proliferating lithium backup power systems strewn worldwide in data centers, Eaton says. Then, using an algorithm tied to grid demand, electricity should be withdrawn as necessary for grid use. It would then be slid back onto the backup batteries when not needed.

**[ Read also: [How server disaggregation can boost data center efficiency][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

The concept is called Distributed Energy and has been gaining traction in part because electrical generation is changing: emerging green power sources, such as wind and solar, now used at the grid level, have considerations that differ from those of the now-retiring fossil-fuel generation. You can generate solar power only in daylight, for example, yet much demand takes place on dark evenings.

Coal, gas, and oil deliveries have always been, to a great extent, pre-planned, just-in-time, and used for electrical generation in real time. Nowadays, though, fluctuations between supply, storage, and demand are kicking in. Electricity storage on the grid is required.

Eaton says that by piggy-backing on existing power banks, electricity distribution could be evened out better. The utilities would deliver power more efficiently, despite the peaks and troughs in demand—with the data center UPS, in effect, acting like a quasi-grid-power storage battery bank, or virtual power plant.

The objective of this UPS use case, called EnergyAware, is to regulate frequency in the grid. That’s related to the tolerances needed to make the grid work—the cycles per second, or hertz, inherent in electrical current can’t deviate too much. Abnormalities happen if there’s a sudden spike in demand but no power on hand to supply the surge.

### How the Distributed Energy concept works

The distributed energy resource (DER), which can be added to any existing lithium-ion battery bank, in any building, allows for the consumption of energy, or the distribution of it, based on a Frequency Regulation grid-demand algorithm. It charges or discharges the backup battery, connected to the grid, thus balancing the grid frequency.
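
A toy version of such a frequency-regulation rule might look like the following; the nominal frequency, deadband, and state-of-charge limits are all illustrative assumptions, not Eaton's actual algorithm:

```python
# Illustrative frequency-regulation decision rule: discharge the UPS battery
# when grid frequency sags, absorb energy when it rises, stay idle inside a
# deadband, and always keep enough reserve for the UPS's real job.
NOMINAL_HZ = 50.0    # 60.0 on North American grids
DEADBAND_HZ = 0.02   # tolerance before the UPS reacts

def regulation_action(grid_hz: float, soc: float) -> str:
    """soc: battery state of charge, 0.0-1.0."""
    if grid_hz < NOMINAL_HZ - DEADBAND_HZ and soc > 0.8:
        return "discharge"  # a micro-burst of energy onto the grid
    if grid_hz > NOMINAL_HZ + DEADBAND_HZ and soc < 0.95:
        return "charge"     # soak up excess generation
    return "idle"

for hz in (49.96, 50.00, 50.05):
    print(hz, "->", regulation_action(hz, soc=0.9))
# 49.96 -> discharge, 50.0 -> idle, 50.05 -> charge
```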

Often, not much power will need to be removed, just “micro-bursts of energy,” explains Sean James, director of Energy Research at Microsoft, in an Eaton-published promotional video. Microsoft Innovation Center in Virginia has been working with Eaton on the project. Those bursts are enough to get the frequency tolerances back on track, but the UPS still functions as designed.

Eaton says data centers should start participating in energy markets. That could mean bidding, as a producer of power, to those who need to buy it—the electricity market, also known as the grid. Data centers could conceivably even switch on generators to operate the data halls if the price for their battery-stored power were particularly lucrative at certain times.

“A data center in the future wouldn’t just be a huge load on the grid,” James says. “In the future, you don’t have a data center or a power plant. It’s something in the middle. A data plant,” he says on the Eaton [website][4].

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402039/data-centers-should-sell-spare-ups-capacity-to-the-grid.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/business_continuity_server-100777720-large.jpg
[2]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.eaton.com/us/en-us/products/backup-power-ups-surge-it-power-distribution/backup-power-ups/dual-purpose-ups-technology.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@ -1,64 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 transferable higher-education skills)
[#]: via: (https://opensource.com/article/19/6/5-transferable-higher-education-skills)
[#]: author: (Stephon Brown https://opensource.com/users/stephb)

5 transferable higher-education skills
======
If you're moving from the Ivory Tower to the Matrix, you already have
the foundation for success in the developer role.
![Two hands holding a resume with computer, clock, and desk chair ][1]

My transition from a higher-education professional into the tech realm was comparable to moving from a pond into an ocean. There was so much to learn, and after learning, there was still so much more to learn!

Rather than going down the rabbit hole and being overwhelmed by what I did not know, in the last two to three months I have taken comfort in the realization that I was not entirely out of my element as a developer. The skills I acquired during my six years as a university professional gave me the foundation to be successful in the developer role.

These skills are transferable in any direction you plan to go within or outside tech, and it's valuable to reflect on how they apply to your new position.

### 1\. Composition and documentation

Higher education is replete with opportunities to develop skills related to composition and communication. In most cases, clear writing and communication are mandatory requirements for university administrative and teaching positions. Although you may not yet be well-versed in deep technical concepts, documenting your work and writing up your progress may be two of the strongest skills you bring as a former higher-education administrator. All of those "In response to…" emails will finally come in handy when describing the inner workings of a class or leaving succinct comments for other developers to follow what you have implemented.

### 2\. Problem-solving and critical thinking

Whether you've been an adviser who sits with students and painstakingly develops class schedules for graduation or a finance buff who balances government funds, you will not leave critical thinking behind as you transition into a developer role. Although your critical thinking may have seemed specialized for your work, the skill of turning problems into opportunities is not lost when contributing to code. The experience gained while spending long days and nights revising recruitment strategies will be necessary when composing algorithms and creative ways of delivering data. Continue to foster a passion for solving problems, and you will not have any trouble becoming an efficient and skillful developer.

### 3\. Communication

Though it may seem to overlap with writing (above), communication spans verbal and written disciplines. When you're interacting with clients and leadership, you may have a leg up over your peers because of your higher-education experience. Being approachable and understanding how to manage interactions are skills that some software practitioners may not have fostered to an impactful level. Although you will experience days of staring at a screen and banging your head against the keyboard, you can rest well in knowing you can describe technical concepts and interact with a wide range of audiences, from clients to peers.

### 4\. Leadership

Sitting on that panel; planning that event; leading that workshop. All of those experiences provide you with the grounding to plan and lead smaller projects as a new developer. Leadership is not limited to heading up large and small teams; its essence lies in taking initiative. This can be volunteering to do research on a new feature or to write more extensive unit tests for your code. However you use it, your foundation as an educator will allow you to go further in technology development and maintenance.

### 5\. Research

You can Google with the best of them. Being able to distill your query down to the idea you are searching for is characteristic of a higher-education professional. Most administrator or educator jobs focus on solving problems through a defined process for qualitative, quantitative, or mixed results; therefore, cultivating your scientific mind is valuable when providing software solutions. Your research skills also open opportunities for branching into data science and machine learning.

### Bonus: Collaboration

Being able to reach across various offices and fields for event planning and program implementation fits well within team collaboration—both within your new team and across development teams. This may leak into the project-management realm, but being able to plan and divide work between teams and establish accountability will allow you, as a new developer, to understand the software development lifecycle a little more intimately because of your past related experience.

### Summary

As a developer jumping head-first into technology after years of walking students through the process of navigating higher education, I have found [imposter syndrome][2] to be a constant fear since moving into technology. However, I have been able to take heart in knowing my experience as an educator and an administrator has not been in vain. If you are like me, be encouraged in knowing that these transferable skills, many of them so-called soft skills, will continue to benefit you as a developer and a professional.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/5-transferable-higher-education-skills

作者:[Stephon Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/stephb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair )
[2]: https://en.wikipedia.org/wiki/Impostor_syndrome
@ -1,82 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (17 predictions about 5G networks and devices)
[#]: via: (https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

17 predictions about 5G networks and devices
======
Not surprisingly, the new Ericsson Mobility Report is bullish on the potential of 5G technology. Here’s a quick look at the most important numbers.
![Vertigo3D / Getty Images][1]

_“As market after market switches on 5G, we are at a truly momentous point in time. No previous generation of mobile technology has had the potential to drive economic growth to the extent that 5G promises. It goes beyond connecting people to fully realizing the Internet of Things (IoT) and the Fourth Industrial Revolution.”_ —The opening paragraph of the [June 2019 Ericsson Mobility Report][2]

Almost every significant technology advancement now goes through what [Gartner calls the “hype cycle.”][3] These days, everyone expects a new technology to be met with gushing optimism and dreamy visions of how it’s going to change the world in the blink of an eye. After a while, we all come to expect the vendors and the press to go overboard with excitement, at least until reality and disappointment set in when things don’t pan out exactly as expected.

**[ Also read:[The time of 5G is almost here][4] ]**

Even with all that in mind, though, Ericsson’s whole-hearted embrace of 5G in its Mobility Report is impressive. The optimism is backed up by lots of numbers, but they can be hard to tease out of the 36-page document. So, let’s recap some of the most important top-line predictions (with my comments at the end).

### Worldwide 5G growth projections

1. “More than 10 million 5G subscriptions are projected worldwide by the end of 2019.”
2. “[We] now expect there to be 1.9 billion 5G subscriptions for enhanced mobile broadband by the end of 2024. This will account for over 20 percent of all mobile subscriptions at that time. The peak of LTE subscriptions is projected for 2022, at around 5.3 billion subscriptions, with the number declining slowly thereafter.”
3. “In 2024, 5G networks will carry 35 percent of mobile data traffic globally.”
4. “5G can cover up to 65 percent of the world’s population in 2024.”
5. “NB-IoT and Cat-M technologies will account for close to 45 percent of cellular IoT connections in 2024.”
6. “By the end of 2024, nearly 35 percent of cellular IoT connections will be Broadband IoT, with 4G connecting the majority.” But 5G connections will support more advanced use cases.
7. “Despite challenging 5G timelines, device suppliers are expected to be ready with different band and architecture support in a range of devices during 2019.”
8. “Spectrum sharing … chipsets are currently in development and are anticipated to be in 5G commercial devices in late 2019.”
9. “[VoLTE][5] is the foundation for enabling voice and communication services on 5G devices. Subscriptions are expected to reach 2.1 billion by the end of 2019. … The number of VoLTE subscriptions is projected to reach 5.9 billion by the end of 2024, accounting for more than 85 percent of combined LTE and 5G subscriptions.”

![][6]

### Regional 5G projections

1. “In North America, … service providers have already launched commercial 5G services, both for fixed wireless access and mobile. … By the end of 2024, we anticipate close to 270 million 5G subscriptions in the region, accounting for more than 60 percent of mobile subscriptions.”
2. “In Western Europe … The momentum for 5G in the region was highlighted by the first commercial launch in April. By the end of 2024, 5G is expected to account for around 40 percent of mobile subscriptions.”
3. “In Central and Eastern Europe, … The first 5G subscriptions are expected in 2019, and will make up 15 percent of subscriptions in 2024.”
4. “In North East Asia, … the region’s 5G subscription penetration is projected to reach 47 percent [by the end of 2024].”
5. “[In India,] 5G subscriptions are expected to become available in 2022 and will represent 6 percent of mobile subscriptions at the end of 2024.”
6. “In the Middle East and North Africa, we anticipate commercial 5G deployments with leading communications service providers during 2019, and significant volumes in 2021. … Around 60 million 5G subscriptions are forecast for the end of 2024, representing 3 percent of total mobile subscriptions.”
7. “Initial 5G commercial devices are expected in the [South East Asia and Oceania] region during the first half of 2019. By the end of 2024, it is anticipated that almost 12 percent of subscriptions in the region will be for 5G.”
8. “In Latin America … the first 5G deployments will be possible in the 3.5GHz band during 2019. Argentina, Brazil, Chile, Colombia, and Mexico are anticipated to be the first countries in the region to deploy 5G, with increased subscription uptake forecast from 2020. By the end of 2024, 5G is set to make up 7 percent of mobile subscriptions.”

### Is 5G really so inevitable?

Considered individually, these predictions all seem perfectly reasonable. Heck, 10 million 5G subscriptions is only a drop in the global bucket. And rumors are already flying that Apple’s next round of iPhones will include 5G capability. Also, 2024 is still five years in the future, so why wouldn’t the faster connections drive impressive traffic stats? Similarly, it’s no surprise that North America and North East Asia are projected to see the fastest 5G penetration.

But when you look at them all together, these numbers project a sense of 5G inevitability that could well be premature. It will take a _lot_ of spending, by a lot of different parties—carriers, chip makers, equipment vendors, phone manufacturers, and consumers—to make this kind of growth a reality.

I’m not saying 5G won’t take over the world. I’m just saying that when so many things have to happen in a relatively short time, there are a lot of opportunities for the train to jump the tracks. Don’t be surprised if it takes longer than expected for 5G to turn into the worldwide default standard Ericsson—and everyone else—seems to think it will inevitably become.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/5g_wireless_technology_network_connections_by_credit-vertigo3d_gettyimages-1043302218_3x2-100787550-large.jpg
[2]: https://www.ericsson.com/assets/local/mobility-report/documents/2019/ericsson-mobility-report-june-2019.pdf
[3]: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle
[4]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[5]: https://www.gsma.com/futurenetworks/technology/volte/
[6]: https://images.idgesg.net/images/article/2019/06/ericsson-mobility-report-june-2019-graph-100799481-large.jpg
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -1,144 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why your workplace arguments aren't as effective as you'd like)
[#]: via: (https://opensource.com/open-organization/19/6/barriers-productive-arguments)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland/users/ron-mcfarland)

Why your workplace arguments aren't as effective as you'd like
======
Open organizations rely on open conversations. These common barriers to
productive argument often get in the way.
![Arrows pointing different directions][1]

Transparent, frank, and often contentious arguments are part of life in an open organization. But how can we be sure those conversations are _productive_ —not _destructive_?

This is the second installment of a two-part series on how to argue and actually achieve something. In the [first article][2], I explained what arguments are (and are not), according to author Sinnott-Armstrong in his book _Think Again: How to Reason and Argue._ I also offered some suggestions for making arguments as productive as possible.

In this article, I'll examine three barriers to productive arguments that Sinnott-Armstrong elaborates in his book: incivility, polarization, and language issues. Finally, I'll explain his suggestions for addressing those barriers.

### Incivility

"Incivility" has become a social concern in recent years. Consider this: As a tactic in arguments, incivility _can_ have an effect in certain situations—and that's why it's a common strategy. Sinnott-Armstrong notes that incivility:

* **Attracts attention:** Incivility draws people's attention in one direction, sometimes to misdirect attention from or outright obscure other issues. It redirects people's attention to shocking statements. Incivility, exaggeration, and extremism can increase the size of an audience.

* **Energizes:** Sinnott-Armstrong writes that seeing someone being uncivil on a topic of interest can generate energy from a state of powerlessness.

* **Stimulates memory:** Forgetting shocking statements is difficult; they stick in our memory more easily than statements that are less surprising to us.

* **Excites the powerless:** The groups most likely to believe and invest in someone being uncivil are those that feel they're powerless and being treated unfairly.

Unfortunately, incivility as a tactic in arguments has its costs. One such cost is polarization.

### Polarization

Sinnott-Armstrong writes about six forms of polarization:

* **Distance:** If two people's or groups' views are far apart on some relevant scale, with significant disagreements and little common ground, then they're polarized.

* **Differences:** If two people or groups have fewer values and beliefs _in common_ than they have _not in common_, then they're polarized.

* **Antagonism:** Groups are more polarized the more they feel hatred, disdain, fear, or other negative emotions toward other people or groups.

* **Incivility:** Groups tend to be more polarized when they talk more negatively about people of the other groups.

* **Rigidity:** Groups tend to be more polarized when they treat their values as indisputable and will not compromise.

* **Gridlock:** Groups tend to be more polarized when they're unable to cooperate and work together toward common goals.

And I'll add one more form of polarization to Sinnott-Armstrong's list:

* **Non-disclosure:** Groups tend to be more polarized when one or both of them refuses to share valid, verifiable information—or distracts the other with useless or irrelevant information. One of the ways people polarize is by not talking to each other and withholding information. Similarly, they talk about subjects that distract from the issue at hand. Some issues are difficult to talk about, but discussing them is how solutions get explored.

### Language issues

Language issues can be argument-stoppers, Sinnott-Armstrong says. In particular, he outlines the following language-related barriers to productive argument.

* **Guarding:** Using words like "all" can make a statement unbelievable; words like "sometimes" can make a statement too vague.

* **Assuring:** Simply stating "trust me, I know what I'm talking about," without offering evidence that this is the case, can impede arguments.

* **Evaluating:** Offering an evaluation of something—like saying "It is good"―without any supporting reasoning.

* **Discounting:** This involves anticipating what the other person will say and attempting to weaken it as much as possible by framing an argument in a negative way. (Contrast these two sentences, for example: "Ramona is smart but boring" and "Ramona is boring but smart." The difference is subtle, but you'd probably want to spend less time with Ramona if you heard the first statement about her than if you heard the second.)

Identifying discussion-stoppers like these can help you avoid shutting down a discussion that would otherwise achieve beneficial outcomes. In addition, Sinnott-Armstrong specifically draws readers' attention to two other language problems that can kill productive debates: vagueness and ambiguity.

* **Vagueness:** This occurs when a word or sentence is not precise enough, leaving many ways to interpret its true meaning and intent, which leads to confusion. Consider the sentence "It is big." "It" must be defined if it's not already obvious to everyone in the conversation. And a word like "big" must be clarified through comparison to something that everyone has agreed upon.

* **Ambiguity:** This occurs when a sentence could have two distinct meanings. For example: "Police killed man with axe." Who was holding the axe, the man or the police? "My neighbor had a friend for dinner." Did your neighbor invite a friend to share a meal—or did she eat her friend?

### Overcoming barriers

To help readers avoid these common roadblocks to productive arguments, Sinnott-Armstrong recommends a simple, four-step process for evaluating another person's argument.

1. **Observation:** First, observe a stated opinion and its related evidence to determine the precise nature of the claim. This might require you to ask some questions for clarification (you'll remember I employed this technique when arguing with my belligerent uncle, which I described [in the first article of this series][2]).

2. **Hypothesis:** Develop a hypothesis about the argument. In this case, the hypothesis should be an inference based on generally acceptable standards (for more on the structure of arguments themselves, also see [the first part of this series][2]).

3. **Comparison:** Compare that hypothesis with others and evaluate which is more accurate. More important issues will require you to conduct more comparisons. In other cases, premises are so obvious that no further explanation is required.

4. **Conclusion:** From the comparison analysis, reach a conclusion about whether your hypothesis about a competing argument is correct.

In many cases, the question is not whether a particular claim is _correct_ or _incorrect_, but whether it is _believable._ So Sinnott-Armstrong also offers a four-step "believability test" for evaluating claims of this type.

1. **Expertise:** Does the person presenting the argument have authority in an appropriate field? Being a specialist in one field doesn't necessarily make that person an expert in another.

2. **Motive:** Would self-interest or other personal motives compel a person to withhold information or make false statements? To confirm someone's statements, it might be wise to seek a totally separate, independent authority for confirmation.

3. **Sources:** Are the sources the person offers as evidence of a claim recognized experts? Do those sources have expertise on the specific issue addressed?

4. **Agreement:** Is there agreement among many experts within the same specialty?

### Let's argue

When I was a university student, I would usually sit toward the front of the classroom. When I didn't understand something, I would start asking questions for clarification. Everyone else in the class would just sit silently, saying nothing. After class, however, other students would come up to me and thank me for asking those questions—because everyone else in the room was confused, too.

Clarification is a powerful act—not just in the classroom, but during arguments anywhere. Building an organizational culture in which people feel empowered to ask for clarification is critical for productive arguments (I've [given presentations on this topic][3] before). If members have the courage to clarify premises, and they can do so in an environment where others don't think they're being belligerent, then this might be the key to a successful and productive argument.

If you really want to strengthen your ability to argue, find someone who totally disagrees with you but wants to learn and understand your beliefs. Then, practice some of Sinnott-Armstrong's suggestions. Arguing productively will enhance [transparency, inclusivity, and collaboration][4] in your organization—leading to a more open culture.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/6/barriers-productive-arguments

作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ron-mcfarland/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/directions-arrows.png?itok=EE3lFewZ (Arrows pointing different directions)
[2]: https://opensource.com/open-organization/19/5/productive-arguments
[3]: https://www.slideshare.net/RonMcFarland1/argue-successfully-achieve-something
[4]: https://opensource.com/open-organization/resources/open-org-definition
@ -1,67 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (With Tableau, SaaS king Salesforce becomes a hybrid cloud company)
[#]: via: (https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

With Tableau, SaaS king Salesforce becomes a hybrid cloud company
======
Once dismissive of software, Salesforce acknowledges the inevitability of the hybrid cloud.
![Martyn Williams/IDGNS][1]

I remember a time when people at Salesforce events would hand out pins that read “Software” inside a red circle with a slash through it. The High Priest of SaaS (a.k.a. CEO Marc Benioff) was so adamantly opposed to installed, on-premises software that his keynotes were always comical.

Now, Salesforce is prepared to [spend $15.7 billion to acquire Tableau Software][2], the leader in on-premises data analytics.

On the hell-freezes-over scale, this is up there with Microsoft embracing Linux or Apple PR people returning a phone call. Well, we know at least one of those has happened.

**[ Also read:[Hybrid Cloud: The time for adoption is upon us][3] | Stay in the know: [Subscribe and get daily newsletter updates][4] ]**

So, why would a company that is so steeped in the cloud, so anti-on-premises software, make such a massive purchase?

Partly it is because Benioff and company are finally coming to the same conclusion as most everyone else: The hybrid cloud, a mix of on-premises systems and public cloud, is the wave of the future, and pure cloud plays are in the minority.

The reality is that data is hybrid and does not sit in a single location, and Salesforce is finally acknowledging this, said Tim Crawford, president of Avoa, a strategic CIO advisory firm.

“I see the acquisition of Tableau by Salesforce as less about getting into the on-prem game as it is a reality of the world we live in. Salesforce needed a solid analytics tool that went well beyond their existing capability. Tableau was that tool,” he said.

Salesforce also knows it needs a better understanding of its customers and of the data insights that drive customer decisions. That data is both on-prem and in the cloud, Crawford noted. It is in Salesforce, in other solutions, and in the myriad of Excel spreadsheets spread across employee systems. Tableau crosses the hybrid boundaries and brings a straightforward way to visualize data.

Salesforce had analytics features as part of its SaaS platform, but they were geared toward its own platform, whereas Tableau is used everywhere and supports all manner of analytics.

“There’s a huge overlap between Tableau customers and Salesforce customers,” Crawford said. “The data is everywhere in the enterprise, not just in Salesforce. Salesforce does a great job with its own data, but Tableau does great with data in a lot of places because it’s not tied to one platform. So, it opens up where the data comes from and the insights you get from the data.”

Crawford said that once the deal is done and Tableau has deeper pockets behind it, the organization may be able to innovate faster or do things it was unable to do before. That hardly indicates Tableau was struggling, though. It pulled in [$1.16 billion in revenue][6] in 2018.

Crawford also expects Salesforce to push Tableau to open up new possibilities for customer insights by unlocking customer data inside and outside of Salesforce. One challenge for the two companies is to maintain neutrality so that they don’t lose the ability to use Tableau for non-customer-centric activities.

“It’s a beautiful way to visualize large sets of data that have nothing to do with customer centricity,” he said.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/09/150914-salesforce-dreamforce-2-100614575-large.jpg
[2]: https://www.cio.com/article/3402026/how-salesforces-tableau-acquisition-will-impact-it.html
[3]: http://www.networkworld.com/article/2172875/cloud-computing/hybrid-cloud--the-year-of-adoption-is-upon-us.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[6]: https://www.geekwire.com/2019/tableau-hits-841m-annual-recurring-revenue-41-transition-subscription-model-continues/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -1,85 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Carrier services help expand healthcare, with 5G in the offing)
[#]: via: (https://www.networkworld.com/article/3403366/carrier-services-help-expand-healthcare-with-5g-in-the-offing.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Carrier services help expand healthcare, with 5G in the offing
======
Many telehealth initiatives tap into wireless networking supplied by service providers, which may soon offer options such as private CBRS-based networks and 5G to support remote medical care.
![Thinkstock][1]

There are connectivity options aplenty for most types of [IoT][2] deployment, but the idea of simply handing the networking part of the equation off to a national licensed wireless carrier could be the best one for certain kinds of deployments in the medical field.

Telehealth systems, for example, are still a relatively new facet of modern medicine, but they’re already among the most important applications that use carrier networks to deliver care. One such system is operated by the University of Mississippi Medical Center, for the treatment and education of diabetes patients.

**[More on wireless:[The time of 5G is almost here][3]]**

**[ Now read[20 hot jobs ambitious IT pros should shoot for][4]. ]**

Greg Hall is the director of IT at UMMC’s center for telehealth. He said that the remote patient monitoring system is relatively simple by design – diabetes patients receive a tablet computer that they can use to input and track their blood sugar levels, alert clinicians to symptoms like nerve pain or foot sores, and even videoconference with their doctors directly. The tablet connects via Verizon, AT&T or CSpire – depending on who’s got the best coverage in a given area – back to UMMC’s servers.

According to Hall, there are multiple advantages to using carrier connectivity instead of unlicensed technology (i.e., purpose-built [Wi-Fi][5] or the like) to connect patients – some of whom live in remote parts of the state – to their caregivers.

“We weren’t expecting everyone who uses the service to have Wi-Fi,” he said, “and they can take their tablet with them if they’re traveling.”

The system serves about 250 patients in Mississippi, up from roughly 175 in the 2015 pilot program that got the effort off the ground. Nor is it strictly limited to diabetes care – Hall said that it has already been extended to patients suffering from chronic obstructive pulmonary disease and asthma, and even used for prenatal care, with further expansion in the offing.

“The goal of our program isn’t just the monitoring piece, but also the education piece, teaching a person to live with their [condition] and thrive,” he said.

It hasn’t all been smooth sailing. One issue was caused by the natural foliage of the area, as dense areas of pine trees can cause transmission problems, thanks to their needles being a particularly troublesome length that interferes with 2.5GHz wireless signals. But Hall said that the team has been able to install signal boosters or repeaters to overcome that obstacle.

Neurologist Dr. Allen Gee’s practice in Wyoming attempts to address a similar issue – far-flung patients with medical needs that might not be addressed by the sparse local-care options. From his main office in Cody, he said, he can cover half the state via telepresence, using a purpose-built system that is based on cellular-data connectivity from TCT, Spectrum and AT&T, as well as remote audiovisual equipment and a link to electronic health records stored in distant locations. That allows him to receive patient data, audio/visual information and even imaging diagnostics remotely. Some specialists in the state are able to fly to those remote locations; others are not.

While Gee’s preference is to meet with patients in person, that’s just not always possible, he said.

“Medical specialists don’t get paid for windshield time,” he noted. “Being able to transfer information from an EHR facilitates the process of learning about the patient.”

### 5G is coming

According to Alan Stewart-Brown, vice president at infrastructure management vendor Opengear, there’s a lot to like about current carrier networks for medical use – particularly wide coverage and a lack of interference – but there are bigger things to come.

“We have customers that have equipment in ambulances for instance, where they’re livestreaming patients’ vital signs to consoles that doctors can monitor,” he said. “They’re using carrier 4G for that right now and it works well enough, but there are limitations, namely latency, which you don’t get on [5G][6].”

Beyond the simple fact of increased throughput and lower latency, widespread 5G deployments could open a wide array of new possibilities for medical technology, mostly involving real-time, very-high-definition video streaming. These include medical VR, remote surgery and the like.

“The process you use to do things like real-time video – right now on a 4G network, that may or may not have a delay,” said Stewart-Brown. “Once you can get rid of the delay, the possibilities are endless as to what you can use the technology for.”

### Citizens band

Ron Malenfant, chief architect for service provider IoT at Cisco, agreed that the future of 5G for medical IoT is bright, but said that the actual applications of the technology have to be carefully thought out.

“The use cases need to be worked on,” he said. “The innovative [companies] are starting to say ‘OK, what does 5G mean to me’ and starting to plan use cases.”

One area that the carriers themselves have been eyeing recently is the CBRS band of radio frequencies, which sits around 3.5GHz. It’s what’s referred to as “lightly licensed” spectrum, in that parts of it are used for things like CB radio and other parts are the domain of the U.S. armed forces, and it could be used to build private networks for institutional users like hospitals, instead of deploying small but expensive 4G cells. The idea is that the institutions would be able to lease those frequencies for their specific area from the carrier directly for private LTE/CBRS networks and, eventually, 5G, Malenfant said.

There’s also the issue, of course, that there are still a huge number of unknowns around 5G, which isn’t expected to supplant LTE in the U.S. for at least another year or so. The medical field’s stiff regulatory requirements could also prove a stumbling block for the adoption of newer wireless technology.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403366/carrier-services-help-expand-healthcare-with-5g-in-the-offing.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/stethoscope_mobile_healthcare_ipad_tablet_doctor_patient-100765655-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[5]: https://www.networkworld.com/article/3238664/80211-wi-fi-standards-and-speeds-explained.html
[6]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cracks appear in Intel’s grip on supercomputing)
[#]: via: (https://www.networkworld.com/article/3403443/cracks-appear-in-intels-grip-on-supercomputing.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Cracks appear in Intel’s grip on supercomputing
======
New competitors threaten Intel’s dominance in the high-performance computing (HPC) world, and we’re not even talking about AMD (yet).
![Randy Wong/LLNL][1]

It’s June, so it’s that time again for the twice-yearly Top 500 supercomputer list, where bragging rights are established or, in most cases, reaffirmed. The list constantly shifts as new trends appear, and one of them might be a break in Intel’s dominance.

[Supercomputers in the top 10 list][2] include a lot of IBM Power-based systems, and almost all run Nvidia GPUs. But there’s more going on than that.

For starters, an ARM supercomputer has shown up, at #156. [Astra][3] at Sandia National Laboratories is an HPE system running Cavium (now Marvell) ThunderX2 processors. It debuted on the list at #204 last November, but thanks to upgrades, it has moved up the list. It won’t be the last ARM server to show up, either.

**[ Also see:[10 of the world's fastest supercomputers][2] | Get daily insights: [Sign up for Network World newsletters][4] ]**

Second is the appearance of four Nvidia DGX servers, with the [DGX SuperPOD][5] ranking the highest at #22. [DGX systems][6] are basically compact GPU boxes with a Xeon just to boot the thing. The GPUs do all the heavy lifting.

AMD hasn’t shown up yet with its Epyc processors, but it will, given that Cray is building Epyc-based systems for the government.

This signals a breaking up of the hold Intel has had on the high-performance computing (HPC) market for a long time, said Ashish Nadkarni, group vice president in IDC's worldwide infrastructure practice. “The Intel hold has already been broken up by all the accelerators in the supercomputing space. The more accelerators they use, the less need they have for Xeons. They can go with other processors that do justice to those accelerators,” he told me.

With so much work in HPC and artificial intelligence (AI) being done by GPUs, the x86 processor becomes just a boot processor in a way. I wasn’t kidding about the DGX box. It’s got one Xeon and eight Tesla GPUs. And the Xeon is an E5, a midrange part.

“They don’t need high-end Xeons in servers any more, although there’s a lot of supercomputers that just use CPUs. The fact is there are so many options now,” said Nadkarni. One example of an all-CPU system is [Frontera][8], a Dell-based system at the Texas Advanced Computing Center in Austin.

The top two computers, Sierra and Summit, both run IBM Power9 RISC processors, as well as Nvidia GPUs. All told, Nvidia is in 125 of the 500 supercomputers, including five of the top 10, the fastest computer in the world, the fastest in Europe (Piz Daint) and the fastest in Japan (ABCI).

Lenovo was the top hardware provider, beating out Dell, HPE, and IBM combined. That’s because of its large presence in its native China. Nadkarni said Lenovo, which acquired the IBM x86 server business in 2014, has benefitted from the IBM installed base, which has continued to want the same tech from Lenovo under new ownership.

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403443/cracks-appear-in-intels-grip-on-supercomputing.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/sierra875x500-100778404-large.jpg
[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[3]: https://www.top500.org/system/179565
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.top500.org/system/179691
[6]: https://www.networkworld.com/article/3196088/nvidias-new-volta-based-dgx-1-supercomputer-puts-400-servers-in-a-box.html
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[8]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html#slide7
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -1,77 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Several deals solidify the hybrid cloud’s status as the cloud of choice)
[#]: via: (https://www.networkworld.com/article/3403354/several-deals-solidify-the-hybrid-clouds-status-as-the-cloud-of-choice.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Several deals solidify the hybrid cloud’s status as the cloud of choice
======
On-premises and cloud connections are being built by all the top vendors to bridge legacy and modern systems, creating hybrid cloud environments.
![Getty Images][1]

The hybrid cloud market is expected to grow from $38.27 billion in 2017 to $97.64 billion by 2023, at a Compound Annual Growth Rate (CAGR) of 17.0% during the forecast period, according to Markets and Markets.
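
Those figures hang together; as a quick back-of-the-envelope check (my own sketch, not part of the report), compounding the 2017 figure at 17.0% for the six years to 2023 lands within about half a percent of the projected total:

```python
# Sanity check of the Markets and Markets projection quoted above:
# $38.27B in 2017, compounded at a 17.0% CAGR for six years (to 2023).
start, cagr, years = 38.27, 0.17, 6
projected = start * (1 + cagr) ** years
print(f"${projected:.2f}B")  # prints $98.17B, close to the quoted $97.64B
# The small gap suggests the report's underlying CAGR is nearer 16.9%.
```
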
The research firm said the hybrid cloud is rapidly becoming a leading cloud solution, as it provides various benefits, such as cost, efficiency, agility, mobility, and elasticity. One of the many reasons is the need for interoperability standards between cloud services and existing systems.

Unless you are a startup and can be born in the cloud, you have legacy data systems that need to be bridged, which is where the hybrid cloud comes in.

So, in very short order we’ve seen a bunch of new alliances involving the old and new guard, reiterating that the need for hybrid solutions remains strong.

**[ Read also:[What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

### HPE/Google

In April, Hewlett Packard Enterprise (HPE) and Google announced a deal in which HPE introduced a variety of server solutions for Google Cloud’s Anthos, along with a consumption-based model for the validated HPE on-premises infrastructure that is integrated with Anthos.

Following up on that, the two just announced a strategic partnership to create a hybrid cloud for containers by combining HPE’s on-premises infrastructure, Cloud Data Services, and GreenLake consumption model with Anthos. This allows for:

* Bi-directional data mobility and consistent data services between on-premises and cloud
* Application workload mobility to move containerized app workloads across on-premises and multi-cloud environments
* Multi-cloud flexibility, offering the choice of HPE Cloud Volumes or Anthos, whichever works best for the workload
* Unified hybrid management through Anthos, so customers can get a unified and consistent view of their applications and workloads regardless of where they reside
* Consumption-based charging as a service via HPE GreenLake

### IBM/Cisco

This deal furthers an existing partnership between IBM and Cisco designed to deliver a common and secure developer experience across on-premises and public cloud environments for building modern applications.

[Cisco said it will support IBM Cloud Private][4], an on-premises container application development platform, on Cisco HyperFlex and HyperFlex Edge hyperconverged infrastructure. This includes support for IBM Cloud Pak for Applications. IBM Cloud Paks deliver enterprise-ready containerized software solutions and developer tools for building apps and then easily moving them to any cloud—public or private.

This architecture delivers a common and secure Kubernetes experience across on-premises (including edge) and public cloud environments. IBM’s Multicloud Manager covers monitoring and management of clusters and container-based applications running from on-premises to the edge, while Cisco’s Virtual Application Centric Infrastructure (ACI) will allow customers to extend their network fabric from on-premises to the IBM Cloud.

### IBM/Equinix

Equinix expanded its collaboration with IBM Cloud to bring private and scalable connectivity to global enterprises via Equinix Cloud Exchange Fabric (ECX Fabric). This provides private connectivity to IBM Cloud, including Direct Link Exchange, Direct Link Dedicated and Direct Link Dedicated Hosting, that is secure and scalable.

ECX Fabric is an on-demand, SDN-enabled interconnection service that allows any business to connect its own distributed infrastructure to any other company’s distributed infrastructure, including cloud providers. Direct Link provides IBM customers with a connection between their network and IBM Cloud. So ECX Fabric provides IBM customers with a secure and scalable network connection to the IBM Cloud service.

At the same time, ECX Fabric provides secure connections to other cloud providers, and most customers prefer a multi-vendor approach to avoid vendor lock-in.

“Each of the partnerships focus on two things: 1) supporting a hybrid-cloud platform for their existing customers by reducing the friction to leveraging each solution and 2) leveraging the unique strength that each company brings. Each of the solutions are unique and would be unlikely to compete directly with other partnerships,” said Tim Crawford, president of Avoa, an IT consultancy.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403354/several-deals-solidify-the-hybrid-clouds-status-as-the-cloud-of-choice.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cloud_hand_plus_sign_private-100787051-large.jpg
[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3403363/cisco-connects-with-ibm-in-to-simplify-hybrid-cloud-deployment.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,100 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data Still Dominates)
[#]: via: (https://theartofmachinery.com/2019/06/30/data_still_dominates.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)

Data Still Dominates
======

Here’s [a quote from Linus Torvalds in 2006][1]:

> I’m a huge proponent of designing your code around the data, rather than the other way around, and I think it’s one of the reasons git has been fairly successful… I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.

Which sounds a lot like [Eric Raymond’s “Rule of Representation” from 2003][2]:

> Fold knowledge into data, so program logic can be stupid and robust.

Which was just his summary of ideas like [this one from Rob Pike in 1989][3]:

> Data dominates. If you’ve chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

Which cites [Fred Brooks from 1975][4]:

> ### Representation is the Essence of Programming
>
> Beyond craftsmanship lies invention, and it is here that lean, spare, fast programs are born. Almost always these are the result of strategic breakthrough rather than tactical cleverness. Sometimes the strategic breakthrough will be a new algorithm, such as the Cooley-Tukey Fast Fourier Transform or the substitution of an n log n sort for an n² set of comparisons.
>
> Much more often, strategic breakthrough will come from redoing the representation of the data or tables. This is where the heart of your program lies. Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.

So, smart people have been saying this again and again for nearly half a century: focus on the data first. But sometimes it feels like the most famous piece of smart programming advice that everyone forgets.

Let me give some real examples.

### The Highly Scalable System that Couldn’t

This system was designed from the start to handle CPU-intensive loads with incredible scalability. Nothing was synchronous. Everything was done with callbacks, task queues and worker pools.

But there were two problems: The first was that the “CPU-intensive load” turned out not to be that CPU-intensive after all — a single task took a few milliseconds at worst. So most of the architecture was doing more harm than good. The second problem was that although it sounded like a highly scalable distributed system, it wasn’t one — it only ran on one machine. Why? Because all communication between asynchronous components was done using files on the local filesystem, which was now the bottleneck for any scaling. The original design didn’t say much about data at all, except to advocate local files in the name of “simplicity”. Most of the document was about all the extra architecture that was “obviously” needed to handle the “CPU-intensiveness” of the load.
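
To see how the local-file choice capped scaling, here’s a minimal sketch (the paths and names are invented, not from the system described) of what that inter-component communication amounts to: the “queue” is a directory on one machine’s filesystem, so extra workers on other hosts simply can’t see the work.

```python
# A sketch of a "task queue" built on local files: functional, but it
# quietly pins the whole "distributed" system to a single machine.
import json, os, uuid

QUEUE_DIR = "/var/tmp/taskqueue"  # local filesystem == single machine

def enqueue(task):
    os.makedirs(QUEUE_DIR, exist_ok=True)
    path = os.path.join(QUEUE_DIR, f"{uuid.uuid4()}.json")
    with open(path, "w") as f:
        json.dump(task, f)

def dequeue():
    # Any worker that isn't on this host sees an empty (or missing)
    # directory; that's the scaling bottleneck the design never named.
    for name in sorted(os.listdir(QUEUE_DIR)):
        path = os.path.join(QUEUE_DIR, name)
        with open(path) as f:
            task = json.load(f)
        os.remove(path)
        return task
    return None

enqueue({"job": "resize-image", "id": 1})
print(dequeue())  # {'job': 'resize-image', 'id': 1}
```
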
### The Service-Oriented Architecture that was Still Data-Oriented

This system followed a microservices design, made up of single-purpose apps with REST-style APIs. One component was a database that stored documents (basically responses to standard forms, and other electronic paperwork). Naturally it exposed an API for basic storage and retrieval, but pretty quickly there was a need for more complex search functionality. The designers felt that adding this search functionality to the existing document API would have gone against the principles of microservices design. They could talk about “search” as being a different kind of service from “get/put”, so their architecture shouldn’t couple them together. Besides, the tool they were planning to use for search indexing was separate from the database itself, so creating a new service made sense for implementation, too.

In the end, a search API was created containing a search index that was essentially a duplicate of the data in the main database. This data was being updated dynamically, so any component that mutated document data through the main database API had to also update the search API. It’s impossible to do this with REST APIs without race conditions, so the two sets of data kept going out of sync every now and then, anyway.

Despite what the architecture diagram promised, the two APIs were tightly coupled through their data dependencies. Later on it was recognised that the search index should be an implementation detail of a unified document service, and this made the system much more maintainable. “Do one thing” works at the data level, not the verb level.
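
To make that lesson concrete, here’s a minimal sketch (all names are hypothetical, not from the system described) of what “do one thing at the data level” can look like: the search index becomes a private implementation detail of a single document service, so a write can never leave the index out of sync with the primary data.

```python
class DocumentService:
    """One service owns the documents *and* the index derived from them."""

    def __init__(self):
        self._docs = {}   # doc_id -> text (the primary data)
        self._index = {}  # term -> set of doc_ids (derived data)

    def put(self, doc_id, text):
        # Drop stale index entries if we're overwriting an existing doc,
        # then update storage and index together, in one place.
        old = self._docs.get(doc_id)
        if old is not None:
            for term in old.split():
                self._index.get(term, set()).discard(doc_id)
        self._docs[doc_id] = text
        for term in text.split():
            self._index.setdefault(term, set()).add(doc_id)

    def get(self, doc_id):
        return self._docs[doc_id]

    def search(self, term):
        # Callers see one API for storage *and* search; they never learn
        # an index exists, so there is no second API to keep in sync.
        return sorted(self._index.get(term, set()))

svc = DocumentService()
svc.put("form-1", "annual leave request")
print(svc.search("leave"))  # ['form-1']
```
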
### The Fantastically Modular and Configurable Ball of Mud

This system was a kind of automated deployment pipeline. The original designers wanted to make a tool that was flexible enough to solve deployment problems across the company. It was written as a set of pluggable components, with a configuration file system that not only configured the components, but acted as a [DSL][5] for programming how the components fitted into the pipeline.

Fast forward a few years and it’s turned into “that program”. There was a long list of known bugs that no one was ever fixing. No one wanted to touch the code out of fear of breaking things. No one used any of the flexibility of the DSL. Everyone who used the program copy-pasted the same known-working configuration that everyone else used.

What had gone wrong? Although the original design document used words like “modular”, “decoupled”, “extensible” and “configurable” a lot, it never said anything about data. So, data dependencies between components ended up being handled in an ad-hoc way using a globally shared blob of JSON. Over time, components made more and more undocumented assumptions about what was in or not in the JSON blob. Sure, the DSL allowed rearranging components into any order, but most configurations didn’t work.
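
Here’s a minimal sketch of the data-first alternative (the stage and field names are invented for illustration): each pipeline stage declares the data it consumes and produces as an explicit type, so the dependencies between components are written down and checkable instead of being ad-hoc assumptions about a shared JSON blob.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BuildArtifact:
    name: str
    version: str
    archive_path: str

@dataclass(frozen=True)
class DeployResult:
    host: str
    artifact: BuildArtifact
    ok: bool

def package(name: str, version: str) -> BuildArtifact:
    # ... build the archive; the output contract is explicit ...
    return BuildArtifact(name, version, f"/builds/{name}-{version}.tar.gz")

def deploy(artifact: BuildArtifact, host: str) -> DeployResult:
    # deploy() can only follow package(): wiring the stages in the wrong
    # order is a type error, not a runtime surprise about which keys
    # happen to be present in a global blob.
    return DeployResult(host, artifact, ok=True)

result = deploy(package("webapp", "1.4.2"), "prod-01")
print(result.ok)  # True
```
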
### Lessons
|
||||
|
||||
I chose these three examples because they’re easy to explain, not to pick on others. I once tried to build a website, and failed trying to instead build some cringe-worthy XML database that didn’t even solve the data problems I had. Then there’s the project that turned into a broken mockery of half the functionality of `make`, again because I didn’t think about what I really needed. I wrote a post before based on a time I wrote [a castle-in-the-sky OOP class hierarchy that should have been encoded in data instead][6].
|
||||
|
||||
Update:
|
||||
|
||||
Apparently many people still thought I wrote this to make fun of others. People who’ve actually worked with me will know I’m much more interested in the things I’m fixing than in blaming the people who did most of the work building them, but, okay, here’s what I think of the engineers involved.
|
||||
|
||||
Honestly, the first example obviously happened because the designer was more interested in bringing a science project to work than in solving the problem at hand. Most of us have done that (mea culpa), but it’s really annoying to our colleagues who’ll probably have to help maintain them when we’re bored of them. If this sounds like you, please don’t get offended; please just stop. (I’d still rather work on the single-node distributed system than anything built around my “XML database”.)
|
||||
|
||||
There’s nothing personal in the second example. Sometimes it feels like everyone is talking about how wonderful it is to split up services, but no one is talking about exactly when not to. People are learning the hard way all the time.
|
||||
|
||||
The third example was actually from some of the smartest people I’ve ever had the chance to work with.
|
||||
|
||||
(End update.)

“Does this talk about the problems created by data?” turns out to be a pretty useful litmus test for good systems design. It’s also pretty handy for detecting false expert advice. The hard, messy systems design problems are data problems, so false experts love to ignore them. They’ll show you a wonderfully beautiful architecture, but without talking about what kind of data it’s appropriate for, and (crucially) what kind of data it isn’t.

For example, a false expert might tell you that you should use a pub/sub system because pub/sub systems are loosely coupled, and loosely coupled components are more maintainable. That sounds nice and results in pretty diagrams, but it’s backwards thinking. Pub/sub doesn’t _make_ your components loosely coupled; pub/sub _is_ loosely coupled, which may or may not match your data needs.
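
A small, hypothetical illustration of that point: even with a broker in the middle, a subscriber whose job depends on the publisher’s data is still coupled to that data.

```python
# Hypothetical services: pub/sub doesn't remove data coupling. The
# subscriber receives a "decoupled" event, then immediately reaches back
# into the publisher's data and assumes its schema.
topic = []  # stand-in for a message broker topic

documents = {"doc-1": {"title": "Q3 report", "body": "..."}}

def on_document_updated(event):
    doc = documents[event["doc_id"]]    # e.g. GET /documents/<id>
    print("reindexing:", doc["title"])  # breaks the day "title" is renamed

topic.append({"type": "document-updated", "doc_id": "doc-1"})
for event in topic:
    on_document_updated(event)
```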

On the flip side, a well-designed data-oriented architecture goes a long way. Functional programming, service meshes, RPCs, design patterns, event loops, whatever, all have their merits, but personally I’ve seen tools like [boring old databases][7] be responsible for a lot more successfully shipped software.

--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2019/06/30/data_still_dominates.html

Author: [Simon Arneaud][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972

[1]: https://lwn.net/Articles/193245/
[2]: http://www.catb.org/~esr/writings/taoup/html/ch01s06.html
[3]: http://doc.cat-v.org/bell_labs/pikestyle
[4]: https://archive.org/stream/mythicalmanmonth00fred/mythicalmanmonth00fred_djvu.txt
[5]: https://martinfowler.com/books/dsl.html
[6]: https://theartofmachinery.com/2016/06/21/code_vs_data.html
[7]: https://theartofmachinery.com/2017/10/28/rdbs_considered_useful.html
@ -1,49 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SD-WAN Buyers Should Think Application Performance as well as Resiliency)
[#]: via: (https://www.networkworld.com/article/3406456/sd-wan-buyers-should-think-application-performance-as-well-as-resiliency.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

SD-WAN Buyers Should Think Application Performance as well as Resiliency
======

![istock][1]

In my time as an industry analyst, not since the days of WAN optimization have I seen a technology gain as much interest as [SD-WANs][2] are seeing today. Although full deployments are still limited, nearly every network manager I talk to, and many IT leaders, are interested in it. The reason is two-fold: the WAN has grown in importance for cloud-first enterprises, and it is badly in need of an overhaul. This hasn’t gone unnoticed by the vendor community, and there has been an explosion of companies bringing a broad range of SD-WAN offerings to market. The great news for buyers is that there is no shortage of choices. The bad news is that there are so many choices that making the right decision is difficult.

One area of differentiation among SD-WAN vendors is how they handle application performance. I think of the SD-WAN market as being split into two categories: basic and advanced SD-WANs. A good analogy is the virtualization market. Many vendors offer hypervisors; in fact, a number of them are free. So why do companies pay a premium for VMware? Because VMware offers many advanced features and capabilities that make its solution do more than just virtualize servers.

Similarly, basic SD-WAN solutions do a great job of lowering costs and increasing application resiliency through path-selection capabilities, but they do nothing to improve application performance. One myth that needs busting is that all SD-WANs make your applications perform better. That’s simply not true, because application availability and performance are two different things. It’s possible to have great performance and poor availability, or high availability with lackluster performance.

Consider the case where a business runs a hybrid WAN: voice and video traffic is sent over the MPLS connection, and broadband is used for other traffic. If the MPLS link becomes congested but doesn’t go down, most SD-WAN solutions will continue to send voice and video over it, which obviously degrades performance. If multiple broadband connections are used, congestion-related issues are even more likely.
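
A minimal sketch of the difference, with invented link metrics and thresholds rather than any vendor’s actual logic:

```python
# Why up/down path selection alone can hurt application performance: a basic
# SD-WAN fails over only when a link is down; a congestion-aware one also
# checks measured loss and latency before steering voice and video.
LINKS = {
    "mpls":      {"up": True, "loss_pct": 3.0, "latency_ms": 180},  # congested
    "broadband": {"up": True, "loss_pct": 0.2, "latency_ms": 35},
}

def pick_path_basic(links):
    # Sends real-time traffic over MPLS as long as the link is merely up.
    return "mpls" if links["mpls"]["up"] else "broadband"

def pick_path_congestion_aware(links, max_loss=1.0, max_latency=150):
    # Prefers MPLS, but steers away when measured quality degrades.
    mpls = links["mpls"]
    if mpls["up"] and mpls["loss_pct"] <= max_loss and mpls["latency_ms"] <= max_latency:
        return "mpls"
    return "broadband"

print(pick_path_basic(LINKS))             # mpls  (voice/video suffer)
print(pick_path_congestion_aware(LINKS))  # broadband
```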

This is an important point for IT professionals to understand. The business justification for SD-WAN was initially built around saving money, but if application performance suffers, the entire return on investment (ROI) for the project might as well be tossed out the window. For many companies, the network is the business, so a poorly performing network means equally poorly performing applications, which results in lost productivity, lower revenues and possibly brand damage from customer-experience issues.

I’ve talked to many organizations that had digital initiatives fail because the network wasn’t transformed. For example, a luxury retailer implemented a tablet program so in-store personnel could show merchandise to customers. High-end retail is almost wholly impulse purchases, so the more inventory that can be shown to a customer, the larger the resulting sales. But the WAN that was in place made the mobile application perform poorly, so the digital initiative had a negative effect: instead of driving sales, the mobile initiative was chasing customers from the store. The idea was right, but the poorly performing WAN caused the project to fail.

SD-WAN decision makers need to look for suppliers that have specific technologies integrated into their solutions that can act when congestion occurs. A great example of this is the Silver Peak [Unity EdgeConnect™][3] SD-WAN edge platform with [path conditioning][4], [traffic shaping][5] and sub-second link failover. This ensures the best possible quality for all critical applications, even when an underlying link experiences congestion or an outage, and even for [voice and video over broadband][6]. This is a foundational component of advanced SD-WAN providers: they offer the same resiliency and cost benefits as a basic SD-WAN but also ensure application performance remains high.

The SD-WAN era is here, and organizations should be aggressive with deployments, as SD-WAN will transform the WAN and make it a digital transformation enabler. Decision makers should choose their provider carefully and ensure the vendor also improves application performance. Without that, digital initiatives will likely fail and negate any ROI the company was hoping to realize.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3406456/sd-wan-buyers-should-think-application-performance-as-well-as-resiliency.html

Author: [Zeus Kerravala][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/istock-157647179-100800860-large.jpg
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[3]: https://www.silver-peak.com/products/unity-edge-connect
[4]: https://www.silver-peak.com/products/unity-edge-connect/path-conditioning
[5]: https://www.silver-peak.com/products-solutions/unity/traffic-shaping
[6]: https://www.silver-peak.com/sd-wan/voice-video-over-broadband
@ -1,92 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An eco-friendly internet of disposable things is coming)
[#]: via: (https://www.networkworld.com/article/3406462/an-eco-friendly-internet-of-disposable-things-is-coming.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

An eco-friendly internet of disposable things is coming
======

Researchers are creating a non-hazardous, bacteria-powered miniature battery that can be implanted into shipping labels and packaging to monitor temperature and track packages in real time.

![Thinkstock][1]

Get ready for a future of disposable internet of things (IoT) devices, one in which everything is connected to networks. It will be particularly useful in logistics, with sensors embedded in single-use plastics in retail packaging and in throw-away shippers’ cardboard boxes.

How will it happen? The answer: when non-hazardous, disposable bio-batteries make it possible. And that moment might be approaching. Researchers say they’re closer to commercializing a bacteria-powered miniature battery that they say will propel the internet of disposable things (IoDT).

**[ Learn more: [Download a PDF bundle of five essential articles about IoT in the enterprise][2] ]**

“The internet of disposable things is a new paradigm for the rapid evolution of wireless sensor networks,” says Seokheun Choi, an associate professor at Binghamton University, [in an article on the school’s website][3].

“Current IoDTs are mostly powered by expensive and environmentally hazardous batteries,” he says. Those costs can be significant in any kind of large-scale deployment, he says. And with exponential growth, the environmental concerns would escalate rapidly.

The miniaturized battery that Choi’s team has come up with is uniquely charged through power created by bacteria. It contains no metals or acids, and it’s designed specifically to provide energy to the sensors and radios in single-use IoT devices. Those could be the kinds of sensors ideal for supply-chain logistics, where the container is ultimately going to end up in a landfill and would otherwise create a hazard.

Another use case is real-time analysis of packaged food, with sensors monitoring temperature and location, preventing spoilage and providing safer food handling. For example, a farm product could be tracked for on-time delivery and have its temperature measured, all from within the packaging, as it moves from packaging facility to consumer. In the event of a food-borne illness outbreak, say, one could quickly find out where the product originated, which apparently is hard to do now.
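
As a purely hypothetical illustration (the field names and threshold below are invented, not from the researchers’ design), a reading from such a label might be processed like this:

```python
# A hypothetical reading from a battery-impregnated shipping label.
from dataclasses import dataclass

@dataclass
class LabelReading:
    package_id: str
    origin: str          # packaging facility, for outbreak trace-back
    temperature_c: float
    timestamp: str

SAFE_MAX_C = 4.0  # assumed cold-chain threshold, for illustration only

def spoilage_alert(reading: LabelReading) -> bool:
    # Flag the package if the cold chain was broken in transit.
    return reading.temperature_c > SAFE_MAX_C

r = LabelReading("pkg-001", "farm-17-packhouse", 6.2, "2019-07-01T14:03:00Z")
print(spoilage_alert(r))  # True -> pull the package, trace back to farm-17
```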

Other use cases could be battery-impregnated shipping labels that send real-time data to the internet. Importantly, in both use cases, the packaging can be discarded without added environmental concerns.

### How the bacteria-powered batteries work

A slow release of nutrients provides the energy to the bacteria-powered batteries, which the researchers say can last up to eight days. “Slow and continuous reactions” convert the microbial nutrients into “long standing power,” they say in [their paper’s abstract][4].

“Our biobattery is low-cost, disposable, and environmentally-friendly,” Choi says.

Origami, the Japanese art of paper folding, was an inspiration for a similar microbial-based battery project the group wrote about last year. That one was liquid-based and not as long-lasting: a bacteria-containing liquid was absorbed along the porous creases in folded paper, creating a paper-delivered power source, perhaps to be used in a shipping label.

“Low-cost microbial fuel cells (MFCs) can be done efficiently by using a paper substrate and origami techniques,” [the group wrote then][5].

Scientists also envisage electronics that are currently printed on circuit boards (PCBs), which can be toxic on disposal, eventually being printed entirely on eco-friendly paper. Product cycles, such as those found in mobile devices and likely in future IoT devices, are continually getting tighter, so PCBs are increasingly being disposed of. Solutions are needed, the experts say.

The argument here is to put the battery in the paper, too. And while you’re at it, let the biodegradation of the used-up biobattery help break down the organic-matter paper.

Ultimately, Choi believes the power-creating bacteria could even be introduced naturally from the environment; right now it’s added by the scientists.

**More on IoT:**

* [What is the IoT? How the internet of things works][6]
* [What is edge computing and how it’s changing the network][7]
* [Most powerful Internet of Things companies][8]
* [10 Hot IoT startups to watch][9]
* [The 6 ways to make money in IoT][10]
* [What is digital twin technology? [and why it matters]][11]
* [Blockchain, service-centric networking key to IoT success][12]
* [Getting grounded in IoT networking and security][2]
* [Building IoT-ready networks must become a priority][13]
* [What is the Industrial IoT? [And why the stakes are so high]][14]

Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3406462/an-eco-friendly-internet-of-disposable-things-is-coming.html

Author: [Patrick Nelson][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/04/green-data-center-intro-100719502-large.jpg
[2]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[3]: https://www.binghamton.edu/news/story/1867/everything-will-connect-to-the-internet-someday-and-this-biobattery-could-h
[4]: https://www.sciencedirect.com/science/article/abs/pii/S0378775319305580
[5]: https://www.sciencedirect.com/science/article/pii/S0960148117311606
[6]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[7]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[8]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[9]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[10]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[11]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[12]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[13]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[14]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[15]: https://www.facebook.com/NetworkWorld/
[16]: https://www.linkedin.com/company/network-world
@ -1,58 +0,0 @@
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Lessons in Vendor Lock-in: Google and Huawei"
[#]: via: "https://www.linuxjournal.com/content/lessons-vendor-lock-google-and-huawei"
[#]: author: "Kyle Rankin https://www.linuxjournal.com/users/kyle-rankin"

Lessons in Vendor Lock-in: Google and Huawei
======

![](https://www.linuxjournal.com/sites/default/files/styles/850x500/public/nodeimage/story/bigstock-Us--China-Trade-War-Boxing-F-252887971_1.jpg?itok=oZBwXDrP)

What happens when you're locked in to a vendor that's too big to fail, but that is on the opposite end of a trade war?

The story of Google no longer giving Huawei access to Android updates is still developing, so by the time you read this, the situation may have changed. At the moment, Google has granted Huawei a 90-day window during which it will have access to Android OS updates, the Google Play store and other Google-owned Android assets. After that point, due to trade negotiations between the US and China, Huawei no longer will have that access.

Whether or not this new policy between Google and Huawei is still in place when this article is published, this article isn't about trade policy or politics. Instead, I'm going to examine this as a new lesson in vendor lock-in that I don't think many have considered before: what happens when the vendor you rely on is forced by its government to stop you from being a customer?

### Too Big to Fail

Vendor lock-in isn't new, but until the last decade or so, it generally was thought of by engineers as a bad thing. Companies would take advantage of the fact that you used one of their legitimately good products to push the rest of their products, which may or may not have been as good as those from their competitors. People felt the pain of being stuck with inferior products and rebelled.

These days, a lot of engineers have entered the industry in a world where the new giants of lock-in are still growing and have only flexed their lock-in powers a bit. Many engineers shrug off worries about choosing a solution that requires you to use only products from one vendor, in particular if that vendor is a large enough company. There is an assumption that those companies are too big ever to fail, so why would it matter that you rely on them (as many companies in the cloud do) for every aspect of your technology stack?

Many people who justify lock-in with companies that are too big to fail point to all the even more important companies using that vendor, who would have even bigger problems should the vendor have a major bug or outage, or go out of business. It would take so much effort to use cross-platform technologies, the thinking goes, when the risk of going all-in with a single vendor seems so small.

Huawei also probably figured (rightly) that Google and Android were too big to fail. Why worry about the risks of being beholden to a single vendor for your OS when that vendor is used by other large companies and would have even bigger problems if it went away?

### The Power of Updates

Google held a particularly interesting and subtle bit of lock-in power over Huawei (and any phone manufacturer that uses Android): the power of software updates. This form of lock-in isn't new. Microsoft famously used the fact that software updates in Microsoft Office cost money (naturally, as it was selling that software), along with the fact that new versions of Office had a tendency to break backward compatibility with older document formats, to encourage everyone to upgrade. The common scenario was that the upper-level folks in the office would get brand-new, cutting-edge computers with the latest version of Office on them. They would start saving and sharing new documents, and everyone else wouldn't be able to open them. It ended up being easier to upgrade everyone's version of Office than to have the bosses remember to save new documents in old formats every time.

The main difference with Android is that updates are critical not for compatibility, but for security. Without OS updates, your phone ultimately will become vulnerable to the exploits that attackers continue to find in your software. The Android OS that ships on phones is proprietary and therefore requires permission from Google to get those updates.

Many people still don't think of the Android OS as proprietary software. Although people talk about the FOSS underpinnings of Android, only those who go to the extra effort of putting a pure-FOSS version of Android, like LineageOS, on their phones actually experience them. The version of Android most people use has a bit of FOSS at the center, surrounded by proprietary Google Apps code.

It's this Google Apps code that gives Google such powerful leverage over a company like Huawei. With traditional Android releases, Google controls access to OS updates, including security updates. All of this software is signed with Google's signing keys. The system is built with security in mind (attackers can't easily build their own OS update to install on your phone), but it also has the convenient side effect of giving Google control over the updates.

What's more, the Google Apps suite isn't just a convenient way to load Gmail or Google Docs; it also includes the tight integration with your Google account and the Google Play store. Without those hooks, you don't have access to the giant library of applications that everyone expects to use on their phones. As anyone with a LineageOS phone that uses F-Droid can attest, while a large number of applications are available in the F-Droid market, you can't expect to see the same apps as on Google Play. Although you can side-load some Google Play apps, many applications, such as Google Maps, behave differently without a Google account. Note that this control isn't unique to Google: Apple uses similar code-signing features, with similar restrictions, on its own phones and app updates.

### Conclusion

Without access to these OS updates, Huawei now will have to decide whether to create its own LineageOS-style Android fork or a whole new phone OS of its own. In either case, it will have to abandon the Google Play Store ecosystem and use F-Droid-style app repositories, or, if it goes 100% alone, it will need to create a completely new app ecosystem. If its engineers planned for this situation, they likely are working on that plan right now; otherwise, they are all presumably scrambling to address an event that "should never happen". Here's hoping that if you ever find yourself in a similar case of vendor lock-in with an overseas company that's too big to fail, you never get caught in the middle of a trade war.

--------------------------------------------------------------------------------

via: https://www.linuxjournal.com/content/lessons-vendor-lock-google-and-huawei

Author: [Kyle Rankin][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.linuxjournal.com/users/kyle-rankin
[b]: https://github.com/lujun9972
@ -1,96 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Colocation facilities buck the cloud-data-center trend)
[#]: via: (https://www.networkworld.com/article/3407756/colocation-facilities-buck-the-cloud-data-center-trend.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Colocation facilities buck the cloud-data-center trend
======

Lower prices and latency plus easy access to multiple cloud providers make colocation facilities an attractive option compared to building on-site data centers.

![gorodenkoff / Getty Images][1]

[Data center][2] workloads are moving, but not only to the cloud. Increasingly, they are shifting to colocation facilities as an alternative to privately owned data centers.

### What is colocation?

A colocation facility, or colo, is a data center in which a business can rent space for the servers and other computing hardware it purchases but that the colo provider manages.

[Read about IPv6 and cloud-access security brokers][3]

The colo company provides the building, cooling, power, bandwidth and physical security. Space is leased by the rack, cabinet, cage or room. Many colos started out as managed-services providers and continue to offer those specialized services.

Some prominent providers include Equinix, Digital Realty Trust, CenturyLink, and NTT Communications, and there are several Chinese providers that serve only the China market. Unlike the data centers of cloud vendors like Amazon and Microsoft, these colo facilities are generally in large metropolitan areas.

“Colos have been around a long time, but their initial use case was Web servers,” said Rick Villars, vice president of data centers and cloud research at IDC. “What’s changed now is the ratio of what’s customer-facing is much greater than in 2000, [with the] expansion of companies needing to have more assets that are network-facing.”

### Advantages of colos: Cost, cloud interconnect

Homegrown data centers are often sized incorrectly, with either too much capacity or too little, said Jim Poole, vice president of business development at Equinix. “Customers come to us all the time and say, ‘Would you buy my data center? Because I only use 25 percent of it,’” he said.

Poole said the average capital expenditure for a stand-alone enterprise data center that is not part of the corporate campus is $9 million. Companies are increasingly realizing that it makes sense to buy the racks of hardware but place them in someone else’s secure facility that handles the power and cooling. “It’s the same argument for doing cloud computing, but at the physical-infrastructure level,” he said.

Mike Satter, vice president for OceanTech, a data-center-decommissioning service provider, says enterprises should absolutely outsource data-center construction or go the colo route. Just as there are contractors who specialize in building houses, there are experts who specialize in data-center design, he said.

He added that many data-center closures are followed by consolidation. “For every decommissioning we do, that same company is adding to another environment somewhere else. With the new hardware out there now, the servers can do the same work in 20 racks as they did in 80 racks five years ago. That means a reduced footprint and energy cost,” he said.

Often these closures mean moving to a colo. OceanTech recently decommissioned a private data center for a major media outlet Satter declined to identify, shutting down a New Jersey facility that held 70 racks of gear. The firm had planned to move its apps to the cloud but ended up expanding into a colo facility in New York City.

### Cloud isn't cheaper than private data centers

Satter said he’s had conversations with companies that planned to go to the cloud but changed their minds when they saw what it would cost if they later decided to move workloads out. Cloud providers can “kill you with guidelines and costs” because your data is in their infrastructure, and they can set fees that make it expensive to move it to another provider, he said. “The cloud is not a money saver.”

That can drive companies to keep data in-house or in a colo in order to retain tighter possession of it. “Early on, when people weren’t hip to the game of how much it cost to move to the cloud, you had decision makers with influence say the cloud sounded good. Now they are realizing it costs a lot more dollars to do that vs. doing something on-prem, on your own,” said Satter.

Guy Churchward, CEO of Datera, a developer of software-defined storage platforms for enterprises, has noticed a new trend among CIOs: making the cloud-vs.-private decision for apps based on the lifespan of the app.

“Organizations don’t know how much resource they need to throw at a task. The cloud makes more sense for [short-term apps],” he said. For applications that will be used for five years or more, it makes more sense to place them in company-controlled facilities, he said. That's because with three-to-five-year hardware-refresh cycles, the hardware lasts the entire lifespan of the app, and the hardware and app can be retired at the same time.

Another force driving the private-data-center-vs.-cloud decision is machine learning. Churchward said that’s because machine learning is often done using large amounts of highly sensitive data, so customers want that data kept securely in house. They also want a low-latency loop between their ML apps and the data lake they draw from.

### Colos connect to multiple cloud providers

Another allure of colocation providers is that they can act as a pipeline between enterprises and multiple cloud providers. Rather than directly connecting to AWS, Azure and the rest, a business can connect to a colo, and that colo acts like a giant switch, connecting it to cloud providers through dedicated, high-speed networks.
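
A toy model of that idea (names and port labels invented): one physical cross-connect into the colo fabric reaches every cloud on-ramp, instead of one dedicated circuit per provider.

```python
# Hypothetical colo fabric: one link in, many cloud on-ramps out.
COLO_FABRIC = {"aws": "port 7", "azure": "port 9", "google": "port 12"}

def clouds_reachable(physical_links_to_colo):
    # With at least one link into the fabric, every on-ramp is reachable.
    return sorted(COLO_FABRIC) if physical_links_to_colo >= 1 else []

print(clouds_reachable(0))  # [] -- no colo link; buy one circuit per cloud
print(clouds_reachable(1))  # ['aws', 'azure', 'google'] -- one link, many clouds
```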

Villars notes the typical corporate data center is either inside corporate HQ or someplace remote, like South Dakota, where land is cheap. The trade-off is that network connectivity to remote locations is often slower and more expensive.

That’s where data-center colo providers with large footprints come in, since they have points of presence in major cities. No one would fault a New York City-based firm for putting its data center in upstate New York or even farther away. But when Equinix, DRT, and others all have data centers right in New York City, customers can get faster and sometimes cheaper connections, plus lower latency.

Steve Cretney, vice president and CIO for food distributor Colony Brands, is in the midst of migrating the company to the cloud and moving everything he can from his data center to AWS. Rather than connect directly to AWS, Colony’s Wisconsin headquarters is connected to an Equinix data center in Chicago.

Going with Equinix provides more and cheaper bandwidth to the cloud than buying direct connectivity on his own. “I effectively moved my data center into Chicago. Now I can compete with a better price on data communication and networks,” he said.

Cretney estimates that by moving Colony’s networking from a smaller, local provider to Chicago, the company is seeing an annual cost savings of 50 percent for network connectivity, including telecommunications.

Colony also wants to adopt a multi-cloud-provider strategy to avoid vendor lock-in, and it gets that by using Equinix as its network connection. As the company eventually takes on Microsoft Azure, Google Cloud and other providers, Equinix can provide flexible and economical interconnections, Cretney said.

### Colos reduce the need for enterprise data-center real estate

In 2014, 80 percent of data centers were owned by enterprises, while colos and the early cloud accounted for 20 percent, said Villars. Today that’s a 50-50 split, and by 2022-2023, IDC projects service providers will own 70 percent of the large-data-center space.

For the past five years, the amount of new data-center construction by enterprises has been falling steadily at 5 to 10 percent per year, said Villars. “They are not building new ones because they are coming to the realization that being an expert at data-center construction is not something a company has.”

Enterprises across many sectors are looking at their data-center environments and leveraging things like virtual machines and SSDs, compressing the size of their data centers and getting more work done within smaller physical footprints. “So at some point they ask if they are spending appropriately for this space. That’s when they look at colo,” said Villars.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3407756/colocation-facilities-buck-the-cloud-data-center-trend.html

Author: [Andy Patrizio][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/cso_cloud_computing_backups_it_engineer_data_center_server_racks_connections_by_gorodenkoff_gettyimages-943065400_3x2_2400x1600-100796535-large.jpg
[2]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[3]: https://www.networkworld.com/article/3391380/does-your-cloud-access-security-broker-support-ipv6-it-should.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -1,79 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Improving IT Operations – Key to Business Success in Digital Transformation)
[#]: via: (https://www.networkworld.com/article/3407698/improving-it-operations-key-to-business-success-in-digital-transformation.html)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)

Improving IT Operations – Key to Business Success in Digital Transformation
======

![Artem Peretiatko][1]

Forty-seven percent of CEOs say they are being “challenged” by their board of directors to show progress in shifting toward a digital business model, according to the [Gartner 2018 CIO][2] Agenda Industry Insights Report. By improving IT operations, organizations can progress, and even accelerate, their digital transformation initiatives efficiently and successfully. The biggest barrier to success is that IT currently spends around 78 percent of its budget and 80 percent of its time just maintaining IT operations, leaving little time or resources for innovation, according to ZK Research[*][3].

### Do you cut the operations budget or invest more in transforming operations?

The Cisco IT Operations Readiness Index 2018 predicted a dramatic change in IT operations as CIOs embrace analytics and automation. The study reported that 88 percent of respondents identify investing in IT operations as key to driving preemptive practices and enhancing customer experience.

### What does this have to do with the wide area network?

According to the IT Operations Readiness Index, 73 percent of respondents will collect WAN operational or performance data, and 70 percent will analyze WAN data and leverage the results to further automate network operations. Today, however, security is the most data-driven function compared with other IT infrastructure functions (i.e. IoT, IP telephony, network infrastructure, data center infrastructure, WAN connectivity, etc.). The big questions, sketched in toy form after this list, are:

* How do you collect operations data, and what data should you collect?
* How do you analyze it?
* How do you then automate IT operations based on the results?
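
As an entirely hypothetical toy sketch of that collect-analyze-automate loop (no real WAN APIs are involved):

```python
# Pretend per-minute WAN telemetry for one link (invented values).
samples = [
    {"link": "branch-42", "loss_pct": 0.1},
    {"link": "branch-42", "loss_pct": 2.8},
    {"link": "branch-42", "loss_pct": 3.5},
]

def analyze(samples, threshold=1.0):
    # Trivial "analytics": flag links whose average loss exceeds a threshold.
    avg = sum(s["loss_pct"] for s in samples) / len(samples)
    return avg > threshold

def remediate(link):
    # Stand-in for an automated action (reroute, open a ticket, etc.).
    print(f"steering traffic away from {link}")

if analyze(samples):
    remediate("branch-42")
```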

This is by no means a simple task. IT departments use a combination of data collected internally and by outside vendors to aggregate the information used to transform operations and make better business decisions.

In a recent [survey][4] by Frost & Sullivan, 94 percent of respondents indicated they will deploy a software-defined wide area network ([SD-WAN][5]) in the next 24 months. SD-WAN addresses the gap that router-centric WAN architectures were not designed to fill. A business-driven SD-WAN, designed from the ground up to support a cloud-first business model, provides significantly more visibility into network and application performance, helping enterprises realize the transformational promise of a digital business model. In fact, Gartner indicates that 90 percent of WAN edge decisions will be based on SD-WAN by 2023.

### How an SD-WAN can improve IT operations, leading to successful digital transformation

Not all SD-WAN solutions are created alike. One of the key capabilities that organizations need to consider and evaluate is complete observability across the network and applications through a single pane of glass. Without visibility, IT risks running inefficient operations that will stifle digital transformation initiatives. This real-time visibility must provide:

* Operational metrics enabling IT and CIOs to shift from a reactive toward a predictive practice
* A centralized dashboard that allows IT to monitor, in real time, all aspects of network operations – a dashboard with flexible knobs to adjust and collect metrics from all WAN edge appliances to accelerate problem resolution

The Silver Peak Unity [EdgeConnect™][6] SD-WAN edge platform provides granular visibility into network and application performance. The EdgeConnect platform ensures the highest quality of experience for both end users and IT. End users enjoy always-consistent, always-available application performance, including the highest quality of voice and video, even over broadband. Utilizing the [Unity Orchestrator™][7] comprehensive management dashboard, as shown below, IT gains complete observability into the performance attributes of the network and applications in real time. Customizable widgets provide a wealth of operational data, including a health heatmap for every SD-WAN appliance deployed, flow counts, active tunnels, logical topologies, top talkers, alarms, bandwidth consumed by each application and location, application latency and jitter, and much more. Furthermore, the platform maintains a week’s worth of data with context, allowing IT to play back and see what transpired at a specific time and location, analogous to a DVR.

By providing complete observability of the entire WAN, IT spends less time troubleshooting network and application bottlenecks and fielding support and help-desk calls day and night, and more time focused on strategic business initiatives.

![][8]

The solution brief “[Simplify SD-WAN Operations with Greater Visibility][9]” provides additional detail on the capabilities of the business-driven EdgeConnect SD-WAN edge platform that enable businesses to accelerate their shift toward a digital business model.

![][10]

* ZK Research quote from [Cisco IT Operations Readiness Index 2018][11]

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3407698/improving-it-operations-key-to-business-success-in-digital-transformation.html

Author: [Rami Rammaha][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/istock-1096811078_1200x800-100801264-large.jpg
[2]: https://www.gartner.com/smarterwithgartner/is-digital-a-priority-for-your-industry/
[3]: https://blog.silver-peak.com/improving-it-operations-key-to-business-success-in-digital-transformation#footnote
[4]: https://www.silver-peak.com/sd-wan-edge-survey
[5]: https://www.silver-peak.com/sd-wan
[6]: https://www.silver-peak.com/products/unity-edge-connect
[7]: https://www.silver-peak.com/products/unity-orchestrator
[8]: https://images.idgesg.net/images/article/2019/07/silver-peak-unity-edgeconnect-sdwan-100801265-large.jpg
[9]: https://www.silver-peak.com/resource-center/simplify-sd-wan-operations-greater-visibility
[10]: https://images.idgesg.net/images/article/2019/07/simplify-sd-wan-operations-with-greater-visibility-100801266-large.jpg
[11]: https://s3-us-west-1.amazonaws.com/connectedfutures-prod/wp-content/uploads/2018/11/CF_transforming_IT_operations_report_3-2.pdf
@ -1,112 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux a key player in the edge computing revolution)
[#]: via: (https://www.networkworld.com/article/3407702/linux-a-key-player-in-the-edge-computing-revolution.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Linux a key player in the edge computing revolution
======

Edge computing is augmenting the role that Linux plays in our day-to-day lives. A conversation with Jaromir Coufal from Red Hat helps to define what the edge has become.

![Dominic Smith \(CC BY 2.0\)][1]

In the past few years, [edge computing][2] has been revolutionizing how some very familiar services are provided to individuals like you and me, as well as how services are managed within major industries. Try to get your arms around what edge computing is today, and you might just discover that your arms aren’t nearly as long or as flexible as you’d imagined. And Linux is playing a major role in this ever-expanding edge.

One reason edge computing defies easy definition is that it takes many different forms. As Jaromir Coufal, principal product manager at Red Hat, recently pointed out to me, there is no single edge. Instead, there are lots of edges – depending on what compute features are needed. He suggests that we think of the edge as a continuum of capabilities, with the problem being solved determining where along that continuum any particular edge solution will sit.

**[ Also read: [What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]**

Some forms of edge computing include consumer electronics that are used and installed in millions of homes; others help tens of thousands of small businesses operate their facilities; and still others tie large companies to their remote sites. Key to this elusive definition is the idea that edge computing always involves distributing the workload in such a way that the bulk of the computing work is done remotely from the central core of the business and close to the business problem being addressed.

Done properly, edge computing can provide services that are both faster and more reliable. Applications running on the edge can be more resilient and run considerably faster because their required data resources are local. In addition, data can be processed or analyzed locally, often requiring only periodic transfer of results to central sites.

While physical security might be lower at the edge, edge devices often implement security features that allow them to detect 1) tampering with the device, 2) malicious software, and 3) a physical breach, and then to wipe data.

### Benefits of edge computing

Some of the benefits of edge computing include:

* A quick response to intrusion detection, including the ability for a remote device to detach itself or self-destruct
* The ability to instantly stop communication when needed
* Constrained functionality and fewer generic entry points
* Rugged, reliable resistance to problems
* Making the overall computing system harder to attack because computing is distributed
* Less data-in-transit exposure

Some examples of edge computing devices include those that provide:

* Video surveillance – watching for activity, reporting only when it’s seen
* Control of autonomous vehicles
* Production monitoring and control

### Edge computing success story: Chick-fil-A

One impressive example of highly successful edge computing caught me by surprise. It turns out Chick-fil-A uses edge computing devices to help manage its food-preparation services. At Chick-fil-A, edge devices:

1. Analyze a fryer’s cleaning and cooking
2. Aggregate data as a failsafe in case internet connectivity is lost (a pattern sketched below)
3. Help with decision-making about cooking – how much and how long to cook
4. Enhance business operations
5. Help automate the complex food-cooking and holding decisions so that even newbies get things right
6. Function even when the connection with the central site is down
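
Items 2 and 6 describe a classic edge pattern: buffer locally, keep working offline, and sync when the link returns. A hypothetical sketch (not Chick-fil-A’s actual code):

```python
import collections

buffer = collections.deque(maxlen=10_000)  # bounded local store at the edge
central_link_up = False

def send_to_central(reading):
    print("uploading", reading)

def flush():
    while buffer:
        send_to_central(buffer.popleft())

def record(reading):
    buffer.append(reading)  # always succeeds, even fully offline
    if central_link_up:
        flush()

record({"fryer": 3, "oil_temp_c": 171})  # buffered; link is down
central_link_up = True
record({"fryer": 3, "oil_temp_c": 168})  # triggers a flush of both readings
```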

As Coufal pointed out, Chick-fil-A runs [Kubernetes][5] at the edge in every one of its restaurants. The key motivators are low latency, scale of operations, and business continuity. And it seems to be working extremely well.

[Chick-fil-A’s hypothesis][6] captures it all: By making smarter kitchen equipment, we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business.

### Are you edge-ready?

There’s no quick answer as to whether your organization is “edge ready.” Many factors determine what kinds of services can be deployed on the edge and whether or when those services need to communicate with more central devices. Some of these include:

* Whether your workload can be functionally distributed
* Whether it’s OK for devices to have infrequent contact with central services
* Whether devices can work properly when cut off from their connection back to central services
* Whether the devices can be secured (e.g., trusted not to provide an entry point)

Implementing an edge computing network will likely take a long time from initial planning to implementation. Still, this kind of technology is taking hold and offers some strong advantages. As Coufal noted, edge computing concepts and technologies were first introduced 15 or more years ago, but renewed interest has come about thanks to tech advances that enable new uses requiring this technology.

**More about edge computing:**

* [How edge networking and IoT will reshape data centers][4]
* [Edge computing best practices][7]
* [How edge computing can help secure the IoT][8]

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3407702/linux-a-key-player-in-the-edge-computing-revolution.html

Author: [Sandra Henry-Stocker][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/telecom-100801330-large.jpg
[2]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.infoworld.com/article/3268073/what-is-kubernetes-container-orchestration-explained.html
[6]: https://medium.com/@cfatechblog/edge-computing-at-chick-fil-a-7d67242675e2
[7]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[8]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -1,79 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Smarter IoT concepts reveal creaking networks)
[#]: via: (https://www.networkworld.com/article/3407852/smarter-iot-concepts-reveal-creaking-networks.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Smarter IoT concepts reveal creaking networks
======

Today’s networks don’t meet the needs of emergent internet of things systems. IoT systems need their own modern infrastructure, researchers at the University of Magdeburg say.

![Thinkstock][1]

The internet of things (IoT) needs its own infrastructure ecosystem, one that doesn't use external clouds at all, researchers at the University of Magdeburg say.

The computer scientists recently obtained funding from the German government to study how to build a future generation of revolutionary, emergent IoT systems. They say such networks must be fault tolerant, secure, and able to traverse disparate protocols, which today's networks are not.

**[ Read also: [What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3] ]**

The researchers say a smarter, unique, and organic infrastructure needs to be developed for the IoT, and that simply adapting the IoT to traditional networks won't work. Services must self-organize and function autonomously, they say, and people must accept the fact that we are using the internet in ways never originally intended.

"The internet, as we know it, is based on network architectures of the 70s and 80s, when it was designed for completely different applications," the researchers say in their [media release][4]. The internet has centralized security, which causes choke points, and an inherent lack of dynamic controls, which translates to inflexibility in access rights. All of this makes it difficult to adapt the IoT to it.

Device, data, and process management must be integrated into IoT systems, says the group behind the project, called [DoRIoT][5] (Dynamische Laufzeitumgebung für Organisch (dis-)Aggregierende IoT-Prozesse, or Dynamic Runtime Environment for Organic dis-Aggregating IoT Processes).

"In order to close this gap, concepts [will be] developed in the project that transparently realize the access to the data," says Professor Sebastian Zug of the University of Freiberg, a partner in DoRIoT. "For the application, it should make no difference whether the specific information requirement is answered by a server or an IoT node."

### Extreme edge computing

In other words, servers and nodes, conceptually, should merge. One could argue this is a form of extreme [edge computing][6]. Edge computing takes processing and data storage out of traditional, centralized data-center environments and places them close to where the resources are required, which reduces latency, among other advantages.

DoRIoT may take edge computing a step further. Detecting failures ahead of time and migrating devices seamlessly are goals, too; services can't be allowed to fail just because a new kind of device is introduced.

"The systems [will] benefit from each other, for example, they can share computing power, data and so on," says Mesut Güneş of Magdeburg's [Faculty of Computer Science Institute for Intelligent Cooperating Systems][7].

"The result is an enormous data pool," the researchers explain, "which, in turn, makes it possible to make much more precise statements, for example when predicting climate models, observing traffic flows, or managing large factories in Industry 4.0."

[Industry 4.0][8] refers to smart factories whose connected machines autonomously self-manage their own supply chain, production output, and logistics without human intervention.

Managing risks better than the current internet does is one of DoRIoT's goals. The idea is to "guarantee full sovereignty over proprietary data." To get there, though, one has to eliminate dependency on the cloud and on access to data via third parties, they say.

"This allows companies to be independent of the server infrastructures of external service providers such as Google, Microsoft or Amazon, which are subject to constant changes and may not even be accessible," they say.

**More about edge networking:**

* [How edge networking and IoT will reshape data centers][3]
* [Edge computing best practices][9]
* [How edge computing can help secure the IoT][10]

Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3407852/smarter-iot-concepts-reveal-creaking-networks.html

Author: [Patrick Nelson][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/industry_4-0_industrial_iot_internet_of_things_network_thinkstock_613880008-100749946-large.jpg
[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: http://www.ovgu.de/en/University/In+Profile/Key+Profile+Areas/Research/Secure+data+protection+in+the+new+internet+of+things.html
[5]: http://www.doriot.net/
[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[7]: http://iks.cs.ovgu.de/iks/en/ICS.html
[8]: https://www.networkworld.com/article/3199671/what-is-industry-4-0.html
[9]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[10]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world
@ -1,73 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Server hardware makers shift production out of China)
[#]: via: (https://www.networkworld.com/article/3409784/server-hardware-makers-shift-production-out-of-china.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Server hardware makers shift production out of China
======
Tariffs on Chinese products and unstable U.S./China relations cause server makers to speed up their move out of China.
![Etereuti \(CC0\)][1]

The supply chain of vendors that build servers and network communication devices is accelerating its shift of production out of China to Taiwan and North America, along with other nations not subject to the trade war between the U.S. and China.

Last May, the Trump Administration levied tariffs on a number of imported Chinese goods, computer components among them. The tariffs ranged from 10% to 25%. Consumers were hit hardest, since they are more price sensitive than IT buyers. PC World said the [average laptop price could rise by $120][2] from the tariffs alone.

But because the tariffs are based on the value of the product, server hardware prices could climb far more steeply, since servers cost much more than PCs.

**[ Read also: [HPE’s CEO lays out his technology vision][3] ]**

### Companies that are moving production out of China

The Taiwanese tech publication DigiTimes reported (article now locked behind a paywall) that Mitac Computing Technology, a server ODM, reactivated an old production line at Hsinchu Science Park (HSP) in Taiwan at the end of 2018 and restarted another for its motherboard SMT process in March 2019. The company plans to establish one more SMT production line before the end of 2019.

It went on to say Mitac plans to produce all of its high-end U.S.-bound servers in Taiwan and is looking to move 30% of its overall server production lines back to Taiwan in the next three years.

Wiwynn, a cloud computing server subsidiary of Wistron, is primarily assembling its U.S.-bound servers in Mexico and has also recently established a production site in southern Taiwan at clients' requests.

Taiwan-based server chassis and assembly player AIC recently expanded the number of its factories in Taiwan to four and has been aggressively forming partnerships to expand its capacity. Many Taiwan-based component suppliers are also expanding their capacity in Taiwan.

**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]**

Several ODMs, such as Inventec, Wiwynn, Wistron, and Foxconn, have plants in Mexico, while Quanta Computer has production lines in the U.S. Wiwynn also plans to open manufacturing facilities in the eastern U.S.

“This is not something that just happened overnight, it’s a process that started a few years ago. The tariffs just accelerated the desire of ODMs to do it,” said Ashish Nadkarni, group vice president for infrastructure systems, platforms and technologies at IDC. “Since [President] Trump has come into office there has been saber rattling about China and a trade war. There has also been a focus on margins.”

He added that component makers are definitely moving out of China to other parts of Asia, such as Korea, the Philippines, and Vietnam.

### HPE, Dell and Lenovo should remain unaffected

The big three branded server makers are all largely immunized against the tariffs. HP Enterprise, Dell, and Lenovo all have U.S.-based assembly operations, and their contract manufacturers are in Taiwan, said Nadkarni. So their costs should remain unaffected by tariffs.

The tariffs are not hurting sales so much as squeezing revenue for hyperscale whitebox vendors. Hyperscale companies such as Amazon Web Services (AWS), Microsoft, and Google have contracts with vendors such as Inspur and Super Micro, and if prices fluctuate, that’s not their problem. The hardware vendor is expected to deliver at the agreed cost.

So the added costs come out of margins that are already paper thin and can’t be passed on to the customer, unlike in the aforementioned laptop example.

“It’s not the end customers who are affected by it, it’s the vendors who are affected by it. Certain things they can pass on, like component prices. But if the build value goes up, that’s not the customer's problem, that’s the vendor’s problem,” said Nadkarni.

So while it may cost you more to buy a laptop as this trade fracas goes on, it shouldn’t cost more to buy a server.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3409784/server-hardware-makers-shift-production-out-of-china.html

Author: [Andy Patrizio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/asia_china_flag_grunge-stars_pixabay_etereuti-100763424-large.jpg
[2]: https://www.pcworld.com/article/3403405/trump-tariffs-on-chinese-goods-could-cost-you-120-more-for-notebook-pcs-say-dell-hp-and-cta.html
[3]: https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,107 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How edge computing is driving a new era of CDN)
[#]: via: (https://www.networkworld.com/article/3409027/how-edge-computing-is-driving-a-new-era-of-cdn.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)

How edge computing is driving a new era of CDN
======
A CDN is an edge application and an edge application is a superset of what your CDN is doing.
![geralt \(CC0\)][1]

We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from a management perspective, is now redundant. Today, users and data are omnipresent.

Customer expectations have surged because of this evolution. There is now an increased expectation of high-quality service and a decline in customers’ patience. In the past, one could patiently wait 10 hours to download content; that is certainly not the scenario today. Nowadays we have high expectations and high-performance requirements, but there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, bufferbloat and a list of other [performance-related problems][2] that I wrote about on Network Insight. _[Disclaimer: the author is employed by Network Insight.]_

Also, the internet is growing at an accelerated rate. By the year 2020, internet traffic is expected to reach 1.5 GB per person per day. In the coming times, the object-driven world of the Internet of Things (IoT) will far surpass these figures. For example, a connected airplane will generate around 5 TB of data per day. This spiraling volume requires a new approach to data management and forces us to rethink how we deliver applications.

[RELATED: How Notre Dame is going all in with Amazon’s cloud][3]

Why? Because all this information cannot be processed by a single cloud or an on-premise location. Latency will always be a problem. For example, in virtual reality (VR) anything over 7 milliseconds will cause motion sickness. When decisions must be made in real time, you cannot send data to the cloud. You can, however, make use of edge computing and a multi-CDN design.

### Introducing edge computing and multi-CDN

The rate of cloud adoption, all-things-video, IoT and edge computing are bringing life back to CDNs and multi-CDN designs. Typically, a multi-CDN is an implementation pattern that includes more than one CDN vendor. Traffic direction is performed using various metrics, whereby traffic can be load balanced or failed over across the different vendors.
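
To make the failover half of that pattern concrete, here is a minimal Python sketch that probes each vendor's health endpoint and prefers the fastest healthy one. The endpoint URLs, the probe routine and the latency-only scoring are illustrative assumptions, not any particular vendor's API; real multi-CDN steering usually happens in DNS or in a dedicated traffic-management layer.

```
import random
import time
import urllib.request

# Hypothetical CDN health endpoints; the hostnames are placeholders.
CDN_ENDPOINTS = {
    "cdn-a": "https://a.example-cdn.com/health",
    "cdn-b": "https://b.example-cdn.com/health",
}

def probe(url, timeout=2.0):
    """Return the health-check response time in seconds, or None on failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_cdn():
    """Prefer the fastest healthy CDN; fall back to a random one if all fail."""
    timings = {name: probe(url) for name, url in CDN_ENDPOINTS.items()}
    healthy = {name: t for name, t in timings.items() if t is not None}
    if not healthy:
        return random.choice(list(CDN_ENDPOINTS))  # last resort
    return min(healthy, key=healthy.get)

print(pick_cdn())
```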

Edge computing moves actions as close as possible to the source. It is the point where the physical world interacts with the digital world. Logically, the decentralized approach of edge computing will not take over from the centralized approach. The two will complement each other, so that an application can run at its peak level, depending on its position in the network.

For example, in IoT, saving battery life is crucial. Suppose an IoT device can complete a transaction in a 10-ms round trip time (RTT) instead of a 100-ms RTT. As a result, it can use roughly a tenth of the battery power.

### The internet, a performance bottleneck

The internet is designed on the principle that everyone can talk to everyone, thereby providing universal connectivity whether required or not. There have been a number of design changes, with network address translation (NAT) being the biggest, but essentially the role of the internet has remained the same in terms of connectivity, regardless of location.

With this type of connectivity model, distance is an important determinant of application performance. Users on the other side of the planet will suffer regardless of buffer sizes or other device optimizations. Long RTTs are experienced as packets go back and forth before the actual data transmission. Caching and traffic redirection are used, but with limited success so far.

### The principles of application delivery

When transmission control protocol (TCP) starts, it thinks it is back in the late 1970s. It assumes that all services are on a local area network (LAN) and that there is no packet loss, and it works backward from there. Back when it was designed, we didn't have real-time traffic such as voice and video, which is latency- and jitter-sensitive.

TCP was designed for ease of use and reliability, not to boost performance, so the TCP stack needs to be tuned. This is a task CDNs are very good at. For example, if a connection comes from a mobile phone, a CDN will start with the assumption that there is going to be high jitter and packet loss. This allows it to size the TCP window to accurately match network conditions.
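
As a rough illustration of that window-sizing idea, the Python sketch below sets per-connection socket buffers based on an assumed client profile. The buffer values are illustrative guesses rather than tuned numbers, and production CDN stacks tune far more than this (initial congestion window, pacing, loss recovery) inside the kernel.

```
import socket

def tuned_socket(expect_lossy_mobile: bool) -> socket.socket:
    """Create a TCP socket with buffer sizes chosen for the expected path.

    The sizes below are illustrative only; a real stack derives them from
    measured bandwidth-delay products, not from a boolean flag.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm so small writes aren't held back.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    if expect_lossy_mobile:
        # Smaller buffers: assume high jitter and loss, keep queues short.
        bufsize = 64 * 1024
    else:
        # Larger buffers: assume a clean, high bandwidth-delay-product path.
        bufsize = 1024 * 1024
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    return sock
```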

How do you magnify performance; what options do you have? In a generic sense, many look to lowering latency. However, with applications such as video streaming, latency does not tell you whether the video is going to buffer; one can only assume that lower latency will lead to less buffering. In such a scenario, measuring throughput is a far better performance metric, since it tells you how fast an object will load.

We also have to consider page load times. At the network level, that means time to first byte (TTFB) and ping. However, these mechanisms don’t tell you much about the user experience, as everything they measure fits into one packet. Using ping will not inform you about bandwidth problems.
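
The difference between the two metrics is easy to demonstrate. This standard-library sketch times both the first byte and the full object for one URL; example.com is a stand-in address.

```
import time
import urllib.request

def measure(url: str, chunk_size: int = 65536):
    """Measure TTFB and whole-object throughput for one URL."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        first = resp.read(1)                 # first byte arrives here
        ttfb = time.monotonic() - start
        total = len(first)
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return ttfb, total / elapsed             # seconds, bytes per second

ttfb, bps = measure("https://example.com/")
print(f"TTFB: {ttfb * 1000:.0f} ms, throughput: {bps / 1e6:.2f} MB/s")
```

A tiny TTFB with a poor throughput figure is exactly the case the article describes: the first packet arrived quickly, yet the object still loads slowly.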

And if a web page goes slower by 25% once packet loss exceeds 5%, and you are measuring time to first byte, which is the 4th packet - what exactly can you learn? TTFB is comparable to an internet control message protocol (ICMP) request just one layer up the stack. It's good for telling you whether something is broken, not whether there is an underperformance issue.

When you examine the history of TTFB measuring, you will find that it was deployed due to the lack of real user monitoring (RUM) measurements. Previously, TTFB was good for approximating how fast something would load, but we don't have to approximate anymore, as we can measure it with RUM. RUM consists of measurements from end users; an example would be the metrics generated from a webpage that is being served to an actual user.

In conclusion, TTFB, ping and page load times are not sophisticated measurements. We should prefer RUM time measurements as much as we can, as they provide a more accurate picture of the user experience. This is something that has become critical over the last decade.

Now we are living in a world of RUM, which lets us build our network based on what matters to the business users. All CDNs should aim for RUM measurements. For this, they may need to integrate with traffic-management systems that intelligently measure what the end user really sees.

### The need for multi-CDN

Primarily, the reasons one would opt for a multi-CDN environment are availability and performance. No single CDN can be the fastest for everyone, everywhere in the world; the internet's connectivity model makes that impossible. However, combining the best of two or more CDN providers will increase performance.

A multi-CDN will give faster performance and higher availability than can be achieved with a single CDN. A good design runs two availability zones. A better design runs two availability zones with a single CDN provider. A superior design, however, runs two availability zones in a multi-CDN environment.

### Edge applications will be the new norm

It’s not that long ago that there was a transition from heavy, physical, monolithic architecture to the agile cloud. But all that really happened was a move from the physical appliance to a virtual cloud-based appliance. Maybe now is the time to ask: is this the future we really want?

One of the main issues in introducing edge applications is the mindset. It is challenging to convince yourself or your peers that the infrastructure you have spent all your time working on and investing in is not the best way forward for your business.

Although the cloud has created a big buzz, just because you migrate to the cloud does not mean that your applications will run faster. In fact, all you are really doing is abstracting the physical pieces of the architecture and paying someone else to manage it. The cloud has, however, opened the door for the edge application conversation. We have already taken the first step to the cloud, and now it's time to make the second move.

At its simplest, an edge application is a programmable CDN. A CDN is an edge application, and an edge application is a superset of what your CDN is doing. Edge applications denote cloud computing at the edge: a paradigm that distributes the application closer to the source for lower latency, additional resilience, and simplified infrastructure, while you still retain control and privacy.

From an architectural point of view, an edge application provides more resilience than deploying centralized applications. In today's world of high expectations, resilience is a necessity for the continuity of business. Edge applications allow you to collapse the infrastructure into an architecture that is cheaper, simpler and more attentive to the application. The less infrastructure sprawl there is, the more time you can focus on what really matters to your business - the customer.

### An example of an edge architecture

In one example of an edge architecture, every application within each PoP has its own isolated JavaScript (JS) environment. JavaScript is great for security isolation, and its performance guarantees scale. Each JavaScript instance is a dedicated, isolated environment that executes the customer's code at the edge.

Most likely, each JavaScript instance has its own virtual machine (VM). The VM's sole job is to run the JavaScript engine, and the only thing that engine runs is the customer's code. One could use V8, Google's open-source, high-performance JavaScript and WebAssembly engine.

Let’s face it: if you continue building more PoPs, you will hit the law of diminishing returns. When it comes to applications such as mobile, you really are maxed out when throwing PoPs at the problem. So we need to find another solution.

In the coming times, we are going to witness a trend where most applications become global, which means edge applications. It makes little sense to place the whole application in one location when your users are everywhere else.

**This article is published as part of the IDG Contributor Network. [Want to Join?][4]**

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3409027/how-edge-computing-is-driving-a-new-era-of-cdn.html

Author: [Matt Conran][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/02/network-traffic-100707086-large.jpg
[2]: https://network-insight.net/2016/12/buffers-packet-drops/
[3]: https://www.networkworld.com/article/3014599/cloud-computing/how-notre-dame-is-going-all-in-with-amazon-s-cloud.html#tk.nww-fsb
[4]: https://www.networkworld.com/contributor-network/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,73 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Public internet should be all software-defined)
[#]: via: (https://www.networkworld.com/article/3409783/public-internet-should-be-all-software-defined.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Public internet should be all software-defined
======
Having a programmable public internet will correct inefficiencies in the current system, engineers at NOIA say.
![Thinkstock][1]

The public internet should migrate to a programmable backbone-as-a-service architecture, says a team of network engineers behind NOIA, a startup promising to revolutionize global traffic. They say the internet will be more efficient if internet protocols and routing technologies are reworked and then combined with a traffic-trading blockchain.

It’s “impossible to use internet for modern applications,” the company says on its website. “Almost all global internet companies struggle to ensure uptime and reliable user experience.”

That’s because modern techniques aren’t being introduced fully, NOIA says. The engineers say algorithms should be implemented to route traffic and that segment routing technology should be adopted. Plus, blockchain should be used to trade internet transit capacity. A “programmable internet solves the web’s inefficiencies,” a representative from NOIA told me.

**[ Read also: [What is IPv6, and why aren’t we there yet?][2] ]**

### Deprecate the public internet

NOIA has started introducing a caching, distributed content delivery application to improve website loading times, but it ultimately wants to deprecate the existing internet completely.

The company currently has 353 active cache nodes around the world, with a total of 27 terabytes of storage for that caching system - NOIA clients contribute spare bandwidth and storage. It’s also testing a network backbone using four providers with European and American locations that it says will be the [development environment for its envisaged software-defined and radical internet replacement][3].

### The problem with today's internet

The “internet is a mesh of tangled up cables,” [NOIA says][4]. “Thousands of physically connected networks” are involved. Configuration alterations in any part of that jumble of networks cause issues with the protocols, it explains. The company is referring to Border Gateway Protocol (BGP), which lets routers discover paths to IP addresses through the disparate network. Because BGP only forwards to a neighboring router, it doesn’t manage the entire route. That introduces “severe variability,” or unreliability.

“It is impossible to guarantee service reliability without using overlay networks. Low-latency, performance-critical applications, and games cannot operate on public Internet,” the company says.

### How a software-defined internet works

NOIA's idea is to use [IPv6][5], the latest internet protocol. IPv6 features an expanded packet size and allows custom headers. The company then adds segment routing to create Segment Routing over IPv6 (SRv6). That SRv6 combination adds routing information to each data packet sent - a packet-level programmable network, in other words.

Segment routing, roughly, is an updated internet protocol that lets routers comprehend routing information in packet headers and then perform the routing. Cisco has been using it, too.
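
For a feel of what "routing information in packet headers" means in SRv6, here is a hedged sketch using the scapy packet library, assuming a recent release that ships the IPv6ExtHdrSegmentRouting layer; the addresses come from the 2001:db8::/32 documentation prefix and the segment list is invented for illustration, not taken from NOIA's design.

```
# Sketch of an SRv6-style packet with scapy (assumes a recent scapy
# release that includes IPv6ExtHdrSegmentRouting).
from scapy.layers.inet import UDP
from scapy.layers.inet6 import IPv6, IPv6ExtHdrSegmentRouting

# SRv6 encodes the segment list in reverse: the final segment sits at
# index 0, and the active segment is addresses[segleft].
segments = ["2001:db8::30", "2001:db8::20", "2001:db8::10"]

pkt = (
    IPv6(src="2001:db8::1", dst=segments[-1])        # first hop as destination
    / IPv6ExtHdrSegmentRouting(addresses=segments,    # the programmed path
                               segleft=len(segments) - 1)
    / UDP(dport=5000)
    / b"payload"
)
pkt.show()
```

Each router along the path decrements `segleft` and rewrites the destination to the next segment, which is how the sender, rather than BGP, ends up choosing the route.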

NOIA’s network then adds the SRv6 amalgamation to distributed ledger technology (blockchain) in order to let ISPs and data centers buy and sell routes - buyers can choose their routes in the exchange, too.

In addition to trade, blockchain introduces security. It's worth noting that routing isn't the only internet technology that could be disrupted by blockchain. In April I wrote about [organizations that propose moving data storage transactions over to distributed ledgers][6]. They say that will be more secure than anything seen before. [Ethernet’s lack of inherent security could be corrected by smart-contract, trackable, verifiable transactions][7], say some. And, of course, supply chain, the automotive vertical, and the selling of sensor data overall may emerge as [use cases for secure blockchain in the internet of things][8].

In NOIA’s case, with SRv6 blended with distributed ledgers, the encrypted ledger holds the IP addresses, but it is architecturally decentralized - no one controls it. That’s one element of added security, along with the aforementioned trading, provided by the ledger.

That trading could answer the question of who pays for all this. And NOIA says current internet hardware will be able to understand the segment routings, so no new equipment investments are needed.

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3409783/public-internet-should-be-all-software-defined.html

Author: [Patrick Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/05/dns_browser_http_web_internet_thinkstock-100758191-large.jpg
[2]: https://www.networkworld.com/article/3254575/lan-wan/what-is-ipv6-and-why-aren-t-we-there-yet.html
[3]: https://medium.com/noia/development-update-06-20-07-04-2879f9fce3cb
[4]: https://noia.network/
[5]: https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html
[6]: https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html
[7]: https://www.networkworld.com/article/3356496/how-blockchain-will-manage-networks.html
[8]: https://www.networkworld.com/article/3330937/how-blockchain-will-transform-the-iot.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -1,151 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Worst DNS attacks and how to mitigate them)
[#]: via: (https://www.networkworld.com/article/3409719/worst-dns-attacks-and-how-to-mitigate-them.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Worst DNS attacks and how to mitigate them
======
DNS threats, including DNS hijacking, tunneling, phishing, cache poisoning and DDoS attacks, are all on the rise.
![Max Bender \(CC0\)][1]

The Domain Name System remains under constant attack, and there seems to be no end in sight as threats grow increasingly sophisticated.

DNS, known as the internet’s phonebook, is part of the global internet infrastructure that translates between familiar names and the numbers computers need to access a website or send an email. While DNS has long been the target of assailants looking to steal all manner of corporate and private information, the threats in the [past year][2] or so indicate a worsening of the situation.

**More about DNS:**

* [DNS in the cloud: Why and why not][3]
* [DNS over HTTPS seeks to make internet use more private][4]
* [How to protect your infrastructure from DNS cache poisoning][5]
* [ICANN housecleaning revokes old DNS security key][6]

IDC reports that 82% of companies worldwide have faced a DNS attack over the past year. The research firm recently published its fifth annual [Global DNS Threat Report][7], which is based on a survey IDC conducted on behalf of DNS security vendor EfficientIP of 904 organizations across the world during the first half of 2019.

According to IDC's research, the average costs associated with a DNS attack rose by 49% compared to a year earlier. In the U.S., the average cost of a DNS attack tops out at more than $1.27 million. Almost half of respondents (48%) report losing more than $500,000 to a DNS attack, and nearly 10% say they lost more than $5 million on each breach. In addition, the majority of U.S. organizations say that it took more than one day to resolve a DNS attack.

“Worryingly, both in-house and cloud applications were damaged, with growth of over 100% for in-house application downtime, making it now the most prevalent damage suffered,” IDC wrote. “DNS attacks are moving away from pure brute-force to more sophisticated attacks acting from the internal network. This will force organizations to use intelligent mitigation tools to cope with insider threats.”

### Sea Turtle DNS hijacking campaign

An ongoing DNS hijacking campaign known as Sea Turtle is one example of what's occurring in today's DNS threat landscape.

This month, [Cisco Talos][8] security researchers said the people behind the Sea Turtle campaign have been busy [revamping their attacks][9] with new infrastructure and going after new victims.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][10] ]**

In April, Talos released a [report detailing][11] Sea Turtle and calling it the “first known case of a domain name registry organization that was compromised for cyber espionage operations.” Talos says the ongoing campaign is a state-sponsored attack that abuses DNS to harvest credentials in order to gain access to sensitive networks and systems in a way that victims are unable to detect, displaying unique knowledge of how to manipulate DNS.

By obtaining control of victims’ DNS, the attackers can change or falsify any data on the Internet and illicitly modify DNS name records to point users to actor-controlled servers; users visiting those sites would never know, Talos reports.

The hackers behind Sea Turtle appear to have regrouped after the April report from Talos and are redoubling their efforts with new infrastructure - a move Talos researchers find unusual: “While many actors will slow down once they are discovered, this group appears to be unusually brazen, and will be unlikely to be deterred going forward,” Talos [wrote][9] in July.

“Additionally, we discovered a new DNS hijacking technique that we assess with moderate confidence is connected to the actors behind Sea Turtle. This new technique is similar in that the threat actors compromise the name server records and respond to DNS requests with falsified A records,” Talos stated.

“This new technique has only been observed in a few highly targeted operations. We also identified a new wave of victims, including a country code top-level domain (ccTLD) registry, which manages the DNS records for every domain [that] uses that particular country code; that access was used to then compromise additional government entities. Unfortunately, unless there are significant changes made to better secure DNS, these sorts of attacks are going to remain prevalent,” Talos wrote.

### DNSpionage attack upgrades its tools

Another newer threat to DNS comes in the form of an attack campaign called [DNSpionage][12].

DNSpionage initially used two malicious websites containing job postings to compromise targets via crafted Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers. And the attackers are continuing to develop new assault techniques.

“The threat actor's ongoing development of DNSpionage malware shows that the attacker continues to find new ways to avoid detection. DNS tunneling is a popular method of exfiltration for some actors, and recent examples of DNSpionage show that we must ensure DNS is monitored as closely as an organization's normal proxy or weblogs,” [Talos wrote][13]. “DNS is essentially the phonebook of the internet, and when it is tampered with, it becomes difficult for anyone to discern whether what they are seeing online is legitimate.”
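
That monitoring point is worth making concrete: tunneled exfiltration tends to show up as long, high-entropy subdomain labels queried at unusual rates. The Python sketch below scores query names that way; the thresholds and the example names are illustrative assumptions, not Talos tooling.

```
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of a string; random-looking data scores high."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(qname: str, entropy_threshold: float = 3.5,
                      length_threshold: int = 30) -> bool:
    """Flag DNS names whose subdomain part is unusually long and random."""
    labels = qname.rstrip(".").split(".")
    subdomain = "".join(labels[:-2])   # everything left of the 2LD, roughly
    if len(subdomain) < length_threshold:
        return False
    return shannon_entropy(subdomain) > entropy_threshold

# Toy examples: an ordinary lookup and a base32-looking exfiltration name.
print(looks_like_tunnel("www.example.com"))                                # False
print(looks_like_tunnel("nbswy3dpeb3w64tmmqqfcu32grkfc3dj.evil.example"))  # True
```

A real detector would also watch per-client query volume and TXT/NULL record usage, but even this crude scoring separates the two examples above.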

The DNSpionage campaign targeted various businesses in the Middle East as well as United Arab Emirates government domains.

“One of the biggest problems with DNS attacks, or the lack of protection from them, is complacency,” said Craig Williams, director of Talos outreach. Companies think DNS is stable and that they don’t need to worry about it. “But what we are seeing with attacks like DNSpionage and Sea Turtle are kind of the opposite, because attackers have figured out how to use it to their advantage - how to use it to do damage to credentials in a way, in the case of Sea Turtle, that the victim never even knows it happened. And that’s a real potential problem.”

If you know your name server has been compromised, you can force everyone to change their passwords. But if the attackers instead go after the registrar and the registrar points to their name server, you would never know it happened, because nothing of yours was touched - that’s why these new threats are so nefarious, Williams said.

“Once attackers start using it publicly, successfully, other bad guys are going to look at it and say, ‘Hey, why don't I use that to harvest a bunch of credentials from the sites I am interested in,’” Williams said.

### **The DNS IoT risk**

Another developing risk is the proliferation of IoT devices. The Internet Corporation for Assigned Names and Numbers (ICANN) recently wrote a [paper on the risk that IoT brings to DNS][14].

“The IoT is a risk to the DNS because various measurement studies suggest that IoT devices could stress the DNS infrastructure in ways that we have not seen before,” ICANN stated. “For example, a software update for a popular IP-enabled IoT device that causes the device to use the DNS more frequently (e.g., regularly lookup random domain names to check for network availability) could stress the DNS in individual networks when millions of devices automatically install the update at the same time.”

While this is a programming error from the perspective of individual devices, it could result in a significant attack vector from the perspective of DNS infrastructure operators. Incidents like this have already occurred on a small scale, but they may occur more frequently in the future due to the growth of heterogeneous IoT devices from manufacturers that equip their IoT devices with controllers that use the DNS, ICANN stated.

ICANN also suggested that IoT botnets will present an increased threat to DNS operators: “Larger DDoS attacks, partly because IoT bots are more difficult to eradicate. Current botnet sizes are on the order of hundreds of thousands. The most well-known example is the Mirai botnet, which involved 400K (steady-state) to 600K (peak) infected IoT devices. The Hajime botnet hovers around 400K infected IoT devices, but has not launched any DDoS attacks yet. With the growth of the IoT, these attacks may grow to involve millions of bots and as a result larger DDoS attacks.”

### **DNS security warnings grow**

The UK's [National Cyber Security Centre (NCSC)][15] issued a warning this month about ongoing DNS attacks, particularly focusing on DNS hijacking. It cited a number of risks associated with the uptick in DNS hijacking, including:

**Creating malicious DNS records.** A malicious DNS record could be used, for example, to create a phishing website that is present within an organization’s familiar domain. This may be used to phish employees or customers.

**Obtaining SSL certificates.** Domain-validated SSL certificates are issued based on the creation of DNS records; thus an attacker may obtain valid SSL certificates for a domain name, which could be used to create a phishing website intended to look like an authentic website, for example.

**Transparent proxying.** One serious risk employed recently involves transparently proxying traffic to intercept data. The attacker modifies an organization’s configured domain zone entries (such as “A” or “CNAME” records) to point traffic to their own IP address, which is infrastructure they manage.

“An organization may lose total control of their domain and often the attackers will change the domain ownership details, making it harder to recover,” the NCSC wrote.
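
A basic mitigation for this class of hijack is to watch your own records from outside your infrastructure. Here is a minimal sketch that assumes the dnspython 2.x package and its `dns.resolver.resolve` call; the expected values are placeholders standing in for a known-good snapshot kept out of band.

```
# Minimal external DNS-drift monitor; assumes the dnspython 2.x package.
import dns.exception
import dns.resolver

# Known-good records, kept out of band; the values here are placeholders.
EXPECTED = {
    ("example.com", "A"): {"93.184.216.34"},
    ("example.com", "NS"): {"a.iana-servers.net.", "b.iana-servers.net."},
}

def check(expected):
    """Return (name, rtype, observed) tuples for records that deviate."""
    alerts = []
    for (name, rtype), good in expected.items():
        try:
            answer = dns.resolver.resolve(name, rtype)
            observed = {r.to_text() for r in answer}
        except dns.exception.DNSException as exc:
            alerts.append((name, rtype, f"lookup failed: {exc}"))
            continue
        if observed != good:
            alerts.append((name, rtype, observed))
    return alerts

for alert in check(EXPECTED):
    print("possible tampering:", alert)
```

Running such a check from several vantage points, on a schedule, catches exactly the silent registrar- and zone-level changes described above.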

These new threats, as well as other dangers, led the U.S. government to issue a warning earlier this year about DNS attacks on federal agencies.

The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) told all federal agencies to bolt down their DNS in the face of a series of global hacking campaigns.

CISA said in its [Emergency Directive][16] that it was tracking a series of incidents targeting DNS infrastructure. CISA wrote that it “is aware of multiple executive branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.”

CISA says that attackers have managed to intercept and redirect web and mail traffic and could target other networked services. The agency said the attacks start with compromising the user credentials of an account that can make changes to DNS records. The attacker then alters DNS records, such as Address (A), Mail Exchanger (MX), or Name Server (NS) records, replacing the legitimate address of a service with an address the attacker controls.

These actions let the attacker direct user traffic to their own infrastructure for manipulation or inspection before passing it on to the legitimate service, should they choose. This creates a risk that persists beyond the period of traffic redirection, CISA stated.

“Because the attacker can set DNS record values, they can also obtain valid encryption certificates for an organization’s domain names. This allows the redirected traffic to be decrypted, exposing any user-submitted data. Since the certificate is valid for the domain, end users receive no error warnings,” CISA stated.

### **Get on the DNSSEC bandwagon**

“Enterprises that are potential targets - in particular those that capture or expose user and enterprise data through their applications - should heed this advisory by the NCSC and should pressure their DNS and registrar vendors to make DNSSEC and other domain security best practices easy to implement and standardized,” said Kris Beevers, co-founder and CEO of DNS security vendor [NS1][17]. “They can easily implement DNSSEC signing and other domain security best practices with technologies in the market today. At the very least, they should work with their vendors and security teams to audit their implementations.”

DNSSEC was in the news earlier this year when, in response to increased DNS attacks, ICANN called for an intensified community effort to install stronger DNS security technology.

Specifically, ICANN wants full deployment of the Domain Name System Security Extensions ([DNSSEC][18]) across all unsecured domain names. DNSSEC adds a layer of security on top of DNS. Full deployment of DNSSEC ensures that end users are connecting to the actual web site or other service corresponding to a particular domain name, ICANN said. “Although this will not solve all the security problems of the Internet, it does protect a critical piece of it - the directory lookup - complementing other technologies such as SSL (https:) that protect the ‘conversation’, and provide a platform for yet-to-be-developed security improvements,” ICANN stated.
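
Checking whether a name's DNSSEC chain validates is simple enough to script. The sketch below, again assuming dnspython 2.x, sets the DO (DNSSEC OK) flag on a query and looks for the AD (authenticated data) bit set by a validating resolver; 1.1.1.1 is just one public validating resolver you could point it at.

```
# Does a validating resolver vouch for this name's DNSSEC chain?
import dns.flags
import dns.message
import dns.query

def dnssec_validated(name: str, resolver_ip: str = "1.1.1.1") -> bool:
    query = dns.message.make_query(name, "A", want_dnssec=True)
    response = dns.query.udp(query, resolver_ip, timeout=3.0)
    # AD set means the resolver validated the RRSIG chain of trust.
    return bool(response.flags & dns.flags.AD)

print(dnssec_validated("icann.org"))  # expect True for a signed zone
```

Note this trusts the resolver's validation rather than verifying signatures locally; a sketch that walks the chain itself would also need the zone's DNSKEY and RRSIG records.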

DNSSEC technologies have been around since about 2010 but are not widely deployed; fewer than 20% of the world’s DNS registrars have deployed it, according to the regional internet address registry for the Asia-Pacific region ([APNIC][19]).

DNSSEC adoption has been lagging because it was viewed as optional and can require a tradeoff between security and functionality, said NS1's Beevers.

### **Traditional DNS threats**

While DNS hijacking may be the front-line attack method, other, more traditional threats still exist.

The IDC/EfficientIP study found that the most popular DNS threats have changed compared with last year. Phishing (47%) is now more popular than last year’s favorite, DNS-based malware (39%), followed by DDoS attacks (30%), false-positive triggering (26%), and lock-up domain attacks (26%).

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3409719/worst-dns-attacks-and-how-to-mitigate-them.html

Author: [Michael Cooney][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/anonymous_faceless_hooded_mand_in_scary_halloween_mask_finger_to_lips_danger_threat_stealth_attack_hacker_hush_silence_warning_by_max_bender_cc0_via_unsplash_1200x800-100766358-large.jpg
[2]: https://www.fireeye.com/blog/threat-research/2019/01/global-dns-hijacking-campaign-dns-record-manipulation-at-scale.html
[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
[7]: https://www.efficientip.com/resources/idc-dns-threat-report-2019/
[8]: https://www.talosintelligence.com/
[9]: https://blog.talosintelligence.com/2019/07/sea-turtle-keeps-on-swimming.html
[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[11]: https://blog.talosintelligence.com/2019/04/seaturtle.html
[12]: https://www.networkworld.com/article/3390666/cisco-dnspionage-attack-adds-new-tools-morphs-tactics.html
[13]: https://blog.talosintelligence.com/2019/04/dnspionage-brings-out-karkoff.html
[14]: https://www.icann.org/en/system/files/files/sac-105-en.pdf
[15]: https://www.ncsc.gov.uk/news/ongoing-dns-hijacking-and-mitigation-advice
[16]: https://cyber.dhs.gov/ed/19-01/
[17]: https://ns1.com/
[18]: https://www.icann.org/resources/pages/dnssec-qaa-2014-01-29-en
[19]: https://www.apnic.net/
@ -1,67 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data centers may soon recycle heat into electricity)
[#]: via: (https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Data centers may soon recycle heat into electricity
======
Rice University researchers are developing a system that converts waste heat into light and then that light into electricity, which could help data centers reduce computing costs.
![Gordon Mah Ung / IDG][1]

Waste heat is the scourge of computing. In fact, much of the cost of powering a computer goes into creating unwanted heat, because inefficiencies in electronic circuits, caused by resistance in the materials, generate that heat. The processors, without computing anything, are essentially converting expensively produced electrical energy into waste energy.

It’s a fundamental problem, and one that hasn’t been going away. But what if you could convert the unwanted heat back into electricity - recycle the heat back into its original energy form? The data center heat, instead of simply being disgorged into the atmosphere with dubious ecological effects, could actually run more machines. Plus, your cooling costs would be taken care of - there’s nothing to cool, because you’ve already grabbed the hot air.

**[ Read also: [How server disaggregation can boost data center efficiency][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

Scientists at Rice University are trying to make that a reality by developing heat-scavenging and conversion solutions.

Currently, the most efficient way to convert heat into electricity is through the use of traditional turbines.

Turbines “can give you nearly 50% conversion efficiency,” says Chloe Doiron, a graduate student at Rice University and co-lead on the project, in a [news article][4] on the school’s website. Turbines convert the kinetic energy of moving fluids, such as steam or combustion gases, into mechanical energy. The moving steam turns blades mounted on a shaft, which turns a generator, thus creating power.

Not a bad solution. The problem, though, is that “those systems are not easy to implement,” the researchers explain: turbines are full of moving parts, and they’re big, noisy, and messy.

### Thermal emitter better than turbines for converting heat to energy

A better option would be a solid-state thermal device that could absorb heat at the source and simply convert it, perhaps straight into attached batteries.

The researchers say a thermal emitter could absorb heat, squeeze it into a tight, easy-to-capture bandwidth and then emit it as light. Cunningly, they would then simply turn the light into electricity, as we see all the time now in solar systems.

“Thermal photons are just photons emitted from a hot body,” says Rice University professor Junichiro Kono in the article. “If you look at something hot with an infrared camera, you see it glow. The camera is capturing these thermally excited photons.” Indeed, all heated surfaces, to some extent, send out light as thermal radiation.

The Rice team wants to use a film of aligned carbon nanotubes to do the job. The test system will be structured as an actual solar panel. That’s because solar panels, too, lose energy through heat, so they are a good environment in which to work. The concept applies to other inefficient technologies, too. “Anything else that loses energy through heat [would become] far more efficient,” the researchers say.

Around 20% of industrial energy consumption ends up as unwanted heat, Doiron says. That's a lot of wasted energy.

### Other heat conversion solutions

Other heat-scavenging devices are making inroads, too. Commercially available thermoelectric technology can convert a temperature difference into power, also with no moving parts: expose a specially made material to heat, and [electrons flow when one part is cold and one is hot][5]. And the University of Utah is working on [silicon for chips that generates electricity][6] as one of two wafers heats up.
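
For a back-of-the-envelope sense of scale, the standard relation for a single thermoelectric couple is an open-circuit voltage of V = S·ΔT and a maximum matched-load power of (S·ΔT)²/4R. The numbers in this sketch are generic illustrative values, not figures from either research team.

```
# Back-of-the-envelope thermoelectric generator output.
# V_oc = S * dT; max power into a matched load is V_oc**2 / (4 * R).
S = 200e-6   # Seebeck coefficient, V/K (typical order for Bi2Te3-class material)
dT = 40.0    # temperature difference across the couple, K
R = 2.0      # internal resistance, ohms

v_oc = S * dT                 # open-circuit voltage: 8 mV
p_max = v_oc ** 2 / (4 * R)   # matched-load power: 8 microwatts

print(f"open-circuit voltage: {v_oc * 1000:.1f} mV")
print(f"max power: {p_max * 1e6:.0f} microwatts")
```

The tiny per-couple output is why practical modules stack hundreds of couples in series, and why high-efficiency alternatives such as the Rice thermal-emitter approach are attractive.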

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html

Author: [Patrick Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/flir_20190711t191326-100801627-large.jpg
[2]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://news.rice.edu/2019/07/12/rice-device-channels-heat-into-light/
[5]: https://www.networkworld.com/article/2861438/how-to-convert-waste-data-center-heat-into-electricity.html
[6]: https://unews.utah.edu/beat-the-heat/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -1,78 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Reports: As the IoT grows, so do its threats to DNS)
[#]: via: (https://www.networkworld.com/article/3411437/reports-as-the-iot-grows-so-do-its-threats-to-dns.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Reports: As the IoT grows, so do its threats to DNS
======
ICANN and IBM's security researchers separately spell out how the growth of the internet of things will increase opportunities for malicious actors to attack the Domain Name System with hyperscale botnets and worm their malware into the cloud.
The internet of things is shaping up to be a more significant threat to the Domain Name System through larger IoT botnets, unintentional adverse effects of IoT-software updates and the continuing development of bot-herding software.

The Internet Corporation for Assigned Names and Numbers (ICANN) and IBM’s X-Force security researchers have recently issued reports outlining the interplay between DNS and IoT, including warnings about the pressure IoT botnets will put on the availability of DNS systems.

**More about DNS:**

* [DNS in the cloud: Why and why not][1]
* [DNS over HTTPS seeks to make internet use more private][2]
* [How to protect your infrastructure from DNS cache poisoning][3]
* [ICANN housecleaning revokes old DNS security key][4]

ICANN’s Security and Stability Advisory Committee (SSAC) wrote in a [report][5] that “a significant number of IoT devices will likely be IP enabled and will use the DNS to locate the remote services they require to perform their functions. As a result, the DNS will continue to play the same crucial role for the IoT that it has for traditional applications that enable human users to interact with services and content,” ICANN stated. “The role of the DNS might become even more crucial from a security and stability perspective with IoT devices interacting with people’s physical environment.”

IoT represents both an opportunity and a risk to the DNS, ICANN stated. “It is an opportunity because the DNS provides functions and data that can help make the IoT more secure, stable, and transparent, which is critical given the IoT's interaction with the physical world. It is a risk because various measurement studies suggest that IoT devices may stress the DNS, for instance, because of complex DDoS attacks carried out by botnets that grow to hundreds of thousands or in the future millions of infected IoT devices within hours,” ICANN stated.

### Unintentional DDoS attacks

One risk is that the IoT could place new burdens on the DNS. “For example, a software update for a popular IP-enabled IoT device that causes the device to use the DNS more frequently (e.g., regularly lookup random domain names to check for network availability) could stress the DNS in individual networks when millions of devices automatically install the update at the same time,” ICANN stated.

While this is a programming error from the perspective of individual devices, it could result in a significant attack vector from the perspective of DNS infrastructure operators. Incidents like this have already occurred on a small scale, but they may occur more frequently in the future due to the growth of heterogeneous IoT devices from manufacturers that equip their IoT devices with controllers that use the DNS, ICANN stated.
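
The device-side fix for that failure mode is well understood: spread the checks out. Below is a small Python sketch of a jittered, exponentially backed-off update check; the intervals and the stubbed probe are illustrative assumptions, not anything prescribed by the ICANN report.

```
import random
import time

BASE_INTERVAL = 6 * 3600   # nominal update-check period, seconds (illustrative)
MAX_BACKOFF = 24 * 3600    # cap for retry backoff, seconds (illustrative)

def next_check_delay(failures: int) -> float:
    """Exponential backoff with full jitter, so fleets never synchronize.

    failures == 0 is the steady state: the nominal period, +/- up to 25%.
    """
    if failures == 0:
        return BASE_INTERVAL * (1 + random.uniform(-0.25, 0.25))
    backoff = min(MAX_BACKOFF, BASE_INTERVAL * (2 ** failures))
    return random.uniform(0, backoff)   # "full jitter"

def check_for_update() -> bool:
    """Stand-in for the device's real probe (one DNS lookup plus an HTTP GET)."""
    return True

failures = 0
for _ in range(3):                      # bounded here; a device would loop forever
    time.sleep(next_check_delay(failures))
    failures = 0 if check_for_update() else failures + 1
```

With every device drawing its own random delay, a fleet of millions spreads its DNS lookups across hours instead of firing them in one synchronized burst.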

### Massively larger botnets, threat to clouds

The report also suggested that the scale of IoT botnets could grow from hundreds of thousands of devices to millions. The best-known IoT botnet is Mirai, responsible for DDoS attacks involving 400,000 to 600,000 devices. The Hajime botnet hovers around 400K infected IoT devices but has not launched any DDoS attacks yet. But as the IoT grows, so will the botnets, and with them larger DDoS attacks.

Cloud-connected IoT devices could also endanger cloud resources. “IoT devices connected to cloud architecture could allow Mirai adversaries to gain access to cloud servers. They could infect a server with additional malware dropped by Mirai or expose all IoT devices connected to the server to further compromise,” wrote Charles DeBeck, a senior cyber threat intelligence strategic analyst with [IBM X-Force Incident Response][6], in a recent report.

“As organizations increasingly adopt cloud architecture to scale efficiency and productivity, disruption to a cloud environment could be catastrophic.”

For enterprises that are rapidly adopting both IoT technology and cloud architecture, insufficient security controls could expose the organization to elevated risk, calling for the security committee to conduct an up-to-date risk assessment, DeBeck stated.

### Attackers continue malware development

“Since this activity is highly automated, there remains a strong possibility of large-scale infection of IoT devices in the future,” DeBeck stated. “Additionally, threat actors are continuing to expand their targets to include new types of IoT devices and may start looking at industrial IoT devices or connected wearables to increase their footprint and profits.”

Botnet bad guys are also developing new Mirai variants and IoT botnet malware outside of the Mirai family to target IoT devices, DeBeck stated.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3411437/reports-as-the-iot-grows-so-do-its-threats-to-dns.html

Author: [Michael Cooney][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
[2]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
[3]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
[4]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
[5]: https://www.icann.org/en/system/files/files/sac-105-en.pdf
[6]: https://securityintelligence.com/posts/i-cant-believe-mirais-tracking-the-infamous-iot-malware-2/?cm_mmc=OSocial_Twitter-_-Security_Security+Brand+and+Outcomes-_-WW_WW-_-SI+TW+blog&cm_mmca1=000034XK&cm_mmca2=10009814&linkId=70790642