Remove expired articles

Xingyu Wang 2022-01-13 09:20:59 +08:00
parent bcb688f94d
commit efd98eb084
556 changed files with 0 additions and 73914 deletions


@@ -1,113 +0,0 @@
The Lineage of Man
======
I've always found man pages fascinating. Formatted as strangely as they are and accessible primarily through the terminal, they have always felt to me like relics of an ancient past. Some man pages probably are ancient: I'd love to know how many times the man page for `cat` or, say, `tee` has been revised since the early days of Unix, but I'm willing to bet it's not many. Man pages are mysterious—it's not obvious where they come from, where they live on your computer, or what kind of file they might be stored in—and yet it's hard to believe that something so fundamental and so obviously governed by rigid conventions could remain so inscrutable. Where did the man page conventions come from? Where are they codified? If I wanted to write my own man page, where would I even begin?
The story of `man` is inextricably tied to the story of Unix. The very first version of Unix, completed in 1971 but only available internally at Bell Labs, did not provide a `man` command. But Douglas McIlroy, who at the time was head of the Computing Techniques Research Department and managed the Unix project, insisted that some kind of documentation be made available. He pushed Ken Thompson and Dennis Ritchie, the two programmers commonly credited with creating Unix, to write some. The result was the [first edition][1] of the Unix Programmer's Manual.
The first edition of the Unix Programmer's Manual consisted of (physical!) pages collected together in a single binder. It documented only 61 different commands, along with a couple dozen system calls and a few library routines. Though the `man` command itself was not to come until later, the first edition of the Unix Programmer's Manual established many of the conventions that man pages adhere to today, even in the absence of an official specification. The documentation for each command included the well-known NAME, SYNOPSIS, DESCRIPTION, and SEE ALSO headings. Optional flags were enclosed in square brackets and meta-arguments (for example, “file” where a file path is expected) were underlined. The manual also established the canonical manual sections such as Section 1 for General Commands, Section 2 for System Calls, and so on; these sections were, at the time, simply sections of a very long printed document. Thompson and Ritchie could not have known that they were establishing a tradition that would survive for decades and decades, but that is what they did.
McIlroy later speculated about why the man page format has survived as long as it has. In a technical report about the conceptual development of Unix, he noted that the original man pages were written in a “terse, yet informal, prose style” that together with the alphabetical ordering of information “encouraged accurate on-line documentation.” In a nod to an experience with man pages that all programmers have had at one time or another, he added that the man page format “was popular with initiates who needed to look up facts, albeit sometimes frustrating for beginners who didn't know what facts to look for.” McIlroy was highlighting the sometimes-overlooked distinction between tutorial and reference; man pages may not be much use for the former, but they are perfect for the latter.
The `man` command was a part of Unix by the time the [second edition][2] of the Unix Programmer's Manual was printed. It made the entire manual available “on-line”, meaning interactively, which was seen as enormously useful. The `man` command has its own manual page in the second edition (this page is the original `man man`), which explains that `man` can be used to “run off a particular section of this manual.” Among the original Unix wizards, the term “run off” referred to the physical act of printing a document but also to the program they used to typeset documents, `roff`. The `roff` program had been used to typeset both the first and second editions of the Unix Programmer's Manual before they were printed, but it was now also used by `man` to process man pages before they were displayed. The man pages themselves were stored on every Unix system in a file format meant to be read by `roff`.
`roff` was the first in a long lineage of typesetting programs that have always been used to format man pages. Its own development can be traced back to a program called `RUNOFF` that was written in the mid-60s. At Bell Labs, `roff` spawned several successors including `nroff` (en-roff) and `troff` (tee-roff). `nroff` was designed to improve on `roff` and better output text to terminals, while `troff` tackled the problem of printing using a CAT phototypesetter. (If you don't know what phototypesetting is, as I did not, I refer you to [this][3] eminently watchable film.) All of these programs were based on a kind of markup language consisting of two-letter commands inserted at the beginning of every line in a document. These commands could control such things as font size, text positioning, line spacing, and so on. Today, the most common implementation of the `roff` system is `groff`, a part of the GNU project.
It's easy to get a sense of what `roff` input files look like by just taking a gander at some of the man pages stored on your own computer. At least on a BSD-derived system like macOS, you can use the `--path` argument to `man` to find out where the man page for a particular command is stored. Typically this will be under `/usr/share/man` or `/usr/local/share/man`. Using `man` this way, you can find the path for the `man` man page itself and then open it in a text editor. It will not look anything like what you're used to looking at with `man`. On my system, the first couple dozen lines are:
```
.TH man 1 "September 19, 2005"
.LO 1
.SH NAME
man \- format and display the on-line manual pages
.SH SYNOPSIS
.B man
.RB [ \-acdfFhkKtwW ]
.RB [ --path ]
.RB [ \-m
.IR system ]
.RB [ \-p
.IR string ]
.RB [ \-C
.IR config_file ]
.RB [ \-M
.IR pathlist ]
.RB [ \-P
.IR pager ]
.RB [ \-B
.IR browser ]
.RB [ \-H
.IR htmlpager ]
.RB [ \-S
.IR section_list ]
.RI [ section ]
.I "name ..."
.SH DESCRIPTION
.B man
formats and displays the on-line manual pages. If you specify
.IR section ,
.B man
only looks in that section of the manual.
.I name
is normally the name of the manual page, which is typically the name
of a command, function, or file.
However, if
.I name
contains a slash
.RB ( / )
then
.B man
interprets it as a file specification, so that you can do
.B "man ./foo.5"
or even
.B "man /cd/foo/bar.1.gz\fR.\fP"
.PP
See below for a description of where
.B man
looks for the manual page files.
```
You can make out, for example, that all of the section headings are preceded by `.SH`, and everything that would appear in bold is preceded by `.B`. These commands are `roff` macros specifically designed for writing man pages. The macros used here are part of a package called `man`, but there are other packages such as `mdoc` that you can use for the same purpose. The macros make writing man pages much simpler than it would otherwise be. They also enforce consistency by always compiling down to the same set of lower-level `roff` commands. The `man` and `mdoc` packages are now documented under [GROFF_MAN(7)][4] and [GROFF_MDOC(7)][5] respectively.
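To get a feel for these macros, here is a minimal sketch of a complete man page written with the `man` package. The `frob` command it documents is invented purely for illustration:
```
.\" A hypothetical man page for an invented "frob" command.
.TH FROB 1 "January 2022" "frob 1.0"
.SH NAME
frob \- frobnicate the input file
.SH SYNOPSIS
.B frob
.RB [ \-v ]
.I file
.SH DESCRIPTION
.B frob
reads
.I file
and frobnicates it. With
.BR \-v ,
it reports each step as it runs.
.SH SEE ALSO
.BR roff (7)
```
Processed with `groff -man frob.1` (or viewed directly with `man ./frob.1`, as the man page quoted above suggests), these few lines yield the familiar bold headings, bracketed flags, and underlined meta-arguments.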
The entire `roff` system is reminiscent of LaTeX, another typesetting tool that today enjoys much more popularity. LaTeX is essentially a big bucket of macros built on top of the core TeX system designed by Donald Knuth. Like with `roff`, there are many other macro packages that you can incorporate into your LaTeX documents. These macro packages mean that you almost never have to write raw TeX yourself. LaTeX has superseded `roff` in many domains, but it is poorly suited to formatting text for a terminal, so nobody uses it to write man pages.
If you were to write a man page today, in 2017, how would you go about it? You certainly could write one using a `roff` macro package like `man` or `mdoc`. The syntax is unfamiliar and unwieldy, but the macros abstract away so much of the complexity that you can write a reasonably complete man page without learning very many commands. That said, there are now other options worth considering.
[Pandoc][6] is a widely used software tool for converting documents from one format to another. You can use Pandoc to convert Markdown files into `man`-macro-based man pages, meaning that you can now write your man pages in something as straightforward as Markdown. Pandoc supports many more Markdown constructs than most Markdown converters, giving you lots of ways to format your man page. While this convenience comes at the cost of some control, it's unlikely that you will ever need something that would warrant dropping down to the `roff` macro level. If you're curious about what these Markdown files might look like, I've written [a few of my own][7] to document a tool I created for keeping notes on how to use different command-line utilities. NPM's [documentation][8] is also written in Markdown and converted to a `roff` man format later, though they use a JavaScript package called [marked-man][9] instead of Pandoc to do the conversion.
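As a sketch of what that workflow looks like (again using the invented `frob` command, and assuming Pandoc's usual `%` title-block convention for its man writer), the Markdown source might be:
```
% FROB(1) frob 1.0

# NAME

frob - frobnicate the input file

# SYNOPSIS

**frob** [**-v**] *file*

# DESCRIPTION

**frob** reads *file* and frobnicates it.
```
A command along the lines of `pandoc --standalone --to man frob.md -o frob.1` then emits `man`-macro output much like the hand-written version above.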
So there are now plenty of ways to write man pages, giving you lots of freedom to choose the tool you think best. That said, you'd better ensure that your man page reads like every other man page that has ever been written. Even though there is now so much flexibility in tooling, the man page conventions are as strong as ever. And you might be tempted to skip writing a man page altogether—after all, you probably have documentation on the web, or maybe you just want to rely on the `--help` flag—but you're forgoing the patina of respectability a man page can provide. The man page is an institution that doesn't seem likely to disappear or evolve soon, which is fascinating, because there are so many ways in which we could do man pages better. XML didn't come off well in my [last post][10], but it would be the perfect format here, and it would allow us to do something like query `man` about an option:
```
$ man grep -v
Selected lines are those not matching any of the specified patterns.
```
Imagine that! But it seems that we're all too used to man pages the way they are. In a field where rapid change is the norm, maybe some stability—particularly in a documentation system we all turn to in moments of ignorance and confusion—is a good thing.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][11] on Twitter or subscribe to the [RSS feed][12] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/09/28/the-lineage-of-man.html
Author: [Two-Bit History][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.bell-labs.com/usr/dmr/www/1stEdman.html
[2]: http://bitsavers.informatik.uni-stuttgart.de/pdf/att/unix/2nd_Edition/UNIX_Programmers_Manual_2ed_Jun72.pdf
[3]: https://vimeo.com/127605644
[4]: http://man7.org/linux/man-pages/man7/groff_man.7.html
[5]: http://man7.org/linux/man-pages/man7/groff_mdoc.7.html
[6]: http://pandoc.org/
[7]: https://github.com/sinclairtarget/um/tree/02365bd0c0a229efb936b3d6234294e512e8a218/doc
[8]: https://github.com/npm/npm/blob/20589f4b028d3e8a617800ac6289d27f39e548e8/doc/cli/npm.md
[9]: https://www.npmjs.com/package/marked-man
[10]: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
[11]: https://twitter.com/TwoBitHistory
[12]: https://twobithistory.org/feed.xml


@@ -1,50 +0,0 @@
The Most Important Database You've Never Heard of
======
In 1962, JFK challenged Americans to send a man to the moon by the end of the decade, inspiring a heroic engineering effort that culminated in Neil Armstrong's first steps on the lunar surface. Many of the fruits of this engineering effort were highly visible and sexy—there were new spacecraft, new spacesuits, and moon buggies. But the Apollo Program was so staggeringly complex that new technologies had to be invented even to do the mundane things. One of these technologies was IBM's Information Management System (IMS).
IMS is a database management system. NASA needed one in order to keep track of all the parts that went into building a Saturn V rocket, which—because there were two million of them—was expected to be a challenge. Databases were a new idea in the 1960s and there weren't any already available for NASA to use, so, in 1965, NASA asked IBM to work with North American Aviation and Caterpillar Tractor to create one. By 1968, IBM had installed a working version of IMS at NASA, though at the time it was called ICS/DL/I for “Informational Control System and Data Language/Interface.” (IBM seems to have gone through a brief, unfortunate infatuation with the slash; see [PL/I][1].) Two years later, IBM rebranded ICS/DL/I as “IMS” and began selling it to other customers. It was one of the first commercially available database management systems.
The incredible thing about IMS is that it is still in use today. And not just on a small scale: Banks, insurance companies, hospitals, and government agencies still use IMS for all sorts of critical tasks. Over 95% of Fortune 1000 companies use IMS in some capacity, as do all of the top five US banks. Whenever you withdraw cash from an ATM, the odds are exceedingly good that you are interacting with IMS at some point in the course of your transaction. In a world where the relational database is an old workhorse increasingly in competition with trendy new NoSQL databases, IMS is a freaking dinosaur. It is a relic from an era before the relational database was even invented, which didn't happen until 1970. And yet it seems to be the database system in charge of all the important stuff.
I think this makes IMS pretty interesting. Depending on how you feel about relational databases, it either offers insight into how the relational model improved on its predecessors or else exemplifies an alternative model better suited to certain problems.
IMS works according to a hierarchical model, meaning that, instead of thinking about data as tables that can be brought together using JOIN operations, IMS thinks about data as trees. Each kind of record you store can have other kinds of records as children; these child record types represent additional information that you might be interested in given a record of the parent type.
To take an example, say that you want to store information about bank customers. You might have one type of record to represent customers and another type of record to represent accounts. As in a relational database, where each table has columns, these records will have different fields; we might want to have a first name field, a last name field, and a city field for each customer. We must then decide whether we are likely to first look up a customer and then information about that customer's account, or whether we are likely to first look up an account and then information about that account's owner. Assuming we decide that we will access customers first, then we will make our account record type a child of our customer record type. Diagrammed, our database model would look something like this:
![][2]
And an actual database might look like:
![][3]
By modeling our data this way, we are hewing close to the reality of how our data is stored. Each parent record includes pointers to its children, meaning that moving down our tree from the root node is efficient. (Actually, each parent basically stores just one pointer to the first of its children. The children in turn contain pointers to their siblings. This ensures that the size of a record does not vary with the number of children it has.) This efficiency can make data accesses very fast, provided that we are accessing our data in ways that we anticipated when we first structured our database. According to IBM, an IMS instance can process over 100,000 transactions a second, which is probably a large part of why IMS is still used, particularly at banks. But the downside is that we have lost a lot of flexibility. If we want to access our data in ways we did not anticipate, we will have a hard time.
To illustrate this, consider what might happen if we decide that we would like to access accounts before customers. Perhaps customers are calling in to update their addresses, and we would like them to uniquely identify themselves using their account numbers. So we want to use an account number to find an account, and then from there find the account's owner. But since all accesses start at the root of our tree, there's no way for us to get to an account efficiently without first deciding on a customer. To fix this problem, we could introduce a second tree or hierarchy starting with account records; these account records would then have customer records as children. This would let us access accounts and then customers efficiently. But it would involve duplicating information that we already have stored in our database—we would have two trees storing the same information in different orders. Another option would be to establish an index of accounts that could point us to the right account record given an account number. That would work too, but it would entail extra work during insert and update operations in the future.
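Here is a concrete sketch of the layout just described, with Python standing in for IMS and all field names invented; it shows the first-child/next-sibling pointers and the cost of an unanticipated account-first lookup:
```python
class Record:
    """A hierarchical record: the parent stores one pointer to its first
    child, and each child points to its next sibling, so a record's size
    does not vary with its number of children."""

    def __init__(self, fields):
        self.fields = fields
        self.first_child = None
        self.next_sibling = None

    def add_child(self, child):
        # Prepend to the sibling chain; the parent keeps a single pointer.
        child.next_sibling = self.first_child
        self.first_child = child

    def children(self):
        node = self.first_child
        while node is not None:
            yield node
            node = node.next_sibling


# Customers are root records; their accounts hang off them as children.
alice = Record({"first_name": "Alice", "city": "Albuquerque"})
alice.add_child(Record({"account_no": 42, "balance": 300}))
alice.add_child(Record({"account_no": 41, "balance": 1200}))
roots = [alice]

# Fast: the access path we anticipated (customer first, then accounts).
print([a.fields["account_no"] for a in alice.children()])

# Slow: with no customer in hand, finding an account means scanning
# every tree from its root.
def find_account(roots, number):
    for customer in roots:
        for account in customer.children():
            if account.fields["account_no"] == number:
                return customer, account
    return None

print(find_account(roots, 42))
```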
It was precisely this inflexibility and the problem of duplicated information that pushed E. F. Codd to propose the relational model. In his 1970 paper, “A Relational Model of Data for Large Shared Data Banks,” he states at the outset that he intends to present a model for data storage that can protect users from having to know anything about how their data is stored. Looked at one way, the hierarchical model is entirely an artifact of how the designers of IMS chose to store data. It is a bottom-up model, the implication of a physical reality. The relational model, on the other hand, is an abstract model based on relational algebra, and is top-down in that the data storage scheme can be anything provided it accommodates the model. The relational model's great advantage is that, just because you've made decisions that have caused the database to store your data in a particular way, you won't find yourself effectively unable to make certain queries.
All that said, the relational model is an abstraction, and we all know abstractions aren't free. Banks and large institutions have stuck with IMS partly because of the performance benefits, though it's hard to say if those benefits would be enough to keep them from switching to a modern database if they weren't also trying to avoid rewriting mission-critical legacy code. However, today's popular NoSQL databases demonstrate that there are people willing to drop the conveniences of the relational model in return for better performance. Something like MongoDB, which encourages its users to store data in a denormalized form, isn't all that different from IMS. If you choose to store some entity inside of another JSON record, then in effect you have created something like the IMS hierarchy, and you have constrained your ability to query for that data in the future. But perhaps that's a tradeoff you're willing to make. So, even if IMS hadn't predated E. F. Codd's relational model by several years, there are still reasons why IMS's creators might not have adopted the relational model wholesale.
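A sketch of such a denormalized document (invented fields again, written as a Python dict for continuity with the sketch above) makes the resemblance plain:
```python
# Embedding accounts inside the customer re-creates the IMS shape:
# customer-first queries are cheap, but finding account 42 across all
# customers requires a scan or a separately maintained index.
customer_doc = {
    "first_name": "Alice",
    "city": "Albuquerque",
    "accounts": [
        {"account_no": 41, "balance": 1200},
        {"account_no": 42, "balance": 300},
    ],
}
```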
Unfortunately, IMS isn't something that you can download and take for a spin on your own computer. First of all, IMS is not free, so you would have to buy it from IBM. But the bigger problem is that IMS only runs on IBM mainframes like the IBM z13. That's a shame, because it would be a joy to play around with IMS and get a sense for exactly how it differs from something like MySQL. But even without that opportunity, it's interesting to think about software systems that work in ways we don't expect or aren't used to. And it's especially interesting when those systems, alien as they are, turn out to undergird your local hospital, the entire financial sector, and even the federal government.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/10/07/the-most-important-database.html
Author: [Two-Bit History][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/PL/I
[2]: https://twobithistory.org/images/hierarchical-model.png
[3]: https://twobithistory.org/images/hierarchical-db.png
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml


@@ -1,110 +0,0 @@
The politics of the Linux desktop
============================================================
### If you're working in open source, why would you use anything but Linux as your main desktop?
![The politics of the Linux desktop](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_networks.png?itok=XasNXxKs "The politics of the Linux desktop")
Image by: opensource.com
At some point in 1997 or 1998—history does not record exactly when—I made the leap from Windows to the Linux desktop. I went through quite a few distributions, from Red Hat to SUSE to Slackware, then Debian, Debian Experimental, and (for a long time thereafter) Ubuntu. When I accepted a role at Red Hat, I moved to Fedora, and migrated both my kids (then 9 and 11) to Fedora as well.
For a few years, I kept Windows as a dual-boot option, and then realised that, if I was going to commit to Linux, then I ought to go for it properly. In losing Windows, I didn't miss much; there were a few games that I couldn't play, but it was around the time that the Civilization franchise was embracing Linux, so that kept me happy.
The move to Linux wasn't plain sailing, by any stretch of the imagination. If you wanted to use fairly new hardware in the early days, you had to first ensure that there were  _any_  drivers for Linux, then learn how to compile and install them. If they were not quite my friends, **lsmod** and **modprobe** became at least close companions. I taught myself to compile a kernel and tweak the options to make use of (sometimes disastrous) new, "EXPERIMENTAL" features as they came out. Early on, I learned the lesson that you should always keep at least one kernel in your [LILO][12] list that you were  _sure_  booted fully. I cursed NVidia and grew horrified by SCSI. I flirted with early journalling filesystem options and tried to work out whether the different preempt parameters made any noticeable difference to my user experience or not. I began to accept that printers would never print—and then they started to. I discovered that the Bluetooth stack suddenly started to connect to things.
Over the years, using Linux moved from being an uphill struggle to something that just worked. I moved my mother-in-law and then my father over to Linux so I could help administer their machines. And then I moved them off Linux so they could no longer ask me to help administer their machines.
It wasn't just at home, either: I decided that I would use Linux as my desktop for work, as well. I even made it a condition of employment for at least one role. Linux desktop support in the workplace caused different sets of problems. The first was the "well, you're on your own: we're not going to support you" email from IT support. VPNs were touch and go, but in the end, usually go.
The biggest hurdle was Microsoft Office, until I discovered [CrossOver][13], which I bought with my own money, and which allowed me to run company-issued copies of Word, PowerPoint, and the rest on my Linux desktop. Fonts were sometimes a problem, and one company I worked for required Microsoft Lync. For this, and for a few other applications, I would sometimes have to run a Windows virtual machine (VM) on my Linux desktop.  Was this a cop out?  Well, a little bit: but I've always tried to restrict my usage of this approach to the bare minimum.
### But why?
"Why?" colleagues would ask. "Why do you bother? Why not just run Windows?"
"Because I enjoy pain," was usually my initial answer, and then the more honest, "because of the principle of the thing."
So this is it: I believe in open source. We have a number of very, very good desktop-compatible distributions these days, and most of the time they just work. If you use well-known or supported hardware, they're likely to "just work" pretty much as well as the two obvious alternatives, Windows or Mac. And they just work because many people have put much time into using them, testing them, and improving them. So it's not a case of why wouldn't I use Windows or Mac, but why would I ever consider  _not_  using Linux? If, as I do, you believe in open source, and particularly if you work within the open source community or are employed by an open source organisation, I struggle to see why you would even consider not using Linux.
I've spoken to people about this (of course I have), and here are the most common reasons—or excuses—I've heard.
1. I'm more productive on Windows/Mac.
2. I can't use app X on Linux, and I need it for my job.
3. I can't game on Linux.
4. It's what our customers use, so why would we alienate them?
5. "Open" means choice, and I prefer a proprietary desktop, so I use that.
Interestingly, I don't hear "Linux isn't good enough" much anymore, because it's manifestly untrue, and I can show that my own experience—and that of many colleagues—belies that.
### Rebuttals
Let's go through those answers and rebut them.
1. **I'm more productive on Windows/Mac.** I'm sure you are. Anyone is more productive when they're using a platform or a system they're used to. If you believe in open source, then I contend that you should take the time to learn how to use a Linux desktop and the associated applications. If you're working for an open source organisation, they'll probably help you along, and you're unlikely to find you're much less productive in the long term. And, you know what? If you are less productive in the long term, then get in touch with the maintainers of the apps that are causing you to be less productive and help improve them. You don't have to be a coder. You could submit bug reports, suggest improvements, write documentation, or just test the most recent versions of the software. And then you're helping yourself and the rest of the community. Welcome to open source.
2. **I can't use app X on Linux, and I need it for my job.** This may be true. But it's probably less true than you think. The people most often saying this with conviction are audio, video, or graphics experts. It was certainly the case for many years that Linux lagged behind in those areas, but have a look and see what the other options are. And try them, even if they're not perfect, and see how you can improve them. Alternatively, use a VM for that particular app.
3. **I can't game on Linux.** Well, you probably can, but not all the games that you enjoy. This, to be clear, shouldn't really be an excuse not to use Linux for most of what you do. It might be a reason to keep a dual-boot system or to do what I did (after much soul-searching) and buy a games console (because Elite Dangerous really  _doesn't_  work on Linux, more's the pity). It should also be an excuse to lobby for your favourite games to be ported to Linux.
4. **It's what our customers use, so why would we alienate them?** I don't get this one. Does Microsoft ban visitors with Macs from their buildings? Does Apple ban Windows users? Does Google allow non-Android phones through their doors? You don't kowtow to the majority when you're the little guy or gal; if you're working in open source, surely you should be proud of that. You're not going to alienate your customer—you're really not.
1. **"Open" means choice, and I prefer a proprietary desktop, so I use that.**Being open certainly does mean you have a choice. You made that choice by working in open source. For many, including me, that's a moral and philosophical choice. Saying you embrace open source, but rejecting it in practice seems mealy mouthed, even insulting. Using openness to justify your choice is the wrong approach. Saying "I prefer a proprietary desktop, and company policy allows me to do so" is better. I don't agree with your decision, but at least you're not using the principle of openness to justify it.
Is using open source easy? Not always. But it's getting easier. I think that we should stand up for what we believe in, and if you're reading [Opensource.com][14], then you probably believe in open source. And that, I believe, means that you should run Linux as your main desktop.
_Note: I welcome comments, and would love to hear different points of view. I would ask that comments don't just list application X or application Y as not working on Linux. I concede that not all apps do. I'm more interested in justifications that I haven't covered above, or (perceived) flaws in my argument. Oh, and support for it, of course._
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/2017-05-10_0129.jpg?itok=Uh-eKFhx)][15]
Mike Bursell - I've been in and around Open Source since around 1997, and have been running (GNU) Linux as my main desktop at home and work since then: [not always easy][7]...  I'm a security bod and architect, and am currently employed as Chief Security Architect for Red Hat.  I have a blog - "[Alice, Eve & Bob][8]" - where I write (sometimes rather parenthetically) about security.  I live in the UK and... [more about Mike Bursell][9]
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/politics-linux-desktop
Author: [Mike Bursell][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/mikecamel
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/article/17/11/politics-linux-desktop?rate=do69ixoNzK0yg3jzFk0bc6ZOBsIUcqTYv6FwqaVvzUA
[7]:https://opensource.com/article/17/11/politics-linux-desktop
[8]:https://aliceevebob.com/
[9]:https://opensource.com/users/mikecamel
[10]:https://opensource.com/users/mikecamel
[11]:https://opensource.com/user/105961/feed
[12]:https://en.wikipedia.org/wiki/LILO_(boot_loader)
[13]:https://en.wikipedia.org/wiki/CrossOver_(software)
[14]:https://opensource.com/
[15]:https://opensource.com/users/mikecamel
[16]:https://opensource.com/users/mikecamel
[17]:https://opensource.com/users/mikecamel
[18]:https://opensource.com/article/17/11/politics-linux-desktop#comments
[19]:https://opensource.com/tags/linux


@@ -1,44 +0,0 @@
Important Papers: Codd and the Relational Model
======
It's hard to believe today, but the relational database was once the cool new kid on the block. In 2017, the relational model competes with all sorts of cutting-edge NoSQL technologies that make relational database systems seem old-fashioned and boring. Yet, 50 years ago, none of the dominant database systems were relational. Nobody had thought to structure their data that way. When the relational model did come along, it was a radical new idea that revolutionized the database world and spawned a multi-billion dollar industry.
The relational model was introduced in 1970. Edgar F. Codd, a researcher at IBM, published a [paper][1] called “A Relational Model of Data for Large Shared Data Banks.” The paper was a rewrite of a paper he had circulated internally at IBM a year earlier. The paper is unassuming; Codd does not announce in his abstract that he has discovered a brilliant new approach to storing data. He only claims to have employed a novel tool (the mathematical notion of a “relation”) to address some of the inadequacies of the prevailing database models.
In 1970, there were two schools of thought about how to structure a database: the hierarchical model and the network model. The hierarchical model was used by IBM's Information Management System (IMS), the dominant database system at the time. The network model had been specified by a standards committee called CODASYL (which also—random tidbit—specified COBOL) and implemented by several other database system vendors. The two models were not really that different; both could be called “navigational” models. They persisted tree or graph data structures to disk using pointers to preserve the links between the data. Retrieving a record stored toward the bottom of the tree would involve first navigating through all of its ancestor records. These databases were fast (IMS is still used by many financial institutions partly for this reason, see [this excellent blog post][2]) but inflexible. Woe unto those database administrators who suddenly found themselves needing to query records from the bottom of the tree without having an obvious place to start at the top.
Codd saw this inflexibility as a symptom of a larger problem. Programs using a hierarchical or network database had to know about how the stored data was structured. Programs had to know this because they were responsible for navigating down this structure to find the information they needed. This was so true that when Charles Bachman, a major pioneer of the network model, received a Turing Award for his work in 1973, he gave a speech titled “[The Programmer as Navigator][3].” Of course, if programs were saddled with this responsibility, then they would immediately break if the structure of the database ever changed. In the introduction to his 1970 paper, Codd motivates the search for a better model by arguing that we need “data independence,” which he defines as “the independence of application programs and terminal activities from growth in data types and changes in data representation.” The relational model, he argues, “appears to be superior in several respects to the graph or network model presently in vogue,” partly because, among other benefits, the relational model “provides a means of describing data with its natural structure only.” By this he meant that programs could safely ignore any artificial structures (like trees) imposed upon the data for storage and retrieval purposes only.
To further illustrate the problem with the navigational models, Codd devotes the first section of his paper to an example data set involving machine parts and assembly projects. This dataset, he says, could be represented in existing systems in at least five different ways. Any program that is developed assuming one of the five structures will fail when run against at least three of the other structures. The program could instead try to figure out ahead of time which of the structures it might be dealing with, but it would be difficult to do so in this specific case and practically impossible in the general case. So, as long as the program needs to know about how the data is structured, we cannot switch to an alternative structure without breaking the program. This is a real bummer because (and this is from the abstract) “changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.”
Codd then introduces his relational model. This model would be refined and expanded in subsequent papers: In 1971, Codd wrote about ALPHA, a SQL-like query language he created; in another 1971 paper, he introduced the first three normal forms we know and love today; and in 1972, he further developed relational algebra and relational calculus, the mathematically rigorous underpinnings of the relational model. But Codd's 1970 paper contains the kernel of the relational idea:
> The term relation is used here in its accepted mathematical sense. Given sets $S_1, S_2, \dots, S_n$ (not necessarily distinct), $R$ is a relation on these $n$ sets if it is a set of $n$-tuples each of which has its first element from $S_1$, its second element from $S_2$, and so on. We shall refer to $S_j$ as the $j$th domain of $R$. As defined above, $R$ is said to have degree $n$. Relations of degree 1 are often called unary, degree 2 binary, degree 3 ternary, and degree $n$ $n$-ary.
Today, we call a relation a table, and a domain an attribute or a column. The word “table” actually appears nowhere in the paper, though Codd's visual representations of relations (which he calls “arrays”) do resemble tables. Codd defines several more terms, some of which we continue to use and others we have replaced. He explains primary and foreign keys, as well as what he calls the “active domain,” which is the set of all distinct values that actually appear in a given domain or column. He then spends some time distinguishing between a “simple” and a “nonsimple” domain. A simple domain contains “atomic” or “nondecomposable” values, like integers. A nonsimple domain has relations as elements. The example Codd gives here is that of an employee with a salary history. The salary history is not one salary but a collection of salaries each associated with a date. So a salary history cannot be represented by a single number or string.
It's not obvious how one could store a nonsimple domain in a multi-dimensional array, AKA a table. The temptation might be to denote the nonsimple relationship using some kind of pointer, but then we would be repeating the mistakes of the navigational models. Instead, Codd introduces normalization, which at least in the 1970 paper involves nothing more than turning nonsimple domains into simple ones. This is done by expanding the child relation so that it includes the primary key of the parent. Each tuple of the child relation references its parent using simple domains, eliminating the need for a nonsimple domain in the parent. Normalization means no pointers, sidestepping all the problems they cause in the navigational models.
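A sketch of Codd's salary-history example (with invented values) shows the transformation: once each child tuple carries its parent's primary key, the nonsimple domain disappears.
```python
# Unnormalized: salary_history is a nonsimple domain whose elements are
# themselves (date, salary) pairs -- a relation nested inside a relation.
employee_unnormalized = {
    "emp_no": 7,
    "name": "Hopper",
    "salary_history": [("1970-01-01", 9000), ("1971-01-01", 9500)],
}

# Normalized: two relations with simple domains only. Each salary tuple
# references its parent through the primary key emp_no -- no pointers,
# no nesting.
employee = [(7, "Hopper")]              # (emp_no, name)
salary_history = [
    (7, "1970-01-01", 9000),            # (emp_no, date, salary)
    (7, "1971-01-01", 9500),
]
```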
At this point, anyone reading Codd's paper would have several questions, such as “Okay, how would I actually query such a system?” Codd mentions the possibility of creating a universal sublanguage for querying relational databases from other programs, but declines to define such a language in this particular paper. He does explain, in mathematical terms, many of the fundamental operations such a language would have to support, like joins, “projection” (`SELECT` in SQL), and “restriction” (`WHERE`). The amazing thing about Codd's 1970 paper is that, really, all the ideas are there—we've been writing `SELECT` statements and joins for almost half a century now.
Codd wraps up the paper by discussing ways in which a normalized relational database, on top of its other benefits, can reduce redundancy and improve consistency in data storage. Altogether, the paper is only 11 pages long and not that difficult of a read. I encourage you to look through it yourself. It would be another ten years before Codd's ideas were properly implemented in a functioning system, but, when they finally were, those systems were so obviously better than previous systems that they took the world by storm.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/12/29/codd-relational-model.html
Author: [Two-Bit History][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://cs.uwaterloo.ca/~david/cs848s14/codd-relational.pdf
[2]: https://twobithistory.org/2017/10/07/the-most-important-database.html
[3]: https://pdfs.semanticscholar.org/f371/d196bf0e7b43df6dcbbc44de461925a21709.pdf
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml


@@ -1,94 +0,0 @@
6 pivotal moments in open source history
============================================================
### Here's how open source developed from a printer jam solution at MIT to a major development model in the tech industry today.
![6 pivotal moments in open source history](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/welcome-open-sign-door-osdc-lead.png?itok=i9jCnaiu "6 pivotal moments in open source history")
Image credits: [Alan Levine][4], [CC0 1.0][5]
Open source has taken a prominent role in the IT industry today. It is everywhere from the smallest embedded systems to the biggest supercomputer, from the phone in your pocket to the software running the websites and infrastructure of the companies we engage with every day. Let's explore how we got here and discuss key moments from the past 40 years that have paved a path to the current day.
### 1. RMS and the printer
In the late 1970s, [Richard M. Stallman (RMS)][6] was a staff programmer at MIT. His department, like those at many universities at the time, shared a PDP-10 computer and a single printer. One problem they encountered was that paper would regularly jam in the printer, causing a string of print jobs to pile up in a queue until someone fixed the jam. To get around this problem, the MIT staff came up with a nice social hack: They wrote code for the printer driver so that when it jammed, a message would be sent to everyone who was currently waiting for a print job: "The printer is jammed, please fix it." This way, it was never stuck for long.
In 1980, the lab accepted a donation of a brand-new laser printer. When Stallman asked for the source code for the printer driver, however, so he could reimplement the social hack to have the system notify users on a paper jam, he was told that this was proprietary information. He heard of a researcher in a different university who had the source code for a research project, and when the opportunity arose, he asked this colleague to share it—and was shocked when they refused. They had signed an NDA, which Stallman took as a betrayal of the hacker culture.
The late '70s and early '80s represented an era where software, which had traditionally been given away with the hardware in source code form, was seen to be valuable. Increasingly, MIT researchers were starting software companies, and selling licenses to the software was key to their business models. NDAs and proprietary software licenses became the norms, and the best programmers were hired from universities like MIT to work on private development projects where they could no longer share or collaborate.
As a reaction to this, Stallman resolved that he would create a complete operating system that would not deprive users of the freedom to understand how it worked, and would allow them to make changes if they wished. It was the birth of the free software movement.
### 2. Creation of GNU and the advent of free software
By late 1983, Stallman was ready to announce his project and recruit supporters and helpers. In September 1983, [he announced the creation of the GNU project][7] (GNU stands for GNU's Not Unix—a recursive acronym). The goal of the project was to clone the Unix operating system to create a system that would give complete freedom to users.
In January 1984, he started working full-time on the project, first creating a compiler system (GCC) and various operating system utilities. Early in 1985, he published "[The GNU Manifesto][8]," which was a call to arms for programmers to join the effort, and launched the Free Software Foundation in order to accept donations to support the work. This document is the founding charter of the free software movement.
### 3. The writing of the GPL
Until 1989, software written and released by the [Free Software Foundation][9] and RMS did not have a single license. Emacs was released under the Emacs license, GCC was released under the GCC license, and so on; however, after a company called Unipress forced Stallman to stop distributing copies of an Emacs implementation they had acquired from James Gosling (of Java fame), he felt that a license to secure user freedoms was important.
The first version of the GNU General Public License was released in 1989, and it encapsulated the values of copyleft (a play on words—what is the opposite of copyright?): You may use, copy, distribute, and modify the software covered by the license, but if you make changes, you must share the modified source code alongside the modified binaries. This simple requirement to share modified software, in combination with the advent of the internet in the 1990s, is what enabled the decentralized, collaborative development model of the free software movement to flourish.
### 4\. "The Cathedral and the Bazaar"
By the mid-1990s, Linux was starting to take off, and free software had become more mainstream—or perhaps "less fringe" would be more accurate. The Linux kernel was being developed in a way that was completely different from anything people had seen before, and it was very successful. Out of the chaos of the kernel community came order, and a fast-moving project.
In 1997, Eric S. Raymond published the seminal essay, "[The Cathedral and the Bazaar][10]," comparing and contrasting the development methodologies and social structure of GCC and the Linux kernel and talking about his own experiences with a "bazaar" development model with the Fetchmail project. Many of the principles that Raymond describes in this essay would later become central to agile development and the DevOps movement—"release early, release often," refactoring of code, and treating users as co-developers are all fundamental to modern software development.
This essay has been credited with bringing free software to a broader audience, and with convincing executives at software companies at the time that releasing their software under a free software license was the right thing to do. Raymond went on to be instrumental in the coining of the term "open source" and the creation of the Open Source Initiative.
"The Cathedral and the Bazaar" was credited as a key document in the 1998 release of the source code for the Netscape web browser Mozilla. At the time, this was the first major release of an existing, widely used piece of desktop software as free software, which brought it further into the public eye.
### 5. Open source
As far back as 1985, the ambiguous nature of the word "free", used to describe software freedom, was identified as problematic by RMS himself. In the GNU Manifesto, he identified "give away" and "for free" as terms that confused zero price and user freedom. "Free as in freedom," "Speech not beer," and similar mantras were common when free software hit a mainstream audience in the late 1990s, but a number of prominent community figures argued that a term was needed that made the concept more accessible to the general public.
After Netscape released the source code for Mozilla in 1998 (see #4), a group of people, including Eric Raymond, Bruce Perens, Michael Tiemann, Jon "Maddog" Hall, and many of the leading lights of the free software world, gathered in Palo Alto to discuss an alternative term. The term "open source" was [coined by Christine Peterson][11] to describe free software, and the Open Source Initiative was later founded by Bruce Perens and Eric Raymond. The fundamental difference with proprietary software, they argued, was the availability of the source code, and so this was what should be put forward first in the branding.
Later that year, at a summit organized by Tim O'Reilly, an extended group of some of the most influential people in the free software world at the time gathered to debate various new brands for free software. In the end, "open source" edged out "sourceware," and open source began to be adopted by many projects in the community.
There was some disagreement, however. Richard Stallman and the Free Software Foundation continued to champion the term "free software," because to them, the fundamental difference with proprietary software was user freedom, and the availability of source code was just a means to that end. Stallman argued that removing the focus on freedom would lead to a future where source code would be available, but the user of the software would not be able to avail of the freedom to modify the software. With the advent of web-deployed software-as-a-service and open source firmware embedded in devices, the battle continues to be waged today.
### 6. Corporate investment in open source—VA Linux, Red Hat, IBM
In the late 1990s, a series of high-profile events led to a huge increase in the professionalization of free and open source software. Among these, the highest-profile events were the IPOs of VA Linux and Red Hat in 1999. Both companies had massive gains in share price on their opening days as publicly traded companies, proving that open source was now going commercial and mainstream.
Also in 1999, IBM announced that they were supporting Linux by investing $1 billion in its development, making it less risky to traditional enterprise users. The following year, Sun Microsystems released the source code to its cross-platform office suite, StarOffice, and created the [OpenOffice.org][12] project.
The combination of massive Silicon Valley funding for open source projects, Wall Street's attention to young companies built around open source software, and the market credibility that tech giants like IBM and Sun Microsystems brought drove the massive adoption of open source, and the embrace of the open development model that helped it thrive has led to the dominance of Linux and open source in the tech industry today.
_Which pivotal moments would you add to the list? Let us know in the comments._
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-11423-8ecef7f357341aaa7aee8b43e9b530c9.png?itok=n1snBFq3)][13] Dave Neary - Dave Neary is a member of the Open Source and Standards team at Red Hat, helping make Open Source projects important to Red Hat be successful. Dave has been around the free and open source software world, wearing many different hats, since sending his first patch to the GIMP in 1999.[More about me][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/pivotal-moments-history-open-source
Author: [Dave Neary][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/dneary
[1]:https://opensource.com/article/18/2/pivotal-moments-history-open-source?rate=gsG-JrjfROWACP7i9KUoqmH14JDff8-31C2IlNPPyu8
[2]:https://opensource.com/users/dneary
[3]:https://opensource.com/user/16681/feed
[4]:https://www.flickr.com/photos/cogdog/6476689463/in/photolist-aSjJ8H-qHAvo4-54QttY-ofm5ZJ-9NnUjX-tFxS7Y-bPPjtH-hPYow-bCndCk-6NpFvF-5yQ1xv-7EWMXZ-48RAjB-5EzYo3-qAFAdk-9gGty4-a2BBgY-bJsTcF-pWXATc-6EBTmq-SkBnSJ-57QJco-ddn815-cqt5qG-ddmYSc-pkYxRz-awf3n2-Rvnoxa-iEMfeG-bVfq5-jXy74D-meCC1v-qx22rx-fMScsJ-ci1435-ie8P5-oUSXhp-xJSm9-bHgApk-mX7ggz-bpsxd7-8ukud7-aEDmBj-qWkytq-ofwhdM-b7zSeD-ddn5G7-ddn5gb-qCxnB2-S74vsk
[5]:https://creativecommons.org/publicdomain/zero/1.0/
[6]:https://en.wikipedia.org/wiki/Richard_Stallman
[7]:https://groups.google.com/forum/#!original/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
[8]:https://www.gnu.org/gnu/manifesto.en.html
[9]:https://www.fsf.org/
[10]:https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar
[11]:https://opensource.com/article/18/2/coining-term-open-source-software
[12]:http://www.openoffice.org/
[13]:https://opensource.com/users/dneary
[14]:https://opensource.com/users/dneary
[15]:https://opensource.com/users/dneary
[16]:https://opensource.com/article/18/2/pivotal-moments-history-open-source#comments
[17]:https://opensource.com/tags/licensing


@@ -1,138 +0,0 @@
What Did Ada Lovelace's Program Actually Do?
======
The story of Microsoft's founding is one of the most famous episodes in computing history. In 1975, Paul Allen flew out to Albuquerque to demonstrate the BASIC interpreter that he and Bill Gates had written for the Altair microcomputer. Because neither of them had a working Altair, Allen and Gates tested their interpreter using an emulator that they wrote and ran on Harvard's computer system. The emulator was based on nothing more than the published specifications for the Intel 8080 processor. When Allen finally ran their interpreter on a real Altair—in front of the person he and Gates hoped would buy their software—he had no idea if it would work. But it did. The next month, Allen and Gates officially founded their new company.
Over a century before Allen and Gates wrote their BASIC interpreter, Ada Lovelace wrote and published a computer program. She, too, wrote a program for a computer that had only been described to her. But her program, unlike the Microsoft BASIC interpreter, was never run, because the computer she was targeting was never built.
Lovelace's program is often called the world's first computer program. Not everyone agrees that it should be called that. Lovelace's legacy, it turns out, is one of computing history's most hotly debated subjects. Walter Isaacson has written that the dispute about the extent and merit of her contributions constitutes a “minor academic specialty.” Inevitably, the fact that Lovelace was a woman has made this dispute a charged one. Historians have cited all kinds of primary evidence to argue that the credit given to Lovelace is either appropriate or undeserved. But they seem to spend less time explaining the technical details of her published writing, which is unfortunate, because the technical details are the most fascinating part of the story. Who wouldn't want to know exactly how a program written in 1843 was supposed to work?
In fairness, Lovelace's program is not easy to explain to the layperson without some hand-waving. It's the intricacies of her program, though, that make it so remarkable. Whether or not she ought to be known as “the first programmer,” her program was specified with a degree of rigor that far surpassed anything that came before. She thought carefully about how operations could be organized into groups that could be repeated, thereby inventing the loop. She realized how important it was to track the state of variables as they changed, introducing a notation to illustrate those changes. As a programmer myself, I'm startled to see how much of what Lovelace was doing resembles the experience of writing software today.
So let's take a closer look at Lovelace's program. She designed it to calculate the Bernoulli numbers. To understand what those are, we have to go back a couple millennia to the genesis of one of mathematics' oldest problems.
### Sums of Powers
The Pythagoreans lived on the shores of the Mediterranean and worshiped numbers. One of their pastimes was making triangles out of pebbles.
![][1]
One pebble followed by a row of two pebbles makes a triangle containing three pebbles. Add another row of three pebbles and you get a triangle containing six pebbles. You can continue like this, each time adding a row with one more pebble in it than the previous row. A triangle with six rows contains 21 pebbles. But how many pebbles does a triangle with 423 rows contain?
What the Pythagoreans were looking for was a way to calculate the following without doing all the addition:
1 + 2 + 3 + … + n
They eventually realized that, if you place two triangles of the same size up against each other so that they form a rectangle, you can find the area of the rectangle and divide by two to get the number of pebbles in each of the triangles:
![][2]
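The rectangle trick gives the familiar formula n(n + 1)/2 for the number of pebbles in a triangle with n rows. Here is a minimal C sketch (in the same language as the translation of Lovelace's program discussed later in this post) that checks the formula against brute-force addition for the 423-row triangle; both approaches count 89,676 pebbles:

```c
#include <stdio.h>

int main(void)
{
    long rows = 423;
    long sum = 0;

    /* Count the pebbles the slow way, one row at a time. */
    for (long i = 1; i <= rows; i++)
        sum += i;

    /* The rectangle trick: two identical triangles form a
     * rows x (rows + 1) rectangle, so halve its area. */
    long formula = rows * (rows + 1) / 2;

    printf("by addition: %ld\nby formula:  %ld\n", sum, formula);
    return 0;
}
```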
Archimedes later explored a similar problem. He was interested in the following series:
1² + 2² + 3² + … + n²
You might visualize this series by imagining a stack of progressively larger squares (made out of tiny cubes), one on top of the other, forming a pyramid. Archimedes wanted to know if there was an easy way to tell how many cubes would be needed to construct a pyramid with, say, 423 levels. He recorded a solution that also permits a geometrical interpretation.
Three pyramids can be fit together to form a rectangular prism with a tiny, one-cube-high extrusion at one end. That little extrusion happens to be a triangle that obeys the same rules that the Pythagoreans used to make their pebble triangles. ([This video][3] might be a more helpful explanation of what I mean.) So the volume of the whole shape is given by the following equation:
3(1² + 2² + … + n²) = n²(n + 1) + (1 + 2 + … + n)
By substituting the Pythagorean equation for the sum of the first n integers and doing some algebra, you get this:
1² + 2² + … + n² = n(n + 1)(2n + 1)/6
In 499, the Indian mathematician and astronomer Aryabhata published a work known as the Aryabhatiya, which included a formula for calculating the sum of cubes:
1³ + 2³ + … + n³ = (1 + 2 + … + n)² = (n(n + 1)/2)²
A formula for the sum of the first n positive integers raised to the fourth power wasn't published for another 500 years.
You might be wondering at this point if there is a general method for finding the sum of the first n integers raised to the kth power. Mathematicians were wondering too. Johann Faulhaber, a German mathematician and slightly kooky numerologist, was able to calculate formulas for sums of integers up to the 17th power, which he published in 1631. But this may have taken him years and he did not state a general solution. Blaise Pascal finally outlined a general method in 1665, though it depended on first knowing how to calculate the sum of integers raised to every lesser power. To calculate the sum of the first n positive integers raised to the sixth power, for example, you would first have to know how to calculate the sum of the first n positive integers raised to the fifth power.
A more practical general solution was stated in the posthumously published work of Swiss mathematician Jakob Bernoulli, who died in 1705. Bernoulli began by deriving the formulas for calculating the sums of the first n positive integers to the first, second, and third powers. These he gave in polynomial form, so they looked like the below:
1 + 2 + … + n = n²/2 + n/2
1² + 2² + … + n² = n³/3 + n²/2 + n/6
1³ + 2³ + … + n³ = n⁴/4 + n³/2 + n²/4
Using Pascal's Triangle, Bernoulli realized that these polynomials followed a predictable pattern. Essentially, Bernoulli broke the coefficients of each term down into two factors, one of which he could determine using Pascal's Triangle and the other of which he could derive from the interesting property that all the coefficients in the polynomial seemed to always add to one. Figuring out the exponent that should be attached to each term was no problem, because that also followed a predictable pattern. The factor of each coefficient that had to be calculated using the sums-to-one rule formed a sequence that became known as the Bernoulli numbers.
Bernoulli's discovery did not mean that it was now trivial to calculate the sum of the first n positive integers to any given power. In order to calculate the sum of the first n positive integers raised to the kth power, you would need to know every Bernoulli number up to the kth Bernoulli number. Each Bernoulli number could only be calculated if the previous Bernoulli numbers were known. But calculating a long series of Bernoulli numbers was significantly easier than deriving each sum of powers formula in turn, so Bernoulli's discovery was a big advance for mathematics.
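To see the start of the sequence, here is a short C sketch. It uses the standard modern recurrence over binomial coefficients rather than Bernoulli's own Pascal's-Triangle reasoning, so it reproduces his numbers without reconstructing his method; note how each number depends on all the ones before it:

```c
#include <stdio.h>

/* Compute B_0 through B_8 using the standard modern recurrence
 *   sum over k from 0 to m of C(m+1, k) * B_k = 0   (for m >= 1),
 * solved for B_m. Each Bernoulli number requires all earlier ones. */
int main(void)
{
    double b[9];
    b[0] = 1.0;

    for (int m = 1; m < 9; m++) {
        double c = 1.0;    /* running binomial coefficient C(m+1, k) */
        double sum = 0.0;
        for (int k = 0; k < m; k++) {
            sum += c * b[k];
            c = c * (m + 1 - k) / (k + 1);
        }
        b[m] = -sum / c;   /* here c has become C(m+1, m) = m + 1 */
    }

    /* b[8] = -1/30 is the number Lovelace's program computes (her B7). */
    for (int m = 0; m < 9; m++)
        printf("B_%d = %+.6f\n", m, b[m]);
    return 0;
}
```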
### Babbage
Charles Babbage was born in 1791, nearly a century after Bernoulli died. I've always had some vague idea that Babbage designed but did not build a mechanical computer. But I've never entirely understood how that computer was supposed to work. The basic ideas, as it happens, are not that difficult to grasp, which is good news. Lovelace's program was designed to run on one of Babbage's machines, so we need to take another quick detour here to talk about how those machines worked.
Babbage designed two separate mechanical computing machines. His first machine was called the Difference Engine. Before the invention of the pocket calculator, people relied on logarithmic tables to calculate the product of large numbers. (There is a good [Numberphile video][4] on how this was done.) Large logarithmic tables are not difficult to create, at least conceptually, but the sheer number of calculations that need to be done in order to create them meant that in Babbage's time they often contained errors. Babbage, frustrated by this, sought to create a machine that could tabulate logarithms mechanically and therefore without error.
The Difference Engine was not a computer, because all it did was add and subtract. It took advantage of a method devised by the French mathematician Gaspard de Prony that broke the process of tabulating logarithms down into small steps. These small steps involved only addition and subtraction, meaning that a small army of people without any special mathematical aptitude or training could be employed to produce a table. De Prony's method, known as the method of divided differences, could be used to tabulate any polynomial. Polynomials, in turn, could be used to approximate logarithmic and trigonometric functions.
To get a sense of how this process worked, consider the following simple polynomial function:
y = x² + 1
The method of divided differences involves finding the difference between each successive value of y for different values of x. The differences between these differences are then found, and possibly the differences between those next differences themselves, until a constant difference appears. These differences can then be used to get the next value of the polynomial simply by adding.
Because the above polynomial is only a second-degree polynomial, we are able to find the constant difference after only two columns of differences:
| x | y  | Diff 1 | Diff 2 |
|---|----|--------|--------|
| 1 | 2  |        |        |
| 2 | 5  | 3      |        |
| 3 | 10 | 5      | 2      |
| 4 | 17 | 7      | 2      |
| 5 | ?  | ?      | 2      |
| … | …  | …      | …      |
Now, since we know that the constant difference is 2, we can find the value of y when x is 5 through addition only. If we add 2 to 7, the last entry in the “Diff 1” column, we get 9. If we add 9 to 17, the last entry in the y column, we get 26, our answer.
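Here is a minimal C sketch of the same process, seeded with the bottom rows of the table above; each new value of y falls out of two additions, exactly the kind of work a column of gears can do:

```c
#include <stdio.h>

/* Tabulate y = x^2 + 1 by the method of divided differences.
 * Once the columns are seeded, only addition is needed. */
int main(void)
{
    long y = 17;          /* y at x = 4, the last row computed by hand */
    long diff1 = 7;       /* last first difference (17 - 10)           */
    const long diff2 = 2; /* the constant second difference            */

    for (long x = 5; x <= 10; x++) {
        diff1 += diff2;   /* next first difference */
        y += diff1;       /* next value of y       */
        printf("x = %ld, y = %ld\n", x, y);
    }
    return 0;
}
```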
Babbage's Difference Engine had, for each difference column in a table like the one above, a physical column of gears. Each gear was a decimal digit and one whole column was a decimal number. The Difference Engine had eight columns of gears, so it could tabulate a polynomial up to the seventh degree. The columns were initially set with values matching an early row in the difference table, worked out ahead of time. A human operator would then turn a crank shaft, causing the constant difference to propagate through the machine as the value stored on each column was added to the next.
Babbage was able to build a small section of the Difference Engine and use it to demonstrate his ideas at parties. But even after spending an amount of public money equal to the cost of two large warships, he never built the entire machine. Babbage could not find anyone in the early 1800s that could make the number of gears he needed with sufficient accuracy. A working Difference Engine would not be built until the 1990s, after the advent of precision machining. There is [a great video on YouTube][5] demonstrating a working Difference Engine on loan to the Computer History Museum in Mountain View, which is worth watching even just to listen to the marvelous sounds the machine makes while it runs.
Babbage eventually lost interest in the Difference Engine when he realized that a much more powerful and flexible machine could be built. His Analytical Engine was the machine that we know today as Babbage's mechanical computer. The Analytical Engine was based on the same columns of gears used in the Difference Engine, but whereas the Difference Engine only had eight columns, the Analytical Engine was supposed to have many hundreds more. The Analytical Engine could be programmed using punched cards like a Jacquard Loom and could multiply and divide as well as add and subtract. In order to perform one of these operations, a section of the machine called the "mill" would rearrange itself into the appropriate configuration, read the operands off of other columns used for data storage, and then write the result back to another column.
Babbage called his new machine the Analytical Engine because it was powerful enough to do something resembling mathematical analysis. The Difference Engine could tabulate a polynomial, but the Analytical Engine would be able to calculate, for example, the coefficients of the polynomial expansion of another expression. It was an amazing machine, but the British government wisely declined to fund its construction. So Babbage went abroad to Italy to try to drum up support for his idea.
### Notes by The Translator
In Turin, Babbage met Italian engineer and future prime minister Luigi Menabrea. He persuaded Menabrea to write an outline of what the Analytical Engine could accomplish. In 1842, Menabrea published a paper on the topic in French. The following year, Lovelace published a translation of Menabrea's paper into English.
Lovelace, then known as Ada Byron, first met Babbage at a party in 1833, when she was 17 and he was 41. Lovelace was fascinated with Babbage's Difference Engine. She could also understand how it worked, because she had been extensively tutored in mathematics throughout her childhood. Her mother, Annabella Milbanke, had decided that a solid grounding in mathematics would ward off the wild, romantic sensibility that possessed Lovelace's father, Lord Byron, the famous poet. After meeting in 1833, Lovelace and Babbage remained a part of the same social circle and wrote to each other frequently.
Ada Byron married William King in 1835. King later became the Earl of Lovelace, making Ada the Countess of Lovelace. Even after having three children, she continued her education in mathematics, employing Augustus de Morgan, who discovered De Morgan's laws, as her tutor. Lovelace saw the potential of Babbage's Analytical Engine immediately and was eager to work with him to promote the idea. A friend suggested that she translate Menabrea's paper for an English audience.
Menabrea's paper gave a brief overview of how the Difference Engine worked, then showed how the Analytical Engine would be a far superior machine. The Analytical Engine would be so powerful that it could "form the product of two numbers, each containing twenty figures, in *three minutes*" (emphasis in the original). Menabrea gave further examples of the machine's capabilities, demonstrating how it could solve a simple system of linear equations and expand the product of two binomial expressions. In both cases, Menabrea provided what Lovelace called "diagrams of development," which listed the sequence of operations that would need to be performed to calculate the correct answer. These were programs in the same sense that Lovelace's own program was a program, and they were originally published the year before. But as we will see, Menabrea's programs were only simple examples of what was possible. All of them were trivial in the sense that they did not require any kind of branching or looping.
Lovelace appended a series of notes to her translation of Menabrea's paper that together ran much longer than the original work. It was here that she made her major contributions to computing. In Note A, which Lovelace attached to Menabrea's initial description of the Analytical Engine, Lovelace explained at some length and often in lyrical language the promise of a machine that could perform arbitrary mathematical operations. She foresaw that a machine like the Analytical Engine wasn't just limited to numbers and could in fact act on any objects "whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine." She added that the machine might one day, for example, compose music. This insight was all the more remarkable given that Menabrea saw the Analytical Engine primarily as a tool for automating "long and arid computation," which would free up the intellectual capacities of brilliant scientists for more advanced thinking. The miraculous foresight that Lovelace demonstrated in Note A is one major reason that she is celebrated today.
The other famous note is Note G. Lovelace begins Note G by arguing that, despite its impressive powers, the Analytical Engine cannot really be said to "think." This part of Note G is what Alan Turing would later refer to as "Lady Lovelace's Objection." Nevertheless, Lovelace continues, the machine can do extraordinary things. To illustrate its ability to handle even more complex problems, Lovelace provides her program calculating the Bernoulli numbers.
The full program, in the expanded "diagram of development" format that Lovelace explains in Note D, can be seen [here][6]. The program is essentially a list of operations, specified using the usual mathematical symbols. It doesn't appear that Babbage or Lovelace got as far as developing anything like a set of op codes for the Analytical Engine.
Though Lovelace was describing a method for computing the entire sequence of Bernoulli numbers up to some limit, the program she provided only illustrated one step of that process. Her program calculated a number that she called B7, which modern mathematicians know as the eighth Bernoulli number. Her program thus sought to solve the following equation:
0 = A0 + B1A1 + B3A3 + B5A5 + B7
In the above, each term represents a coefficient in the polynomial formula for the sum of integers to a particular power. Here that power is eight, since the eighth Bernoulli number first appears in the formula for the sum of positive integers to the eighth power. The B and A numbers represent the two kinds of factors that Bernoulli discovered. B1 through B7 are all different Bernoulli numbers, indexed according to Lovelace's indexing. A0 through A5 represent the factors of the coefficients that Bernoulli could calculate using Pascal's Triangle. The values of A0, A1, A3, and A5 appear below. Here n represents the index of the Bernoulli number in the sequence of odd-numbered Bernoulli numbers starting with the first. Lovelace's program used n = 4.
A0 = −(1/2) · (2n − 1)/(2n + 1)
A1 = 2n/2
A3 = 2n(2n − 1)(2n − 2)/(2 · 3 · 4)
A5 = 2n(2n − 1)(2n − 2)(2n − 3)(2n − 4)/(2 · 3 · 4 · 5 · 6)
I've created a [translation][7] of Lovelace's program into C, which may be easier to follow. Lovelace's program first calculates A0 and the product B1A1. It then enters a loop that repeats twice to calculate B3A3 and B5A5, since those are formed according to an identical pattern. After each product is calculated, it is added with all the previous products, so that by the end of the program the full sum has been obtained.
Obviously the C translation is not an exact recreation of Lovelace's program. It declares variables on the stack, for example, whereas Lovelace's variables were more like registers. But it makes obvious the parts of Lovelace's program that were so prescient. The C program contains two `while` loops, one nested inside the other. Lovelace's program did not have `while` loops exactly, but she made groups of operations and in the text of her note specified when they should repeat. The variable `v10`, in the original program and in the C translation, functions as a counter variable that decrements with each loop, a construct any programmer would be familiar with. In fact, aside from the profusion of variables with unhelpful names, the C translation of Lovelace's program doesn't look that alien at all.
The other thing worth mentioning quickly is that translating Lovelace's program into C was not that difficult, thanks to the detail present in her diagram. Unlike Menabrea's tables, her table includes a column labeled "Indication of change in the value on any Variable," which makes it much easier to follow the mutation of state throughout the program. She adds a superscript index here to each variable to indicate the successive values they hold. A superscript of two, for example, means that the value being used here is the second value that has been assigned to the variable since the beginning of the program.
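For a compressed view of the arithmetic (not a line-by-line rendering of her diagram, which the linked C translation follows much more closely), the following sketch solves the Note G equation above for B7 with n = 4, taking the earlier Bernoulli numbers as given, just as Lovelace's program assumed they had been computed by previous runs:

```c
#include <stdio.h>

/* A compressed sketch of the Note G computation for n = 4:
 *   0 = A0 + B1A1 + B3A3 + B5A5 + B7,  solved for B7.
 * B1, B3, and B5 (Lovelace's indexing) are taken as given. */
int main(void)
{
    double n = 4.0;
    double b[] = { 1.0 / 6.0, -1.0 / 30.0, 1.0 / 42.0 }; /* B1, B3, B5 */

    double sum = -0.5 * (2 * n - 1) / (2 * n + 1); /* A0 */
    double a = 2 * n / 2.0;                        /* A1 */
    sum += b[0] * a;

    /* Each later A factor extends the previous one by two more terms
     * of a falling product, which is why a single repeated group of
     * operations could produce both B3A3 and B5A5 in her diagram. */
    double top = 2 * n - 1, bottom = 3.0;
    for (int i = 1; i <= 2; i++) {
        a = a * top / bottom * (top - 1) / (bottom + 1);
        top -= 2.0;
        bottom += 2.0;
        sum += b[i] * a;
    }

    printf("B7 = %f (that is, -1/30)\n", -sum); /* 0 = sum + B7 */
    return 0;
}
```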
### The First Programmer?
After I had translated Lovelace's program into C, I was able to run it on my own computer. To my frustration, I kept getting the wrong result. After some debugging, I finally realized that the problem wasn't the code that I had written. The bug was in the original!
In her "diagram of development," Lovelace gives the fourth operation as `v5 / v4`. But the correct ordering here is `v4 / v5`. This may well have been a typesetting error and not an error in the program that Lovelace devised. All the same, this must be the oldest bug in computing. I marveled that, for ten minutes or so, I had unknowingly wrestled with this first-ever bug.
Jim Randall, another blogger who has [translated Lovelace's program into Python][8], has noted this division bug and two other issues. What does it say about Ada Lovelace that her published program contains minor bugs? Perhaps it shows that she was attempting to write not just a demonstration but a real program. After all, can you really be writing anything more than toy programs if you aren't also writing lots of bugs?
One Wikipedia article calls Lovelace the first to publish a "complex program." Maybe that's the right way to think about Lovelace's accomplishment. Menabrea published "diagrams of development" in his paper a year before Lovelace published her translation. Babbage also wrote more than twenty programs that he never published. So it's not quite accurate to say that Lovelace wrote or published the first program, though there's always room to quibble about what exactly constitutes a "program." Even so, Lovelace's program was miles ahead of anything else that had been published before. The longest program that Menabrea presented was 11 operations long and contained no loops or branches; Lovelace's program contains 25 operations and a nested loop (and thus branching). Menabrea wrote the following toward the end of his paper:
> When once the engine shall have been constructed, the difficulty will be reduced to the making of the cards; but as these are merely the translation of algebraic formulae, it will, by means of some simple notation, be easy to consign the execution of them to a workman.
Neither Babbage nor Menabrea were especially interested in applying the Analytical Engine to problems beyond the immediate mathematical challenges that first drove Babbage to construct calculating machines. Lovelace saw that the Analytical Engine was capable of much more than Babbage or Menabrea could imagine. Lovelace also grasped that “the making of the cards” would not be a mere afterthought and that it could be done well or done poorly. This is hard to appreciate without understanding her program from Note G and seeing for oneself the care she put into designing it. But having done that, you might agree that Lovelace, even if she was not the very first programmer, was the first programmer to deserve the title.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][9] on Twitter or subscribe to the [RSS feed][10] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2018/08/18/ada-lovelace-note-g.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/triangular_numbers1.png
[2]: https://twobithistory.org/images/triangular_numbers2.png
[3]: https://www.youtube.com/watch?v=aXbT37IlyZQ
[4]: https://youtu.be/VRzH4xB0GdM
[5]: https://www.youtube.com/watch?v=BlbQsKpq3Ak
[6]: https://upload.wikimedia.org/wikipedia/commons/c/cf/Diagram_for_the_computation_of_Bernoulli_numbers.jpg
[7]: https://gist.github.com/sinclairtarget/ad18ac65d277e453da5f479d6ccfc20e
[8]: https://enigmaticcode.wordpress.com/tag/bernoulli-numbers/
[9]: https://twitter.com/TwoBitHistory
[10]: https://twobithistory.org/feed.xml

View File

@ -1,124 +0,0 @@
3 scary sysadmin stories
======
Terrifying ghosts are hanging around every data center, just waiting to haunt the unsuspecting sysadmin.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/spooky_halloween_haunted_house.jpg?itok=UkRBeItZ)
> "It's all just a bunch of hocus pocus!" — Max in [Hocus Pocus][1]
Over my many years as a system administrator, I've heard many horror stories about the different ghosts that have haunted new admins due to their inexperience.
Here are three of the stories that stand out to me the most in helping build my character as a good sysadmin.
### The ghost of the failed restore
In a well-known data center (whose name I do not want to remember), one cold October night we had a production outage in which thousands of web servers stopped responding due to downtime in the main database. The database administrator asked me, the rookie sysadmin, to recover the database's last full backup and restore it to bring the service back online.
But, at the end of the process, the database was still broken. I didn't worry, because there were other full backup files in stock. However, even after doing the process several times, the result didn't change.
With great fear, I asked the senior sysadmin what to do to fix this behavior.
"You remember when I showed you, a few days ago, how the full backup script was running? Something about how important it was to validate the backup?" responded the sysadmin.
"Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered.
"Exactly! But you preferred to leave early without finishing that task," he said.
"Oh my! I thought it was optional!" I exclaimed.
"It was, it was…"
**Moral of the story:** Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time.
### The dark window
Once upon a night watch, reflecting I was, lonely and tired,
Looking at the file window on my screen.
Clicking randomly, nearly napping, suddenly came a beeping
From some server, sounding gently, sounding on my pager.
"It's just a warning," I muttered, "sounding on my pager—
Only this and nothing more."
Soon again I heard a beeping somewhat louder than before.
Opening my pager with great disdain,
There was the message from a server of the saintly days of yore:
"The legacy application, it's down, doesn't respond," and nothing more.
There were many stories of this server,
Incredibly, almost terrified,
I went down to the data center to review it.
I sat engaged in guessing, what would be the console to restart it
Without keyboard, mouse, or monitor?
"The task level up"—I think—"only this and nothing more."
Then, thinking, "In another rack, I saw a similar server,
I'll take its monitor and keyboard, nothing bad."
Suddenly, this server shut down, and my pager beeped again:
"The legacy application, it's down, doesn't respond", and nothing more.
Bemused, I sat down to call my sysadmin mentor:
"I wanted to use the console of another server, and now both are out."
"Did you follow my advice? Don't use the graphics console, the terminal is better."
Of course, I remember, it was last December;
I felt fear, a horror that I had never felt before;
"It is a tool of the past and nothing more."
With great shame I understood my mistake:
"Master," I said, "truly, your forgiveness I implore;
but the fact is I thought it was not used anymore.
A dark window and nothing more."
"Learn it well, little kid," he spoke.
"In the terminal you can trust, it's your friend and much, much more."
Step by step, my master showed me to connect with the terminal,
And restarting each one
With infinite patience, he taught me
That from that dark window I should not separate
Never, nevermore.
**Moral of the story:** Fluency in the command-line terminal is a skill often abandoned and considered archaic by newer generations, but it improves your flexibility and productivity as a sysadmin in obvious and subtle ways.
### Troll bridge
I'd been a sysadmin for three or four years when one of my old mentors was removed from work. The older man was known for making fun of the new guys in the group—the ones who brought from the university the desire to improve processes with the newly released community operating system. My manager assigned me the older man's office, a small space under the access stairs to the data center—"Troll Bridge," they called it—and the few legacy servers he still managed.
While reviewing those legacy servers, I realized most of them had many scripts that did practically all the work. I just had to check that they did not go offline due to an electrical failure. I started using those methods, adapting them so my own servers would work the same way, making my tasks more efficient and, at the same time, requiring less of my time to complete them. My day soon became surfing the internet, watching funny videos, and even participating in internet forums.
A couple of years went by, and I maintained my work in the same way. When a new server arrived, I automated its tasks so I could free myself and continue with my usual participation in internet forums. One day, when I shared one of my scripts in the internet forum, a new admin told me I could simplify it using one novelty language, a new trend that was becoming popular among the new folks.
"I am a sysadmin, not a programmer," I answered. "They will never be the same."
From that day on, I dedicated myself to ridiculing the kids who told me I should program in the new languages.
"You do not know, newbie," I answered every time, "this job will never change."
A few years later, my responsibilities increased, and my manager wanted me to modify the code of the applications hosted on my server.
"That's what the job is about now," said my manager. "Development and operations are joining; if you're not willing to do it, we'll bring in some guy who does."
"I will never do it, it's not my role," I said.
"Well then…" he said, looking at me harshly.
I've been here ever since. Hiding. Waiting. Under my bridge.
I watch from the shadows as the people pass: up the stairs, muttering, or talking about the things the new applications do. Sometimes people pause beneath my bridge, to talk, or share code, or make plans. And I watch them, but they don't see me.
I'm just going to stay here, in the darkness under the bridge. I can hear you all out there, everything you say.
Oh yes, I can hear you.
But I'm not coming out.
**Moral of the story:** "The lazy sysadmin is the best sysadmin" is a well-known phrase that means if we are proactive enough to automate all our processes properly, we will have a lot of free time. The best sysadmins never seem to be very busy; they prefer to be relaxed and let the system do the work for them. "Work smarter, not harder." However, if we don't use this free time productively, we can fall into obsolescence and become something we do not want. The best sysadmins reinvent themselves constantly; they are always researching and learning.
Following these stories' morals—and continually learning from my mistakes—helped me improve my management skills and create the good habits necessary for the sysadmin job.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/3-scary-sysadmin-stories
作者:[Alex Callejas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/darkaxl
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Hocus_Pocus_(1993_film)

View File

@ -1,146 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Rise and Demise of RSS)
[#]: via: (https://twobithistory.org/2018/12/18/rss.html)
[#]: author: (Two-Bit History https://twobithistory.org)
The Rise and Demise of RSS
======
This post was originally published on [September 16th, 2018][1]. What follows is a revision that includes additional information gleaned from interviews with Ramanathan Guha, Ian Davis, Dan Libby, and Kevin Werbach.
About a decade ago, the average internet user might well have heard of RSS. Really Simple Syndication, or Rich Site Summary—what the acronym stands for depends on who you ask—is a standard that websites and podcasts can use to offer a feed of content to their users, one easily understood by lots of different computer programs. Today, though RSS continues to power many applications on the web, it has become, for most people, an obscure technology.
The story of how this happened is really two stories. The first is a story about a broad vision for the web's future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.
In the late 1990s, in the go-go years between Netscapes IPO and the Dot-com crash, everyone could see that the web was going to be an even bigger deal than it already was, even if they didnt know exactly how it was going to get there. One theory was that the web was about to be revolutionized by syndication. The web, originally built to enable a simple transaction between two parties—a client fetching a document from a single host server—would be broken open by new standards that could be used to repackage and redistribute entire websites through a variety of channels. Kevin Werbach, writing for Release 1.0, a newsletter influential among investors in the 1990s, predicted that syndication “would evolve into the core model for the Internet economy, allowing businesses and individuals to retain control over their online personae while enjoying the benefits of massive scale and scope.”
He invited his readers to imagine a future in which fencing aficionados, rather than going directly to an “online sporting goods site” or “fencing equipment retailer,” could buy a new épée directly through e-commerce widgets embedded into their favorite website about fencing. Just like in the television world, where big networks syndicate their shows to smaller local stations, syndication on the web would allow businesses and publications to reach consumers through a multitude of intermediary sites. This would mean, as a corollary, that consumers would gain significant control over where and how they interacted with any given business or publication on the web.
RSS was one of the standards that promised to deliver this syndicated future. To Werbach, RSS was “the leading example of a lightweight syndication protocol.” Another contemporaneous article called RSS the first protocol to realize the potential of XML. It was going to be a way for both users and content aggregators to create their own customized channels out of everything the web had to offer. And yet, two decades later, after the rise of social media and Googles decision to shut down Google Reader, RSS appears to be [a slowly dying technology][2], now used chiefly by podcasters, programmers with tech blogs, and the occasional journalist. Though of course some people really do still rely on RSS readers, stubbornly adding an RSS feed to your blog, even in 2018, is a political statement. That little tangerine bubble has become a wistful symbol of defiance against a centralized web increasingly controlled by a handful of corporations, a web that hardly resembles the syndicated web of Werbachs imagining.
The future once looked so bright for RSS. What happened? Was its downfall inevitable, or was it precipitated by the bitter infighting that thwarted the development of a single RSS standard?
### Muddied Water
RSS was invented twice. This meant it never had an obvious owner, a state of affairs that spawned endless debate and acrimony. But it also suggests that RSS was an important idea whose time had come.
In 1998, Netscape was struggling to envision a future for itself. Its flagship product, the Netscape Navigator web browser—once preferred by over 80 percent of web users—was quickly losing ground to Microsoft's Internet Explorer. So Netscape decided to compete in a new arena. In May, a team was brought together to start work on what was known internally as "Project 60." Two months later, Netscape announced "My Netscape," a web portal that would fight it out with other portals like Yahoo, MSN, and Excite.
The following year, in March, Netscape announced an addition to the My Netscape portal called the “My Netscape Network.” My Netscape users could now customize their My Netscape page so that it contained “channels” featuring the most recent headlines from sites around the web. As long as your favorite website published a special file in a format dictated by Netscape, you could add that website to your My Netscape page, typically by clicking an “Add Channel” button that participating websites were supposed to add to their interfaces. A little box containing a list of linked headlines would then appear.
![A My Netscape Network Channel][3] A My Netscape Network channel for Mozilla.org, as it might look to users about to add it to their My Netscape page.
The special file that participating websites had to publish was an RSS file. In the My Netscape Network announcement, Netscape explained that RSS stood for “RDF Site Summary.” This was somewhat of a misnomer. RDF, or the Resource Description Framework, is basically a grammar for describing certain properties of arbitrary resources. (See [my article about the Semantic Web][4] if that sounds really exciting to you.) In 1999, a draft specification for RDF was being considered by the World Wide Web Consortium (W3C), the webs main standards body. Though RSS was supposed to be based on RDF, the example RSS document Netscape actually released didnt use any RDF tags at all. In a document that accompanied the Netscape RSS specification, Dan Libby, one of the specifications authors, explained that “in this release of MNN, Netscape has intentionally limited the complexity of the RSS format.” The specification was given the 0.90 version number, the idea being that subsequent versions would bring RSS more in line with the W3Cs XML specification and the evolving draft of the RDF specification.
RSS had been created by Libby and two other Netscape employees, Eckart Walther and Ramanathan Guha. According to an email to me from Guha, he and Walther cooked up RSS in the beginning with some input from Libby; after AOL bought Netscape in 1998, he and Walther left and it became Libbys responsibility. Before Netscape, Guha had worked for Apple, where he came up with something called the Meta Content Framework. MCF was a format for representing metadata about anything from web pages to local files. Guha demonstrated its power by developing an application called [HotSauce][5] that visualized relationships between files as a network of nodes suspended in 3D space. Immediately after leaving Apple for Netscape, Guha worked with a Netscape consultant named Tim Bray, who in a post on his blog said that he and Guha eventually produced an XML-based version of MCF that in turn became the foundation for the W3Cs RDF draft. Its no surprise, then, that Guha, Walther, and Libby were keen to build on Guhas prior work and incorporate RDF into RSS. But Libby later wrote that the original vision for an RDF-based RSS was pared back because of time constraints and the perception that RDF was “too complex for the average user.’”
While Netscape was trying to win eyeballs in what became known as the “portal wars,” elsewhere on the web a new phenomenon known as “weblogging” was being pioneered. One of these pioneers was Dave Winer, CEO of a company called UserLand Software, which developed early content management systems that made blogging accessible to people without deep technical fluency. Winer ran his own blog, [Scripting News][6], which today is one of the oldest blogs on the internet. More than a year before Netscape announced My Netscape Network, on December 15, 1997, Winer published a post announcing that the blog would now be available in XML as well as HTML.
Dave Winers XML format became known as the Scripting News format. It was supposedly similar to Microsofts Channel Definition Format (a “push technology” standard submitted to the W3C in March, 1997), but I havent been able to find a file in the original format to verify that claim. Like Netscapes RSS, it structured the content of Winers blog so that it could be understood by other software applications. When Netscape released RSS 0.90, Winer and UserLand Software began to support both formats. But Winer believed that Netscapes format was “woefully inadequate” and “missing the key thing web writers and readers need.” It could only represent a list of links, whereas the Scripting News format could represent a series of paragraphs, each containing one or more links.
In June 1999, two months after Netscapes My Netscape Network announcement, Winer introduced a new version of the Scripting News format, called ScriptingNews 2.0b1. Winer claimed that he decided to move ahead with his own format only after trying but failing to get anyone at Netscape to care about RSS 0.90s deficiencies. The new version of the Scripting News format added several items to the `<header>` element that brought the Scripting News format to parity with RSS. But the two formats continued to differ in that the Scripting News format, which Winer nicknamed the “fat” syndication format, could include entire paragraphs and not just links.
Netscape got around to releasing RSS 0.91 the very next month. The updated specification was a major about-face. RSS no longer stood for “RDF Site Summary”; it now stood for “Rich Site Summary.” All the RDF—and there was almost none anyway—was stripped out. Many of the Scripting News tags were incorporated. In the text of the new specification, Libby explained:
> RDF references removed. RSS was originally conceived as a metadata format providing a summary of a website. Two things have become clear: the first is that providers want more of a syndication format than a metadata format. The structure of an RDF file is very precise and must conform to the RDF data model in order to be valid. This is not easily human-understandable and can make it difficult to create useful RDF files. The second is that few tools are available for RDF generation, validation and processing. For these reasons, we have decided to go with a standard XML approach.
Winer was enormously pleased with RSS 0.91, calling it “even better than I thought it would be.” UserLand Software adopted it as a replacement for the existing ScriptingNews 2.0b1 format. For a while, it seemed that RSS finally had a single authoritative specification.
### The Great Fork
A year later, the RSS 0.91 specification had become woefully inadequate. There were all sorts of things people were trying to do with RSS that the specification did not address. There were other parts of the specification that seemed unnecessarily constraining—each RSS channel could only contain a maximum of 15 items, for example.
By that point, RSS had been adopted by several more organizations. Other than Netscape, which seems to have lost interest after RSS 0.91, the big players were Dave Winer's UserLand Software; O'Reilly Net, which ran an RSS aggregator called Meerkat; and Moreover.com, which also ran an RSS aggregator focused on news. Via mailing list, representatives from these organizations and others regularly discussed how to improve on RSS 0.91. But there were deep disagreements about what those improvements should look like.
The mailing list in which most of the discussion occurred was called the Syndication mailing list. [An archive of the Syndication mailing list][7] is still available. It is an amazing historical resource. It provides a moment-by-moment account of how those deep disagreements eventually led to a political rupture of the RSS community.
On one side of the coming rupture was Winer. Winer was impatient to evolve RSS, but he wanted to change it only in relatively conservative ways. In June, 2000, he published his own RSS 0.91 specification on the UserLand website, meant to be a starting point for further development of RSS. It made no significant changes to the 0.91 specification published by Netscape. Winer claimed in a blog post that accompanied his specification that it was only a “cleanup” documenting how RSS was actually being used in the wild, which was needed because the Netscape specification was no longer being maintained. In the same post, he argued that RSS had succeeded so far because it was simple, and that by adding namespaces (a way to explicitly distinguish between different RSS vocabularies) or RDF back to the format—some had suggested this be done in the Syndication mailing list—it “would become vastly more complex, and IMHO, at the content provider level, would buy us almost nothing for the added complexity.” In a message to the Syndication mailing list sent around the same time, Winer suggested that these issues were important enough that they might lead him to create a fork:
> Im still pondering how to move RSS forward. I definitely want ICE-like stuff in RSS2, publish and subscribe is at the top of my list, but I am going to fight tooth and nail for simplicity. I love optional elements. I dont want to go down the namespaces and schema road, or try to make it a dialect of RDF. I understand other people want to do this, and therefore I guess were going to get a fork. I have my own opinion about where the other fork will lead, but Ill keep those to myself for the moment at least.
Arrayed against Winer were several other people, including Rael Dornfest of OReilly, Ian Davis (responsible for a search startup called Calaba), and a precocious, 14-year-old Aaron Swartz. This is the same Aaron Swartz that would later co-found Reddit and become famous for his hacktivism. (In 2000, according to an email to me from Davis, his dad often accompanied him to technology meetups.) Dornfest, Davis, and Swartz all thought that RSS needed namespaces in order to accommodate the many different things everyone wanted to do with it. On another mailing list hosted by OReilly, Davis proposed a namespace-based module system, writing that such a system would “make RSS as extensible as we like rather than packing in new features that over-complicate the spec.” The “namespace camp” believed that RSS would soon be used for much more than the syndication of blog posts, so namespaces, rather than being a complication, were the only way to keep RSS from becoming unmanageable as it supported more and more use cases.
At the root of this disagreement about namespaces was a deeper disagreement about what RSS was even for. Winer had invented his Scripting News format to syndicate the posts he wrote for his blog. Netscape had released RSS as “RDF Site Summary” because it was a way of recreating a site in miniature within the My Netscape online portal. Some people felt that Netscapes original vision should be honored. Writing to the Syndication mailing list, Davis explained his view that RSS was “originally conceived as a way of building mini sitemaps,” and that now he and others wanted to expand RSS “to encompass more types of information than simple news headlines and to cater for the new uses of RSS that have emerged over the last 12 months.” This was a sensible point to make because the goal of the Netscape RSS project in the beginning was even loftier than Davis suggests: Guha told me that he wanted to create a technology that could support not just website channels but feeds about arbitrary entities such as, for example, Madonna. Further developing RSS so that it could do this would indeed be in keeping with that original motivation. But Davis argument also overstates the degree to which there was a unified vision at Netscape by the time the RSS specification was published. According to Libby, who I talked to via email, there was eventually contention between a “Lets Build the Semantic Web” group and “Lets Make This Simple for People to Author” group even within Netscape.
For his part, Winer argued that Netscape's original goals were irrelevant because his Scripting News format was in fact the first RSS and it had been meant for a very different purpose. Given that the people most involved in the development of RSS disagreed about who had created RSS and why, a fork seems to have been inevitable.
The fork happened after Dornfest announced a proposed RSS 1.0 specification and formed the RSS-DEV Working Group—which would include Davis, Swartz, and several others but not Winer—to get it ready for publication. In the proposed specification, RSS once again stood for “RDF Site Summary,” because RDF had been added back in to represent metadata properties of certain RSS elements. The specification acknowledged Winer by name, giving him credit for popularizing RSS through his “evangelism.” But it also argued that RSS could not be improved in the way that Winer was advocating. Just adding more elements to RSS without providing for extensibility with a module system would “sacrifice scalability.” The specification went on to define a module system for RSS based on XML namespaces.
Winer felt that it was “unfair” that the RSS-DEV Working Group had arrogated the “RSS 1.0” name for themselves. In another mailing list about decentralization, he wrote that he had “recently had a standard stolen by a big name,” presumably meaning OReilly, which had convened the RSS-DEV Working Group. Other members of the Syndication mailing list also felt that the RSS-DEV Working Group should not have used the name “RSS” without unanimous agreement from the community on how to move RSS forward. But the Working Group stuck with the name. Dan Brickley, another member of the RSS-DEV Working Group, defended this decision by arguing that “RSS 1.0 as proposed is solidly grounded in the original RSS vision, which itself had a long heritage going back to MCF (an RDF precursor) and related specs (CDF etc).” He essentially felt that the RSS 1.0 effort had a better claim to the RSS name than Winer did, since RDF had originally been a part of RSS. The RSS-DEV Working Group published a final version of their specification in December. That same month, Winer published his own improvement to RSS 0.91, which he called RSS 0.92, on UserLands website. RSS 0.92 made several small optional improvements to RSS, among which was the addition of the `<enclosure>` tag soon used by podcasters everywhere. RSS had officially forked.
The fork might have been avoided if a better effort had been made to include Winer in the RSS-DEV Working Group. He obviously belonged there. He was a prominent contributor to the Syndication mailing list and responsible for much of RSS popularity, as the members of the Working Group themselves acknowledged. But, as Davis wrote in an email to me, Winer “wanted control and wanted RSS to be his legacy so was reluctant to work with us.” Tim OReilly, founder and CEO of OReilly, explained in a UserLand discussion group in September, 2000 that Winer basically refused to participate:
> A group of people involved in RSS got together to start thinking about its future evolution. Dave was part of the group. When the consensus of the group turned in a direction he didnt like, Dave stopped participating, and characterized it as a plot by OReilly to take over RSS from him, despite the fact that Rael Dornfest of OReilly was only one of about a dozen authors of the proposed RSS 1.0 spec, and that many of those who were part of its development had at least as long a history with RSS as Dave had.
To this, Winer said:
> I met with Dale [Dougherty] two weeks before the announcement, and he didnt say anything about it being called RSS 1.0. I spoke on the phone with Rael the Friday before it was announced, again he didnt say that they were calling it RSS 1.0. The first I found out about it was when it was publicly announced.
>
> Let me ask you a straight question. If it turns out that the plan to call the new spec “RSS 1.0” was done in private, without any heads-up or consultation, or for a chance for the Syndication list members to agree or disagree, not just me, what are you going to do?
>
> UserLand did a lot of work to create and popularize and support RSS. We walked away from that, and let your guys have the name. Thats the top level. If I want to do any further work in Web syndication, I have to use a different name. Why and how did that happen Tim?
I have not been able to find a discussion in the Syndication mailing list about using the RSS 1.0 name prior to the announcement of the RSS 1.0 proposal. Winer, in a message to me, said that he was not trying to control RSS and just wanted to use it in his products.
RSS would fork again in 2003, when several developers frustrated with the bickering in the RSS community sought to create an entirely new format. These developers created Atom, a format that did away with RDF but embraced XML namespaces. Atom would eventually be specified by [a proposed IETF standard][8]. After the introduction of Atom, there were three competing versions of RSS: Winers RSS 0.92 (updated to RSS 2.0 in 2002 and renamed “Really Simple Syndication”), the RSS-DEV Working Groups RSS 1.0, and Atom.
### Decline
The proliferation of competing RSS specifications may have hampered RSS in other ways that Ill discuss shortly. But it did not stop RSS from becoming enormously popular during the 2000s. By 2004, the New York Times had started offering its headlines in RSS and had written an article explaining to the layperson what RSS was and how to use it. Google Reader, the RSS aggregator ultimately used by millions, was launched in 2005. By 2013, RSS seemed popular enough that the New York Times, in its obituary for Aaron Swartz, called the technology “ubiquitous.” For a while, before a third of the planet had signed up for Facebook, RSS was simply how many people stayed abreast of news on the internet.
The New York Times published Swartz's obituary in January 2013. By that point, though, RSS had actually turned a corner and was well on its way to becoming an obscure technology. Google Reader was shut down in July 2013, ostensibly because user numbers had been falling "over the years." This prompted several articles from various outlets declaring that RSS was dead. But people had been declaring that RSS was dead for years, even before Google Reader's shuttering. Steve Gillmor, writing for TechCrunch in May 2009, advised that "it's time to get completely off RSS and switch to Twitter" because "RSS just doesn't cut it anymore." He pointed out that Twitter was basically a better RSS feed, since it could show you what people thought about an article in addition to the article itself. It allowed you to follow people and not just channels. Gillmor told his readers that it was time to let RSS recede into the background. He ended his article with a verse from Bob Dylan's "Forever Young."
Today, RSS is not dead. But neither is it anywhere near as popular as it once was. Lots of people have offered explanations for why RSS lost its broad appeal. Perhaps the most persuasive explanation is exactly the one offered by Gillmor in 2009. Social networks, just like RSS, provide a feed featuring all the latest news on the internet. Social networks took over from RSS because they were simply better feeds. They also provide more benefits to the companies that own them. Some people have accused Google, for example, of shutting down Google Reader in order to encourage people to use Google+. Google might have been able to monetize Google+ in a way that it could never have monetized Google Reader. Marco Arment, the creator of Instapaper, wrote on his blog in 2013:
> Google Reader is just the latest casualty of the war that Facebook started, seemingly accidentally: the battle to own everything. While Google did technically “own” Reader and could make some use of the huge amount of news and attention data flowing through it, it conflicted with their far more important Google+ strategy: they need everyone reading and sharing everything through Google+ so they can compete with Facebook for ad-targeting data, ad dollars, growth, and relevance.
So both users and technology companies realized that they got more out of using social networks than they did out of RSS.
Another theory is that RSS was always too geeky for regular people. Even the New York Times, which seems to have been eager to adopt RSS and promote it to its audience, complained in 2006 that RSS is a "not particularly user friendly" acronym coined by "computer geeks." Before the RSS icon was designed in 2004, websites like the New York Times linked to their RSS feeds using little orange boxes labeled "XML," which can only have been intimidating. The label was perfectly accurate though, because back then clicking the link would take a hapless user to a page full of XML. [This great tweet][9] captures the essence of this explanation for RSS's demise. Regular people never felt comfortable using RSS; it hadn't really been designed as a consumer-facing technology and involved too many hurdles; people jumped ship as soon as something better came along.
RSS might have been able to overcome some of these limitations if it had been further developed. Maybe RSS could have been extended somehow so that friends subscribed to the same channel could syndicate their thoughts about an article to each other. Maybe browser support could have been improved. But whereas a company like Facebook was able to “move fast and break things,” the RSS developer community was stuck trying to achieve consensus. When they failed to agree on a single standard, effort that could have gone into improving RSS was instead squandered on duplicating work that had already been done. Davis told me, for example, that Atom would not have been necessary if the members of the Syndication mailing list had been able to compromise and collaborate, and “all that cleanup work could have been put into RSS to strengthen it.” So if we are asking ourselves why RSS is no longer popular, a good first-order explanation is that social networks supplanted it. If we ask ourselves why social networks were able to supplant it, then the answer may be that the people trying to make RSS succeed faced a problem much harder than, say, building Facebook. As Dornfest wrote to the Syndication mailing list at one point, “currently its the politics far more than the serialization thats far from simple.”
So today we are left with centralized silos of information. Even so, the syndicated web that Werbach foresaw in 1999 has been realized, just not in the way he thought it would be. After all, The Onion is a publication that relies on syndication through Facebook and Twitter the same way that Seinfeld relied on syndication to rake in millions after the end of its original run. I asked Werbach what he thinks about this and he more or less agrees. He told me that RSS, on one level, was clearly a failure, because it isnt now “a technology that is really the core of the whole blogging world or content world or world of assembling different elements of things into sites.” But, on another level, “the whole social media revolution is partly about the ability to aggregate different content and resources” in a manner reminiscent of RSS and his original vision for a syndicated web. To Werbach, “its the legacy of RSS, even if its not built on RSS.”
Unfortunately, syndication on the modern web still only happens through one of a very small number of channels, meaning that none of us “retain control over our online personae” the way that Werbach imagined we would. One reason this happened is garden-variety corporate rapaciousness: RSS, an open format, didn't give technology companies the control over data and eyeballs that they needed to sell ads, so they did not support it. But the more mundane reason is that centralized silos are just easier to design than common standards. Consensus is difficult to achieve and it takes time, but without consensus spurned developers will go off and create competing standards. The lesson here may be that if we want to see a better, more open web, we have to get better at not screwing each other over.
If you enjoyed this post, more posts like it come out every four weeks! Follow [@TwoBitHistory][10] on Twitter or subscribe to the [RSS feed][11] to make sure you know when a new post is out.
Previously on TwoBitHistory…
> I've long wondered if the Unix commands on my Macbook are built from the same code that they were built from 20 or 30 years ago. The answer, it turns out, is "kinda"!
>
> My latest post, on how the implementation of cat has changed over the years:<https://t.co/dHizjK50ES>
>
> — TwoBitHistory (@TwoBitHistory) [November 12, 2018][12]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2018/12/18/rss.html
Author: [Two-Bit History][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/2018/09/16/the-rise-and-demise-of-rss.html
[2]: https://trends.google.com/trends/explore?date=all&geo=US&q=rss
[3]: https://twobithistory.org/images/mnn-channel.gif
[4]: https://twobithistory.org/2018/05/27/semantic-web.html
[5]: http://web.archive.org/web/19970703020212/http://mcf.research.apple.com:80/hs/screen_shot.html
[6]: http://scripting.com
[7]: https://groups.yahoo.com/neo/groups/syndication/info
[8]: https://tools.ietf.org/html/rfc4287
[9]: https://twitter.com/mgsiegler/status/311992206716203008
[10]: https://twitter.com/TwoBitHistory
[11]: https://twobithistory.org/feed.xml
[12]: https://twitter.com/TwoBitHistory/status/1062114484209311746?ref_src=twsrc%5Etfw

View File

@ -1,62 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Let your engineers choose the license: A guide)
[#]: via: (https://opensource.com/article/19/2/choose-open-source-license-engineers)
[#]: author: (Jeffrey Robert Kaufman https://opensource.com/users/jkaufman)
Let your engineers choose the license: A guide
======
Enabling engineers to make licensing decisions is wise and efficient.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
Imagine you are working for a company that will be starting a new open source community project. Great! You have taken a positive first step to give back and enable a virtuous cycle of innovation that the open source community-based development model provides.
But what about choosing an open source license for your project? You ask your manager for guidance, and she provides some perspective but quickly realizes that there is no formal company policy or guidelines. As any wise manager would do, she asks you to develop formal corporate guidelines for choosing an open source license for such projects.
Simple, right? You may be surprised to learn that there are some unexpected challenges. This article describes some of the complexities you may encounter, along with some perspective based on my recent experience with a similar project at Red Hat.
It may be useful to quickly review some of the more common forms of open source licensing. Open source licenses may be generally placed into two main buckets, copyleft and permissive.
> Copyleft licenses, such as the GPL, allow access to source code, modifications to the source, and distribution of the source or binary versions in their original or modified forms. Copyleft additionally provides that essential software freedoms (run, study, change, and distribution) will be allowed and ensured for any recipients of that code. A copyleft license prohibits restrictions or limitations on these essential software freedoms.
>
> Permissive licenses, similar to copyleft, also generally allow access to source code, modifications to the source, and distribution of the source or binary versions in their original or modified forms. However, unlike copyleft licenses, they permit recipients to add restrictions of their own when redistributing the code, including proprietary limitations such as prohibiting the creation of modified works or further distribution.
Red Hat is one of the leading open source development companies, with thousands of open source developers continuously working upstream and contributing to an assortment of open source projects. When I joined Red Hat, I was very familiar with its flagship Red Hat Enterprise Linux offering, often referred to as RHEL. Although I fully expected that the company would contribute under a wide assortment of licenses based on project requirements, I thought our preference and top recommendation for our engineers would be GPLv2, due to our significant involvement with Linux. In addition, GPL is a copyleft license, and copyleft ensures that the essential software freedoms (run, study, change, distribute) will be extended to any recipients of that code. What could be better for sustaining the open source ecosystem than a copyleft license?
Fast-forward through my journey to craft internal license-choice guidelines for Red Hat: the end result was to not have any license preference at all. Instead, we delegate that responsibility, to the maximum extent possible, to our engineers. Why? Because each open source project and community is unique, and there are social aspects to these communities that may create preferences for various licensing philosophies (e.g., copyleft or permissive). Engineers working in those communities understand all these issues and are best equipped to choose the proper license based on this knowledge. Mandating certain licenses for code contributions will often conflict with these community norms and reduce or even prevent contributions.
For example, perhaps your organization believes that the latest GPL license (GPLv3) is the best for your company due to its updated provisions. If you mandated GPLv3 for all future contributions vs. GPLv2, you would be prohibited from contributing code to the Linux kernel, since that is a GPLv2 project and will likely remain that way for a very long time. Your engineers, being part of that open source community project, would know that and would automatically choose GPLv2 in the absence of such a mandate.
Bottom line: Enabling engineers to make these decisions is wise and efficient.
To the extent your organization may have to restrict the use of certain licenses (e.g., due to certain intellectual property concerns), this should naturally be part of your guidelines or policy. I believe it is much better to delegate, to the maximum extent possible, to those who understand all the nuances, politics, and licensing philosophies of these varied communities, and to restrict license choice only when absolutely necessary. Even having a preference for a certain license over another can be problematic. Open source engineers may have deeply rooted feelings about copyleft (either for or against), and forcing one license over the other (unless absolutely necessary for business reasons) may create ill will and ostracize an engineer or engineering department within your organization.
In summary, Red Hat's guidelines are quite simple:
1. We suggest choosing an open source license from a set of 10 different licenses that are very common and meet the needs of most new open source projects.
2. We allow the use of other licenses, but we ask that a reason be provided to the open source legal team so we can collect and better understand some of the new and perhaps evolving needs of the open source communities that we serve. (As stated above, our engineers are on the front lines and are best equipped to deliver this type of information.)
3. The open source legal team always has the right to override a decision, but this would be very rare and only would occur if we were aware of some community or legal concern regarding a specific license or project.
4. Publishing source code without a license is never permitted.
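As a concrete illustration of the fourth guideline, one lightweight and widely used convention is to put an SPDX license identifier at the top of every source file. Here is a minimal sketch in C; the file name and the GPLv2 identifier are just examples for illustration, not a Red Hat recommendation:

```
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * example.c - a file carrying an explicit, machine-readable
 * license tag. The single SPDX line above declares the license;
 * the repository should also ship the full license text in a
 * top-level LICENSE or COPYING file.
 */
#include <stdio.h>

int main(void)
{
    printf("this code is unambiguously licensed\n");
    return 0;
}
```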
The advantages of these guidelines are enormous: they are very efficient and lead to a low-friction development and approval system within our organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/choose-open-source-license-engineers
Author: [Jeffrey Robert Kaufman][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/jkaufman
[b]: https://github.com/lujun9972

View File

@ -1,113 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Brief History of FOSS Practices)
[#]: via: (https://itsfoss.com/history-of-foss)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
A Brief History of FOSS Practices
======
We focus a great deal on Linux and Free & Open Source Software here at It's FOSS. Ever wondered how old such FOSS practices are? How did this practice come about? What is the history behind this revolutionary concept?
In this history and trivia article, let's take a look back in time through this brief write-up and note some interesting initiatives from the past that have grown enormous today.
### Origins of FOSS
![History of FOSS][1]
The origins of FOSS go back to the 1950s. When hardware was purchased, there was no additional charge for bundled software, and the source code was also available so that possible bugs in the software could be fixed.
Back then, it was actually common practice for users to have the freedom to customize the code.
At that time, such software was developed collaboratively, mostly by academics and industry researchers.
The term Open Source did not exist yet. Instead, the term that was popular at the time was “Public Domain Software”. Today, the two are ideologically very [different][2] in nature, even though they may sound similar.
<https://youtu.be/0Dt3MCcXay8?list=PLybyE6hxfb7cIzYK1HegM3-ccU-JhovpH>
Back in 1955, some users of the [IBM 701][3] computer system in Los Angeles voluntarily founded a group called SHARE. The “SHARE Program Library Agency” (SPLA) distributed information and software through magnetic tapes.
The technical information shared covered programming languages, operating systems, database systems, and user experiences for enterprise users of small, medium, and large-scale IBM computers.
The initiative, now more than 60 years old, continues to pursue its goals quite actively. SHARE's next event is [SHARE Phoenix 2019][4]. You can download and check out their complete timeline [here][5].
### The GNU Project
Announced at MIT on September 27, 1983, by Richard Stallman, the GNU Project is what immensely empowers and supports the Free Software Community today.
### Free Software Foundation
The “Free Software Movement” by Richard Stallman established a new norm for developing Free Software.
He founded the Free Software Foundation (FSF) on October 4, 1985, to support the free software movement. Software that ensures that end users have the freedom to use, study, share, and modify it came to be called Free Software.
**Free as in Free Speech, Not Free Beer**
<https://youtu.be/MtNcxMuphLc>
The Free Software Movement laid down the following rules to establish the distinctiveness of the idea:
* The freedom to run the program as you wish, for any purpose (freedom 0).
* The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
* The freedom to redistribute copies so you can help your neighbor (freedom 2).
* The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
### The Linux Kernel
<https://youtu.be/RkUrOSQF1JQ>
How could we miss this section at It's FOSS! The Linux kernel was released as freely modifiable source code in 1991 by Linus Torvalds. At first, it was neither Free Software nor released under an open source software license. In February 1992, Linux was relicensed under the GPL.
### The Linux Foundation
The Linux Foundation has a goal to empower open source projects to accelerate technology development and commercial adoption. It is an initiative that was taken in 2000 via the [Open Source Development Labs][6] (OSDL) which later merged with the [Free Standards Group][7].
Linus Torvalds works at The Linux Foundation, which provides him with complete support so that he can work full-time on improving Linux.
### Open Source
When the source code of [Netscape][8] Communicator was released in 1998, the label “Open Source” was adopted by a group of individuals at a strategy session held on February 3rd, 1998 in Palo Alto, California. The idea grew from a visionary realization that the [Netscape announcement][9] had changed the way people looked at commercial software.
This opened up a whole new world, creating a new perspective that revealed the superiority and advantage of an open development process that could be powered by collaboration.
[Christine Peterson][10] was the member of that group who originally suggested the term “Open Source” as we perceive it today (mentioned [earlier][11]).
### Evolution of Business Models
The concept of open source is a huge phenomenon right now, and several companies continue to adopt the open source approach to this day. [As of April 2015, 78% of companies used open source software][12] under various [open source licenses][13].
Several organisations have adopted [different business models][14] for Open Source. Red Hat and Mozilla are two good examples.
So this was a brief recap of some interesting facts from FOSS history. Do share your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/history-of-foss
Author: [Avimanyu Bandyopadhyay][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/history-of-foss.png?resize=800%2C450&ssl=1
[2]: https://opensource.org/node/878
[3]: https://en.wikipedia.org/wiki/IBM_701
[4]: https://event.share.org/home
[5]: https://www.share.org/d/do/11532
[6]: https://en.wikipedia.org/wiki/Open_Source_Development_Labs
[7]: https://en.wikipedia.org/wiki/Free_Standards_Group
[8]: https://en.wikipedia.org/wiki/Netscape
[9]: https://web.archive.org/web/20021001071727/http://wp.netscape.com:80/newsref/pr/newsrelease558.html
[10]: https://en.wikipedia.org/wiki/Christine_Peterson
[11]: https://itsfoss.com/nanotechnology-open-science-ai/
[12]: https://www.zdnet.com/article/its-an-open-source-world-78-percent-of-companies-run-open-source-software/
[13]: https://itsfoss.com/open-source-licenses-explained/
[14]: https://opensource.com/article/17/12/open-source-business-models

View File

@ -1,95 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Scrum vs. kanban: Which agile methodology is better?)
[#]: via: (https://opensource.com/article/19/8/scrum-vs-kanban)
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)
Scrum vs. kanban: Which agile methodology is better?
======
Learn the differences between scrum and kanban and which may be best for
your team.
![Team checklist and to dos][1]
Because scrum and kanban both fall under the agile methodology umbrella, many people confuse them or think they're the same thing. There are differences, however. For one, scrum is more specific to software development teams, while kanban is used by many kinds of teams and focuses on providing a visual representation of an agile team's workflow. Some argue that kanban is about getting things done, and scrum is about talking about getting things done.
### A history lesson
Before we get too deep into scrum and kanban, let's talk a little history. Before scrum, kanban, and agile, there was the waterfall model. It was popular in the '80s and '90s, especially in civil and mechanical engineering, where changes were rare and designs often stayed the same. It was adopted for software development, but it didn't translate well into that arena, and the results were rarely what anyone expected or desired.
In 2001, the [Agile Manifesto][2] emerged as an alternative to overcome the problems with waterfall. The Manifesto outlined agile principles and beliefs including shorter lead times, open communication, lighter processes, continuous training, and adaptation to change. These principles took on a life of their own when it came to software development practices and teams. In cases of irregularities, bugs, or dissatisfied customers, agile enabled development teams to make changes quickly, and software was released faster with much higher quality.
### What is agile?
An agile framework (or just agile) is an umbrella term for several iterative and incremental software development approaches such as kanban and scrum. Kanban and scrum are also considered to be agile frameworks on their own. As [Mendix explains][3]:
> "While each agile methodology type has its own unique qualities, they all incorporate elements of iterative development and continuous feedback when creating an application. Any agile development project involves continuous planning, continuous testing, continuous integration, and other forms of continuous development of both the project and the application resulting from the agile framework."
### What is kanban?
[Kanban][4] is the Japanese word for "visual signal." It is also an agile framework or work management system and is considered to be a powerful project management tool.
A kanban board (such as [Wekan][5], an open source kanban application) is a visual method for managing the creation of products through a series of fixed steps. It emphasizes continuous flow and is designed as a list of stages displayed in columns on a board. There is a waiting or backlog stage at the start of the kanban board, and there may be some progress stages, such as testing, development, completed, or abandoned.
![Wekan kanban board][6]
Each task or part of a project is represented on a card, and the cards are moved across this board as they progress across the stages. A card's current stage must be completed before it can be moved to the next stage.
Other features of kanban include color-coding (to identify different stages or types of tasks visually) and Work in Progress ([WIP][7]) limits (to restrict the maximum number of work items allowed in the different stages of the workflow).
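To make the WIP-limit idea concrete, here is a minimal sketch in C of a board whose columns refuse new cards once they are full. The stage names, limits, and function names are invented for illustration; real kanban tools are far richer than this:

```
#include <stdio.h>

/* One column on the board: a name, a WIP limit, and a card count. */
struct stage {
    const char *name;
    int wip_limit;   /* maximum cards allowed in this stage */
    int card_count;  /* cards currently in this stage */
};

/* Move one card between stages, refusing the move if it would
 * push the destination past its WIP limit. Returns 0 on success. */
int move_card(struct stage *from, struct stage *to)
{
    if (from->card_count == 0 || to->card_count >= to->wip_limit)
        return -1;
    from->card_count--;
    to->card_count++;
    return 0;
}

int main(void)
{
    struct stage backlog = { "Backlog", 100, 5 };
    struct stage dev     = { "Development", 3, 3 };  /* already full */
    struct stage test    = { "Testing", 2, 1 };

    /* Pulling a new card into Development fails: the WIP limit
     * forces the team to finish work before starting more. */
    if (move_card(&backlog, &dev) != 0)
        printf("pull blocked: %s is at its WIP limit of %d\n",
               dev.name, dev.wip_limit);

    /* Finishing a card frees capacity, and the pull now succeeds. */
    move_card(&dev, &test);
    if (move_card(&backlog, &dev) == 0)
        printf("card pulled into %s\n", dev.name);
    return 0;
}
```

Even in this toy, the point of the limit is visible: work has to flow out of a column before more work can flow in.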
Wekan is [similar to Trello][8] (a proprietary kanban application). It's one of [a variety][9] of digital kanban tools. Teams can also use the traditional kanban approach: a wall, a board, or a large piece of paper with different colored sticky notes for various tasks. Whatever method you use, the idea is to apply agile effectively, efficiently, and continuously.
Overall, kanban and Wekan offer a simple, graphical way of monitoring progress, sharing responsibility, and mitigating bottlenecks. It is a team effort to ensure that the final product is created with high quality and to the customers' satisfaction.
### What is scrum?
[Scrum][10] typically involves daily standups and sprints with sprint planning, sprint reviews, and retrospectives. It establishes specific release days and rules for how cards can move across the board. There are daily scrums and two- to four-week sprints (putting code into production) with the goal of creating a shippable product after every sprint.
![team_meeting_at_board.png][11]
Daily stand-up meetings allow team members to share progress. (Photo credit: Andrea Truong)
Scrum teams are usually made up of a scrum master, a product owner, and the development team. All must operate in sync to produce high-quality software products in a fast, efficient, cost-effective way that pleases the customer.
### Which is better: scrum or kanban?
With all that as background, the important question we are left with is: Which agile methodology is superior, kanban or scrum? Well, it depends. It is certainly not a straightforward or easy choice, and neither method is inherently superior. The type of team and the project's scope or requirements influence which is likely to be the better choice.
Software development teams typically use scrum because it has been found to be highly useful in the software lifecycle process.
Kanban can be used by all kinds of teams—IT, marketing, HR, transformation, manufacturing, healthcare, finance, etc. Its core values are continuous workflow, continuous feedback, continuous change, and stir vigorously until you achieve the desired quality and consistency or create a shippable product. The team works from the backlog until all tasks are completed. Usually, members will pick tasks based on their specialized knowledge or area of expertise, but the team must be careful not to reduce its effectiveness with too much specialization.
### Conclusion
There is a place for both scrum and kanban agile frameworks, and their utility is determined by the makeup of the team, the product or service to be delivered, the requirements or scope of the project, and the organizational culture. There will be trial and error, especially for new teams.
Scrum and kanban are both iterative work systems that rely on process flows and aim to reduce waste. No matter which framework your team chooses, you will be a winner. Both methodologies are valuable now and likely will be for some time to come.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/scrum-vs-kanban
Author: [Taz Brown][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
[2]: https://www.scrumalliance.org/resources/agile-manifesto
[3]: https://www.mendix.com/agile-framework/
[4]: https://en.wikipedia.org/wiki/Kanban
[5]: https://wekan.github.io/
[6]: https://opensource.com/sites/default/files/uploads/wekan-board.png (Wekan kanban board)
[7]: https://www.atlassian.com/agile/kanban/wip-limits
[8]: https://opensource.com/article/19/1/productivity-tool-wekan
[9]: https://opensource.com/alternatives/trello
[10]: https://en.wikipedia.org/wiki/Scrum_(software_development)
[11]: https://opensource.com/sites/default/files/uploads/team_meeting_at_board.png (team_meeting_at_board.png)

View File

@ -1,57 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why support open source? Strategies from around the world)
[#]: via: (https://opensource.com/article/19/8/open-source-large-series)
[#]: author: (Karl Fogel https://opensource.com/users/kfogelhttps://opensource.com/users/jamesvasile)
Why support open source? Strategies from around the world
======
A new series on open source strategy digs into using open source
investments to support the overall mission.
![World locations with red dots with a sun burst background][1]
There are many excellent resources available to teach you how to run an open source project—how to set up the collaboration tools, how to get the community engaged, etc. But there is much less out there about open source _strategy_; that is, about how to use well-considered open source investments to support an overall mission.
Thus, while "How can we integrate new contributors?" is a project management concern, the strategic question it grows from has wider implications: "What are the long-term returns we expect from engaging with others, who are those others, and how do we structure our investments to achieve those returns?"
As we work with different clients, we've been gradually publishing reports that approach open source strategy from various directions. Two examples are our work with Mozilla on [open source archetypes][2] and with the World Bank on its [investment strategy for the GeoNode project][3].
Now we have a chance to have this discussion in a more regular and complete way: Microsoft has asked us to do a series of blog posts about open source, and the request was essentially _"help organizations get better at open source"_ (not a direct quote, but a reasonable summary). They were very clear about the series being independent; they did not want editorial control and specifically did not want to be involved in any pre-approval before we publish a post. It goes without saying, but we'll say it anyway, just so there's no doubt, that the views we express in the series may or may not be shared by Microsoft.
We're calling the series [Open Source At Large][4], and it focuses on open source strategy. The first three posts in the series are already up:
* [What is open source strategy?][5]
* [Open source goal setting][6]
* [Ecosystem mapping][7]
Our clients will recognize some of the material—our advice tends to be consistent over time—but the series will also cover ideas we have not discussed widely before.
Strategy is not just for executives and managers, by the way. We can most effectively support strategies we understand, and every person on a team can use strategic awareness to improve performance. Our target audience is the managers and organization leaders who make decisions about open source investments and also developers who can benefit from a strategic viewpoint.
We hope that offering techniques for strategic analysis will be useful for newcomers to open source, and we also look forward to engaging colleagues in a wide-ranging discussion about best practices and considered approaches to strategy.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/open-source-large-series
Author: [Karl Fogel][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/kfogelhttps://opensource.com/users/jamesvasile
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_remote_teams.png?itok=Wk1yBFv6 (World locations with red dots with a sun burst background)
[2]: https://blog.opentechstrategies.com/2018/05/field-guide-to-open-source-project-archetypes/
[3]: https://blog.opentechstrategies.com/2017/06/geonode-report/
[4]: https://blog.opentechstrategies.com/category/open-source-at-large/
[5]: https://blog.opentechstrategies.com/2019/05/what-is-open-source-strategy/
[6]: https://blog.opentechstrategies.com/2019/05/open-source-goal-setting/
[7]: https://blog.opentechstrategies.com/2019/06/ecosystem-mapping/

View File

@ -1,64 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT security essentials: Physical, network, software)
[#]: via: (https://www.networkworld.com/article/3435108/iot-security-essentials-physical-network-software.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
IoT security essentials: Physical, network, software
======
Internet of things devices present unique security problems due to being spread out, exposed to physical attacks and often lacking processor power.
Thinkstock
Even in the planning stages of a deployment, IoT security is one of the chief stumbling blocks to successful adoption of the technology.
And while the problem is vastly complicated, there are three key angles to think about when laying out how IoT sensors will be deployed in any given setup: how secure the devices themselves are, how many of them there are, and whether they can receive security patches.
### Physical access
Physical access is an important but, generally, straightforward consideration for traditional IT security. Data centers can be carefully secured, and routers and switches are often located in places where theyre either difficult to fiddle with discreetly or difficult to access in the first place.
Where IoT is concerned, however, best security practices aren't as fleshed out. Some types of IoT implementation could be relatively simple to secure: a bad actor could find it comparatively difficult to tinker with a piece of complex diagnostic equipment in a well-secured hospital, or with a big piece of sophisticated robotic manufacturing equipment on an access-controlled factory floor. Compromises can happen, certainly, but a bad actor trying to get into a secure area is still a well-understood security threat.
By contrast, smart city equipment scattered across a metropolis (traffic cameras, smart parking meters, noise sensors, and the like) is readily accessible by the general public, to say nothing of anybody able to look convincing in a hard hat and hazard vest. The same issue applies to soil sensors in rural areas and any other technology deployed to a sufficiently remote location.
The solutions to this problem vary. Cases and enclosures could deter some attackers, but they might not be practical in some instances. The same goes for video surveillance of the devices, which could itself become a target. The IoT Security Foundation recommends disabling all ports on a device that aren't strictly necessary for it to perform its function, implementing tamper-proofing on circuit boards, and even embedding those circuits entirely in resin.
### Discovery and networking
Securing the connections between IoT sensors and the backend is arguably the toughest part to solve, in part because an alarming number of organizations aren't even aware of all the devices on their network at any given time. Hence, device discovery remains a critically important part of network security for IoT.
The main reason for this lack of visibility is that the nature of IoT as an operational technology, rather than one that's solely administered by IT staff, means that line-of-business personnel will sometimes connect helpful devices to the network without telling the people in charge of keeping the network secure. For network **operations** people, used to having a clear sense of the entire network's topology, this can be an unaccustomed headache.
Beyond IT personnel working closely with the operational side of the business to ensure all devices connected to the network are properly provisioned and monitored, network scanners can discover connected devices on a network automatically, whether thats via network traffic analysis, device profiles, whitelists or other techniques.
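As a rough sketch of the whitelist technique mentioned above, the C fragment below checks a newly observed device's MAC address against an allow-list of known devices. The addresses and function names are hypothetical, and production scanners would combine this check with traffic analysis and device profiling:

```
#include <stdio.h>
#include <string.h>

/* Hypothetical allow-list of MAC addresses for known, provisioned devices. */
static const char *known_devices[] = {
    "00:1a:2b:3c:4d:5e",   /* lobby camera */
    "00:1a:2b:3c:4d:5f",   /* parking sensor */
};
static const size_t num_known = sizeof(known_devices) / sizeof(known_devices[0]);

/* Return 1 if the MAC address is on the allow-list, 0 otherwise. */
static int device_is_known(const char *mac)
{
    for (size_t i = 0; i < num_known; i++)
        if (strcmp(mac, known_devices[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    /* A device just observed on the network, e.g. from a scan report. */
    const char *seen = "de:ad:be:ef:00:01";

    if (!device_is_known(seen))
        printf("unknown device %s: flag for security review\n", seen);
    return 0;
}
```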
### Software patching
Many IoT sensors don't have much built-in computing capability, so some of those devices aren't able to run a security software agent or accept updates and patches remotely.
That is a huge worry, because software vulnerabilities that target IoT are being discovered every day. An inability to patch those holes when they're discovered is a serious problem.
Moreover, certain devices simply won't be able to be properly secured and made patchable. The only solution might be to find a different product that accomplishes the same functional task yet has better security.
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3435108/iot-security-essentials-physical-network-software.html
Author: [Jon Gold][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -1,143 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The monumental impact of C)
[#]: via: (https://opensource.com/article/19/10/command-line-heroes-c)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
The monumental impact of C
======
The season finale of Command Line Heroes offers a lesson in how a small
community of open source enthusiasts can change the world.
![In the finale of Command Line Heroes, we learn about the significant impact of C][1]
C is the original general-purpose programming language. The Season 3 finale of the [Command Line Heroes][2] podcast explores C's origin story in a way that showcases the longevity and power of its design. It's a perfect synthesis of all the languages discussed throughout the podcast's third season and this [series of articles][3].
![The original C programming guide by two of the language authors, circa 1978][4]
C is such a fundamental language that many of us forget how much it has changed. Technically a "high-level language," in the sense that it requires a compiler to be runnable, it's as close to assembly language as people like to get these days (outside of specialized, low-memory environments). It's also considered to be the language that made nearly all languages that came after it possible.
### The path to C began with failure
While the myth persists that all great inventions come from highly competitive garage dwellers, C's story is more fit for the Renaissance period.
In the 1960s, Bell Labs in suburban New Jersey was one of the most innovative places of its time. Jon Gertner, author of [_The Idea Factory_][5], describes the culture of the time as one marked by optimism and excitement about solving tough problems. Instead of monetization pressures and tight timelines, Bell Labs offered seemingly endless funding for wild ideas. It had a research and development ethos that aligns well with today's [open leadership principles][6]. The results were significant and prove that brilliance can come without the promise of VC funding or an IPO.
The challenge back then was terminal sharing: finding a way for lots of people to access the (very limited number of) available computers. Before there was a scalable answer for that, and long before we had [a shell like Bash][7], there was the Multics project. It was a hypothetical operating system where hundreds or even thousands of developers could share time on the same system. This was a dream of John McCarthy, creator of Lisp and of the term artificial intelligence (AI), as I [recently explored][8].
Joy Lisi Rankin, author of [_A People's History of Computing in the United States_][9], describes what happened next. There was a lot of public interest in driving forward with Multics' vision of more universally available timesharing. Academics, scientists, educators, and some in the broader public were looking forward to this computer-powered future. Many advocated for computing as a public utility, akin to electricity, and the push toward timesharing was a global movement.
Up to that point, high-end mainframes topped out at 40-50 terminals per system. The change of scale was ambitious and eventually failed, as Warren Toomey writes in [IEEE Spectrum][10]:
> "Over five years, AT&amp;T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company's renowned Bell Telephone Laboratories—including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&amp;T's corporate leaders decided to pull the plug."
Bell Labs pulled out of the Multics program in 1969. Multics wasn't going to happen.
### The fellowship of the C
Funding wrapped up, and the powerful GE-645 mainframe was assigned to other tasks inside Bell Labs. But that didn't discourage everyone.
Among the last holdouts from the Multics project were four men who felt passionately tied to the project: Ken Thompson, Dennis Ritchie, Doug McIlroy, and J.F. Ossanna. These four diehards continued to muse and scribble ideas on paper. Thompson and Ritchie developed a game called Space Travel for the PDP-7 minicomputer. While they were working on that, Thompson started implementing all those crazy hand-written ideas about filesystems they'd developed among the wreckage of Multics.
![A PDP-7 minicomputer][11]
A PDP-7 minicomputer was not top-of-the-line technology at the time, but on it the team implemented foundational technologies that changed the future of programming languages and operating systems alike.
That's worth emphasizing: Some of the original filesystem specifications were written by hand and then programmed on what was effectively a toy compared to the systems they were using to build Multics. [Wikipedia's Ken Thompson page][12] dives deeper into what came next:
> "While writing Multics, Thompson created the Bon programming language. He also created a video game called [Space Travel][13]. Later, Bell Labs withdrew from the MULTICS project. In order to go on playing the game, Thompson found an old [PDP-7][14] machine and rewrote Space Travel on it. Eventually, the tools developed by Thompson became the [Unix][15] [operating system][16]: Working on a PDP-7, a team of Bell Labs researchers led by Thompson and Ritchie, and including Rudd Canaday, developed a [hierarchical file system][17], the concepts of [computer processes][18] and [device files][19], a [command-line interpreter][20], [pipes][21] for easy inter-process communication, and some small utility programs. In 1970, [Brian Kernighan][22] suggested the name 'Unix,' in a pun on the name 'Multics.' After initial work on Unix, Thompson decided that Unix needed a system programming language and created [B][23], a precursor to Ritchie's [C][24]."
As Warren Toomey documented in the IEEE Spectrum article mentioned above, Unix showed promise in a way that Multics never did. After winning over the team and doing a lot more programming, the pathway to Unix was paved.
### Getting from B to C in Unix
Thompson quickly created a language for Unix that he called B. B inherited much from its predecessor BCPL, but it wasn't enough of a break from older languages. B had no data types, for starters; it's considered a typeless language, which meant its "Hello World" program looked like this:
```
/* B is typeless: a, b, and c are single machine words, each
   packing up to four ASCII characters. */
main( ) {
    extrn a, b, c;  /* reference the globals defined below */
    /* putchar() prints a whole word; '*n' is B's escape for newline */
    putchar(a); putchar(b); putchar(c); putchar('!*n');
}
/* global word definitions: "hello, world" in four-character chunks */
a 'hell';
b 'o, w';
c 'orld';
```
Even if you're not a programmer, it's clear that carving up strings four characters at a time would be limiting. It's also worth noting that this text is considered the original "Hello World", from Brian Kernighan's 1972 tutorial, [_A Tutorial Introduction to the Language B_][25] (although that claim is not definitive).
[![A diagram showing the key Unix and Unix-like operating systems][26]][27]
Typelessness aside, B's assembly-language counterparts were still yielding programs faster than was possible using the B compiler's threaded-code technique. So, from 1971 to 1973, Ritchie modified B. He added a "character type" and built a new compiler so that it didn't have to use threaded code anymore. After two years of work, B had become C.
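For contrast, here is the same program in C, essentially the famous example from Kernighan and Ritchie's 1978 book, shown in modern form and matching the B program's output:

```
#include <stdio.h>

/* "hello, world!" is an ordinary string of typed char values,
 * something B's typeless words could not express directly. */
int main(void)
{
    printf("hello, world!\n");
    return 0;
}
```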
### The right abstraction at the right time
C's use of types and its ease of compiling down to efficient assembly code made it the perfect language for the rise of minicomputers. B was eventually overtaken by C. Once C became the language of Unix, it became the de facto standard across the budding computer industry. Unix was _the_ sharing platform of the pre-internet days. The more people wrote C, the better it got, and the more it was adopted. It eventually became an open standard itself. According to the [Brief history of C programming language][28]:
> "For many years, the de facto standard for C was the version supplied with the Unix operating system. In the summer of 1983 a committee was established to create an ANSI (American National Standards Institute) standard that would define the C language. The standardization process took six years (much longer than anyone reasonably expected)."
How influential is C today? A [quick review][29] reveals:
* Parts of all major operating systems are written in C, including macOS, Windows, Linux, and Android.
* The world's most prolific databases, including DB2, MySQL, MS SQL, and PostgreSQL, are written in C.
* The implementations of many programming languages began in C, including Python, Go, Perl's core interpreter, and the R statistical language.
Decades after they started as scrappy outsiders, Thompson and Ritchie are praised as titans of the programming world. They shared the 1983 Turing Award and, in 1998, received the [National Medal of Technology][30] for their work on the C language and Unix.
![Ritchie and Thompson receiving the National Medal of Technology from President Clinton, 1998][31]
But Doug McIlroy and J.F. Ossanna deserve their share of praise, too. All four of them are true Command Line Heroes.
### Wrapping up the season
[Command Line Heroes][2] has completed an entire season of insights into the programming languages that affect how we code today. It's been a joy to learn about these languages and share them with you. I hope you've enjoyed it as well!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/command-line-heroes-c
Author: [Matthew Broberg][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/commnad_line_hereos_ep8_header_opensourcedotcom.png?itok=d7MJQHFJ (In the finale of Command Line Heroes, we learn about the significant impact of C)
[2]: https://www.redhat.com/en/command-line-heroes
[3]: https://opensource.com/tags/command-line-heroes-podcast
[4]: https://opensource.com/sites/default/files/uploads/2482009942_6caea217e0_c.jpg (The original C programming guide by two of the language authors, circa 1978)
[5]: https://en.wikipedia.org/wiki/The_Idea_Factory
[6]: https://opensource.com/open-organization/18/12/what-is-open-leadership
[7]: https://opensource.com/19/9/command-line-heroes-bash
[8]: https://opensource.com/article/19/9/command-line-heroes-lisp
[9]: https://www.hup.harvard.edu/catalog.php?isbn=9780674970977
[10]: https://spectrum.ieee.org/tech-history/cyberspace/the-strange-birth-and-long-life-of-unix
[11]: https://opensource.com/sites/default/files/uploads/800px-pdp7-oslo-2005.jpeg (A PDP-7 minicomputer)
[12]: https://en.wikipedia.org/wiki/Ken_Thompson
[13]: https://en.wikipedia.org/wiki/Space_Travel_(video_game)
[14]: https://en.wikipedia.org/wiki/PDP-7
[15]: https://en.wikipedia.org/wiki/Unix
[16]: https://en.wikipedia.org/wiki/Operating_system
[17]: https://en.wikipedia.org/wiki/File_system#Aspects_of_file_systems
[18]: https://en.wikipedia.org/wiki/Process_(computing)
[19]: https://en.wikipedia.org/wiki/Device_file
[20]: https://en.wikipedia.org/wiki/Command-line_interface#Command-line_interpreter
[21]: https://en.wikipedia.org/wiki/Pipeline_(Unix)
[22]: https://en.wikipedia.org/wiki/Brian_Kernighan
[23]: https://en.wikipedia.org/wiki/B_(programming_language)
[24]: https://en.wikipedia.org/wiki/C_(programming_language)
[25]: https://www.bell-labs.com/usr/dmr/www/btut.pdf
[26]: https://opensource.com/sites/default/files/uploads/640px-unix_history-simple.svg_.png (A diagram showing the key Unix and Unix-like operating systems)
[27]: https://commons.wikimedia.org/w/index.php?curid=1801948
[28]: http://cs-fundamentals.com/c-programming/history-of-c-programming-language.php
[29]: https://www.toptal.com/c/after-all-these-years-the-world-is-still-powered-by-c-programming
[30]: https://www.nsf.gov/od/nms/medal.jsp
[31]: https://opensource.com/sites/default/files/uploads/medal.jpeg (Ritchie and Thompson receiving the National Medal of Technology from President Clinton, 1998)

View File

@ -1,85 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Sharing vs. free vs. public: The real definition of open source)
[#]: via: (https://opensource.com/article/19/10/shareware-vs-open-source)
[#]: author: (Jeffrey Robert Kaufman https://opensource.com/users/jkaufman)
Sharing vs. free vs. public: The real definition of open source
======
If you think open source is synonymous with shareware, freeware, and
public domain, you are not alone.
![Person in a field of dandelions][1]
When you hear the term open source, do you think it is synonymous with terms such as shareware, freeware, or public domain? If so, you are not alone. Many people, both within and outside the technology industry, think of these terms as one and the same. This article illustrates how these terms are different and how open source is a transformative licensing and development model. Perhaps the best way to explore the differences is to share my experience with software provided under one of the above models.
### Shareware and freeware
My early years as a computer programmer started when I began to code in BASIC on my Apple II Plus in 1982. I recall going to the local computer store in my hometown and finding floppy diskettes in plastic bags containing software games and utilities for what seemed to be extraordinarily high prices. Keep in mind, this was from the perspective of a middle-schooler.
There was, however, some software that was available for free or at a minimal price; this was referred to as shareware or freeware, depending on the exact licensing model. Under the shareware model, you could use the software for only a certain amount of time and/or, if you found it useful, you were asked to send a check to the author of that software.
Some shareware software, however, actually encouraged you to also make a copy and give it to your friends. This model is often referred to as freeware. That said, the exact definitions and differences between shareware and freeware are a bit soft, so it's collectively easiest to refer to both simply as "shareware." I cannot say for certain, but I doubt I ever provided money to any of the software authors for using their shareware, mainly because I had no money as an early teenager, but I sure enjoyed using these software programs and learned a lot about computers along the way.
In retrospect, I realize now that I could have learned and accomplished so much more in my growth as a budding programmer if the software had been provided under open source license terms instead of shareware terms. This is because the source code (i.e., the human-readable form of software) is almost never provided with shareware. Shareware also contains licensing restrictions that prohibit the recipient from attempting to reveal the source code. Without access to the source code, it is extraordinarily difficult to learn how the software actually works, making it very difficult to expand or change its functionality. This leaves the end user completely dependent on the original shareware author for any changes or improvements.
With the shareware model, it is practically impossible to enable any community of developers to leverage and further innovate around the code. There can also be further restrictions on redistribution and commercial usage. Although the shareware may be free in terms of price (at least initially), _it is not free in terms of freedom_ and does not allow you to learn and innovate by exploring the inner workings of the code.
Which leads me to the big question: _How is this different from open source software?_
### The basics of open source licensing
First, we need to understand that "open source" refers to a _licensing_ and a _software development model_ that are both significantly different from shareware. Under one form of open source called non-copyleft open source licensing, the user is provided key freedoms such as no restrictions on accessing source code; selling, using, or giving away the software for any purpose; or modifying the software.
This form of license also does not require payment of any fee or royalty for use. One amazing outcome of this licensing model is its unique ability to enable countless software developers to collaborate on new and useful changes and innovations to the code because the license is highly permissive, requiring no negotiations for use. Although the source code is technically not required to be provided under such a license, it is almost always available for everyone to view, learn from, modify, and distribute to others.
Another aspect of non-copyleft open source licensing is that any recipient of such software may add additional license restrictions. This means that the initial author that licensed the code under this form of license has no assurances that the recipient may not further license to others under more restrictive terms. For example:
> _Let us assume an author, Noah, wrote some software and distributed it under a non-copyleft open source license to a recipient, Aviva. Aviva then modifies and improves Noah's software, which she is entitled to do under the non-copyleft open source license terms. Aviva could then decide to add further restrictions to any recipients of her software that could limit its use, such as where or how it may be used (e.g., Aviva could add in a restriction that the software may only be used within the geographical boundaries of California and never in any nuclear power plant). Aviva could also opt to never release the modified source code to others even though she had access to the source code._
Sadly, there are countless proprietary software companies that use non-copyleft open source licensed software in the way described immediately above. In fact, a shareware program could use non-copyleft open source licensed software by adding shareware-type restrictions (e.g., no access to source code or excluding commercial use) thereby converting non-copyleft open source licensed code to a shareware licensing model.
Fortunately, many proprietary companies using non-copyleft open source licensed software see the benefits of releasing source code. These organizations often continue to perpetuate the open source model by providing their modified source code to their recipients or the broader open source community via software repositories like GitHub to enable a virtuous cycle of innovation. This isn't entirely out of benevolence (or at least it normally isn't): These companies want to encourage community innovation and further enhancements, which can benefit them further.
At the same time, many proprietary companies do not opt to do this, which is well within the terms of non-copyleft open source licenses.
### Copyleft-licensed open source software
In 1989, a new open source license named the GNU General Public License, commonly known as the GPL, was developed with the objective of ensuring that software should be inherently free (as in free speech) and that these freedoms must always persist, unlike what sometimes happens with non-copyleft open source licensed software. In a unique application of copyright law, the GPL uses copyright law to ensure perpetual software freedoms, so long as the rules are followed (more on that later). This unique use of copyright is called copy**left**.
Like non-copyleft open source software, this license allows recipients to use the software without restriction, examine the source code, change the software, and make further distributions of the original or modified software to other recipients. _Unlike_ a non-copyleft open source license, the copyleft open source license absolutely requires that any recipients are also provided these same freedoms. They can never be taken away unless the rules are not followed.
What makes the copyleft open source license enforceable and an incentive for compliance is the application of copyright law. If one of the recipients of copyleft code does not comply with the license terms (e.g., by adding any additional restrictions on the use of the software or not providing the source code), then their license terminates, and they become a copyright infringer because they no longer have legal permission to use the software. In this way, the software freedoms are ensured for any downstream recipients of that copyleft software.
### Beyond the basics: Other software license models
I mentioned public domain earlier—while it's commonly conflated with open source, this model is a bit different. Public domain means that steps have been taken to see that there are no applicable copyright rights associated with the software, which most often happens when the software copyright expires or is disclaimed by the author. (In many countries, the mechanism to disclaim copyright is unclear, which is why some public domain software may provide an option to obtain an open source-type license as a fallback.) No license is required to use public domain software; whether this makes it "open source" or not is the subject of much debate, though many would consider public domain a form of open source if the source code were made available.
Interestingly, there are a significant number of open source projects that make use of small modules of public domain software for certain functions. There are even entire programs that claim to be in the public domain, such as SQLite, which implements a SQL database engine and is used in many applications and devices. It is also common to see software with no license terms.
Many people incorrectly assume that such unlicensed software is open source, in the public domain, or otherwise free to use without restriction. In most countries, including the United States, copyright in software exists when it is created. This means that it cannot be used without permission in the form of a license, unless the copyright is somehow disclaimed, rendering it in the public domain. Some exceptions exist to this general rule, like the laws of implied licenses or fair use, but these are quite complex in how they may apply to a specific situation. I do not recommend providing software with no license terms when the intention is for it to be under open source license terms as this leads to confusion and potential misuse.
### Benefits of open source software
As I said previously, open source enables an efficient software development model with enormous ability to drive innovation. But what does this really mean?
One of the benefits of the open source licensing model is a significant reduction in the friction around innovation, especially innovation done by other users beyond the original creator. This friction is limited because using open source code generally does not require the negotiation of license terms, thereby greatly simplifying and lowering any cost burden for use. In turn, this creates a type of open source ecosystem that encourages rapid modification and combination of existing technologies to form something new. These changes are often provided back into this open source ecosystem, creating a cycle of innovation.
Innumerable software programs, running everything from your toaster to Mars-bound spacecraft, are the direct result of this frictionless ability to combine programs… all enabled by the open source development model.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/shareware-vs-open-source
Author: [Jeffrey Robert Kaufman][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/jkaufman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_dandelion_520x292.png?itok=-xhFQvUj (Person in a field of dandelions)

View File

@@ -1,112 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why to choose Rust as your next programming language)
[#]: via: (https://opensource.com/article/19/10/choose-rust-programming-language)
[#]: author: (Ryan Levick https://opensource.com/users/ryanlevick)
Why to choose Rust as your next programming language
======
Selecting a programming language can be complicated, but some
enterprises are finding that switching to Rust is a relatively easy
decision.
![Programming books on a shelf][1]
Choosing a programming language for a project is often a complicated decision, particularly when it involves switching from one language to another. For many programmers, it is not only a technical exercise but also a deeply emotional one. The lack of known or measurable criteria for picking a language often means the choice devolves into a series of emotional appeals.
I've been involved in many discussions about choosing a programming language, and they usually conclude in one of two ways: either the decision is made using measurable, yet unimportant criteria while ignoring relevant, yet hard to measure criteria; or it is made using anecdotes and emotional appeals.
There has been one language selection process that I've been a part of that has gone—at least so far—rather smoothly: the growing [consideration inside Microsoft][2] for using [Rust][3].
This article will explore several issues related to choosing a programming language in general and Rust in particular. They are: What are the criteria usually used for selecting a programming language, especially in large businesses, and why does this process rarely end successfully? Why has the consideration of Rust in Microsoft gone smoothly so far, and are there some general best practices that can be gleaned from it?
### Criteria for choosing a language
There are many criteria for deciding whether to switch to a new programming language. In general, the criteria that are most easily measured are the ones that are most often talked about, even if they are less important than other, more difficult-to-measure criteria.
#### Technical criteria
The first group of criteria are the technical considerations; they are often the first that come to mind because they are the easiest to measure.
Interestingly, the technical costs (e.g., build system integration, monitoring, tooling, support libraries, and more) are often easier to measure than the technical benefits. This is especially detrimental to the adoption of new programming languages, as the downsides of adoption are often the clearest part of the picture.
While some technical benefits (like performance) can be measured relatively easily, others are much harder to measure. For example, what are the relative merits of a dynamic type system (like Python's) compared to a relatively verbose and feature-poor one (like Java's), and how does the comparison change against more strongly typed systems like Scala's or Haskell's? Many people have strong gut feelings that such technical differences should be taken very seriously in language considerations, but there are no good ways to measure them.
A side effect of the discrepancy in measurement ease is that the easiest-to-measure items are often given the most weight in the decision-making process even if that would not be the case with perfect information. This not only throws off the cost/benefit analysis but also the process of assigning importance to different costs and benefits.
#### Organizational criteria
Organizational criteria, which are the second consideration, include:
* How easy will it be to hire developers in this language?
* How easy is it to enforce programming standards?
* How quickly, on average, will developers be able to deliver software?
Costs and benefits of organizational criteria are hard to measure, so people usually rely on vague, "gut feeling" answers to them, which create strong opinions on the matter. For example, it might be obvious to most that TypeScript allows programmers to deliver functioning, relatively bug-free software to customers more quickly than C does, but where is the data to back this up?
Moreover, it's often extremely difficult to assign importance weights to these criteria. It's easy to see that Go enforces standardized coding practices more easily than Scala (due to the wide use of gofmt), but it is extremely difficult to measure the concrete benefits to a company from standardizing codebases.
These criteria are still extremely important but, because of the difficulty in measuring them, they are often either ignored or reasoned about through anecdotes.
#### Emotional criteria
Third are the emotional criteria, which tend to be overlooked if not outright dismissed.
Software development has traditionally tried to emulate more rigorous "engineering" practices, where technical considerations are generally the most important. Some would argue that programming languages are "just tools" and should be measured only against technical criteria. Others would argue that programming languages assist the programmer in some of the more artistic aspects of the job. Either way, these criteria are extremely difficult to measure in any meaningful way.
In general, this comes down to how happy (and thus productive) programmers feel using the language. Such considerations can have a real impact on programmers, but how this translates into benefits for an entire team is next to impossible to measure.
Because these criteria are so difficult to quantify, they are often ignored. But does this mean that emotional considerations of programming languages have no significant impact on programmers or programming organizations?
#### Unknown criteria
Finally, there's a set of criteria that are often overlooked because a new programming language is usually judged by the criteria set by the language currently in use. New languages may have capabilities that have no equivalent in other languages, so many people will not be familiar with them. Having no exposure to those capabilities may mean the evaluator unknowingly ignores or downplays them.
These criteria can be technical (e.g., the merits of Kotlin data classes over Java constructs), organizational (e.g., how helpful Elm error messages are for teaching those new to the language), or emotional (e.g., the way Ruby makes the programmer feel when writing it).
Because these aspects are hard to measure, and someone completely unfamiliar with them has no existing framework for judging them based on experience, intuition, or anecdote, they are often undervalued versus more well-understood criteria—if not outright ignored.
### Why Rust?
This brings us back to the growing excitement for Rust in Microsoft. I believe the discussions around Rust adoption have gone relatively smoothly so far because Rust offers an extremely clear and compelling advantage—not only over the language it seeks to replace (C++)—but also over any other language practically available to industry: great performance, a high level of control, and memory safety.
Microsoft's decision to investigate Rust (and other languages) began with the finding that roughly [70% of Common Vulnerabilities and Exposures][4] (CVEs) in Microsoft products were related to memory safety issues in C and C++. When it was discovered that most of the affected codebases could not be effectively rewritten in C# because of performance concerns, the search began. Rust was viewed as the only viable candidate to replace C++. It was similar enough that not everything had to be reworked, and it had a differentiator that made it measurably better than the current alternative: the ability to eliminate nearly 70% of Microsoft's most serious security vulnerabilities.
There are other reasons beyond memory safety, performance, and control that make Rust appealing (e.g., strong type safety guarantees and being a famously well-loved language), but as expected, they were hard to talk about because they were hard to measure. In general, most people involved in the selection process were more interested in verifying that these other aspects of the language weren't perceivably worse than C++ but, because measuring these aspects was so difficult, they weren't considered active reasons to adopt the language.
However, the Microsoft teams that had already adopted Rust, like for the [IoT Edge Security Daemon][5], touted other aspects of the language (particularly "correctness" due to the advanced type system) as the reasons they were most keen on investing more in the language. These teams couldn't provide reliable measurements for these criteria, but they had clearly developed an intuition that this aspect of the language was extremely important.
With Rust at Microsoft, the main criterion being judged happened to be an easily measurable one. But what happens when an organization's most important issues are hard to measure? Those issues are no less important just because they are currently difficult to measure.
### What now?
Having clearly measurable criteria is important when adopting a new programming language, but this does not mean that hard-to-measure criteria aren't real and shouldn't be taken seriously. We simply lack the tools to evaluate new languages holistically.
There has been some research into this question, but it has not yet produced anything that has been widely adopted by industry. While the case for Rust was relatively clear inside Microsoft, this doesn't mean new languages should be adopted only where there is one clear, technical reason to do so. We should become better at evaluating more aspects of programming languages beyond just the traditional ones (such as performance).
The path to Rust adoption is just beginning at Microsoft, and having just one reason to justify investment in Rust is definitely not ideal. While we're beginning to form collective, anecdotal evidence to justify Rust adoption further, there is definitely a need to quantify this understanding better and be able to talk about it in more objective terms.
We're still not quite sure how to do this, but stay tuned for more as we go down this path.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/choose-rust-programming-language
Author: [Ryan Levick][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/ryanlevick
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2 (Programming books on a shelf)
[2]: https://msrc-blog.microsoft.com/tag/rust
[3]: https://www.rust-lang.org/
[4]: https://github.com/microsoft/MSRC-Security-Research/blob/master/presentations/2019_02_BlueHatIL/2019_01%20-%20BlueHatIL%20-%20Trends%2C%20challenge%2C%20and%20shifts%20in%20software%20vulnerability%20mitigation.pdf
[5]: https://msrc-blog.microsoft.com/2019/09/30/building-the-azure-iot-edge-security-daemon-in-rust/

View File

@@ -1,65 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Everything you need to know about Grace Hopper in six books)
[#]: via: (https://opensource.com/article/19/10/grace-hopper-books)
[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
Everything you need to know about Grace Hopper in six books
======
A reading list for people of all ages about the legendary Queen of Code.
![Book list, favorites][1]
Grace Hopper is one of those iconic figures that really needs no introduction. During her long career in the United States Navy, she was a key figure in the early days of modern computing. If you have been involved in open source or technology in general, chances are you have already heard several anecdotes about Grace Hopper. The story of finding [the first computer bug][2], perhaps? Or maybe you have heard some of her nicknames: Queen of Code, Amazing Grace, or Grandma COBOL?
While computing has certainly changed from the days of punch cards, Grace Hopper's legacy lives on. She was posthumously awarded a Presidential Medal of Freedom, the Navy named a warship after her, and the [Grace Hopper Celebration][3] is an annual conference with an emphasis on topics that are relevant to women in computing. Suffice it to say, Grace Hopper's name is going to live on for a very long time.
Grace Hopper had a career anyone would be proud of, and she accomplished many great things. As with many accomplished historical figures, the anecdotes about her contributions sometimes drift toward the realm of tall tales, which does Grace Hopper a disservice. Her real accomplishments are already legendary, and there is no reason to try to turn her into the computer science version of [John Henry][4] or [Paul Bunyan][5].
To that end, here are six books that explore the life and legacy of Grace Hopper. No tall tales, just story after story of Grace Hopper, a woman who changed the world.
## _Broad Band: The Untold Story of the Women Who Made the Internet_
### by Claire L. Evans
![Broad Band book cover][7]
In [_Broad Band: The Untold Story of the Women Who Made the Internet_][8], Claire L. Evans explores the lives of several women whose contributions to technology helped to shape the internet. Starting with Ada Lovelace and moving towards modern times with Grace Hopper and others, Evans weaves an interesting narrative that highlights the roles various women played in early computing. While only part of the book focuses on Grace Hopper, the overarching narrative of Evans's work does an excellent job of showcasing Hopper's place in computing history.
## _Grace Hopper: Admiral of the Cyber Sea_
### by Kathleen Broome Williams
![Grace Hopper: Admiral of the Cyber Sea cover][10]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/grace-hopper-books
Author: [Joshua Allen Holm][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/reading_book_stars_list.png?itok=Iwa1oBOl (Book list, favorites)
[2]: https://www.computerhistory.org/tdih/september/9/
[3]: https://ghc.anitab.org/
[4]: https://en.wikipedia.org/wiki/John_Henry_(folklore)
[5]: https://en.wikipedia.org/wiki/Paul_Bunyan
[6]: https://opensource.com/file/453331
[7]: https://opensource.com/sites/default/files/uploads/broad_band_150.jpg (Broad Band book cover)
[8]: https://clairelevans.com/
[9]: https://opensource.com/file/453336
[10]: https://opensource.com/sites/default/files/uploads/grace_hopper_admiral_of_the_cyber_sea_150.jpg (Grace Hopper: Admiral of the Cyber Sea cover)

View File

@@ -1,117 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Measuring the business value of open source communities)
[#]: via: (https://opensource.com/article/19/10/measuring-business-value-open-source)
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)
Measuring the business value of open source communities
======
Corporate constituencies are interested in finding out the business
value of open source communities. Find out how to answer key questions
with the right metrics.
![Lots of people in a crowd.][1]
In _[Measuring the health of open source communities][2]_, I covered some of the key questions and metrics that we've explored as part of the [CHAOSS project][3] as they relate to project founders, maintainers, and contributors. In this article, we focus on open source corporate constituents (such as open source program offices, business risk and legal teams, human resources, and others) and end users.
Where the bulk of the metrics for core project teams are quantitative, for the remaining constituents our metrics must reflect a much broader range of interests and address many more qualitative measures. From a collection standpoint, much of the data for qualitative measures must be gathered manually and is subjective, but it is nonetheless within the scope that CHAOSS hopes to address as the project matures.
While people on the business side of things do sometimes care about the metrics in use by the project itself, there are only two fundamental questions that corporate constituencies have. The first is about _value_: "Will this choice help our business make more money sooner?" The second is about _risk_: "Will this choice hurt our business's chances of making money?"
Those questions can come in many different iterations across disciplines, from human resources to legal counsel and executive offices. But, at the end of the day, having answers that are based on data can make open source engagement more efficient, effective, and less risky.
Once again, the information below is structured in a Goal-Question-Metric format:
* Open source program offices (OSPOs)
* As an OSPO leader, I care about prioritizing our resources toward healthy communities:
* How [active][4] is the community?
**Metric:** [Code development][5] \- The number of commits and pull requests, review time for new code commits and pull requests, code reviews and merges, the number of accepted vs. rejected pull requests, and the frequency of new version releases.
**Metric:** [Issue resolution][6] \- The number of new issues, closed issues, the ratio of new vs. closed issues, and the average open time per issue. (A small computation sketch follows this list.)
**Metric:** Social - Social media mention counts, social media sentiment analysis, the activity of community blog, and news releases (_future release_).
* What is the [value][7] of our contributions to the project? (This is an area in active development.)
**Metric:** Time value - Time saved for training developers on new technologies, and time saved maintaining custom development once the improvements are upstreamed.
**Metric:** Dollar value - How much would it have cost to maintain changes and custom solutions internally, versus contributing upstream and ensuring compatibility with future community releases?
* What is the value of contributions to the project by other contributors and organizations?
**Metric:** Time value - Time to market, new community-developed features released, and support for the project by the community versus the company.
**Metric:** Dollar value - How much would it cost to internally rebuild the features provided by the community, and what is the opportunity cost of lagging behind innovations in open source projects?
* Downstream value: How many other projects list our project as a dependency?
**Metric:** The value of the ecosystem that is around a project.
* How many forks of our project have there been?
**Metric:** Are core developers more active in the mainline or a fork?
**Metric:** Are the forks contributing back to the mainline, or developing in new directions?
* Engineering leadership
* As an approving architect, I care most about good design patterns that introduce a minimum of technical debt.
**Metric:** [Test Coverage][8] \- What percentage of the code is tested?
**Metric:** What is the percentage of code undergoing code reviews?
**Metric:** Does the project follow [Core Infrastructure Initiative (CII) Best Practices][9]?
* As an engineering executive, I care most about minimizing time-to-market and bugs, and maximizing platform stability and reliability.
**Metric:** The defect resolution velocity.
**Metric:** The defect density.
**Metric:** The feature development velocity.
* I also want social proofs that give me a level of comfort.
**Metric:** Sentiment analysis of social media related to the project.
**Metric:** Count of white papers.
**Metric:** Code Stability - Project version numbers and the frequency of new releases.
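To make the issue-resolution metrics above concrete, here is a minimal sketch of how they might be computed from exported issue records. The record fields here are hypothetical; this is not a CHAOSS, Augur, or GrimoireLab API.
```
from datetime import datetime

# Hypothetical issue export: ISO-format timestamps; "closed" is None
# for issues that are still open.
issues = [
    {"opened": "2019-10-01", "closed": "2019-10-04"},
    {"opened": "2019-10-02", "closed": None},
    {"opened": "2019-10-03", "closed": "2019-10-05"},
]

def days_between(start, end):
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

closed = [i for i in issues if i["closed"]]
new_vs_closed = len(issues) / len(closed)  # ratio of new to closed issues
avg_open_days = sum(days_between(i["opened"], i["closed"]) for i in closed) / len(closed)

print(f"new/closed ratio: {new_vs_closed:.2f}, average days open: {avg_open_days:.1f}")
```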
There is also the issue of legal counsel. This goal statement is: "As legal counsel, I care most about minimizing our company's chances of getting sued." The question is: "What kind of license does the software have, and what obligations do we have under the license?"
The metrics involved here are listed below, followed by a rough scanning sketch:
* **Metric:** [License Count][10] \- How many different licenses are declared in a given project?
* **Metric:** [License Declaration][11] \- What kinds of licenses are declared in a given project?
* **Metric:** [License Coverage][12] \- How much of a given codebase is covered by the declared license?
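The sketch below gathers rough versions of these license metrics by scanning a source tree for SPDX identifiers. Real license scanners are far more thorough; treat this as an illustration only.
```
import re
from pathlib import Path

SPDX = re.compile(r"SPDX-License-Identifier:\s*(\S+)")

def license_metrics(root="."):
    """Count declared licenses and the share of files declaring one."""
    files = list(Path(root).rglob("*.py"))
    declared = {}
    for path in files:
        match = SPDX.search(path.read_text(errors="ignore"))
        if match:
            declared[str(path)] = match.group(1)
    kinds = set(declared.values())                           # license declaration
    coverage = len(declared) / len(files) if files else 0.0  # license coverage
    return len(kinds), kinds, coverage                       # license count, kinds, coverage

count, kinds, coverage = license_metrics()
print(f"{count} license(s) declared {kinds}, coverage {coverage:.0%}")
```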
Lastly, there are further goals our project is considering to measure the impact of corporate open source policy as it relates to talent acquisition and retention. The goal for human resource managers is: "As an HR manager, I want to attract and retain the best talent I can." The questions and metrics are as follows:
* What impact do our open source policies have on talent acquisition?
**Metric:** Talent acquisition - Measure over time how many candidates report that it's important to them that they get to work with open source technologies.
* What impact do our open source policies have on talent retention?
**Metric:** Talent retention - Measure how much employee churn can be reduced because of people being able to work with or use open source technologies.
* What is the impact on training employees who can learn from engaging in open source projects?
**Metric:** Talent development - Measure over time the importance to employees of being able to use open source tech effectively.
* How does allowing employees to work in a community outside of the company impact job satisfaction?
**Metric:** Talent satisfaction - Measure over time the importance to employees of being able to contribute to open source tech.
**Source:** Internal surveys.
**Source:** Exit interviews. Did our policies around open source technologies at all influence your decision to leave?
### Wrapping up
It is still the early days of building a platform for bringing together these disparate data sources. The CHAOSS core of [Augur][13] and [GrimoireLab][14] currently supports over two dozen sources, and I'm excited to see what lies ahead for this project.
As the CHAOSS frameworks mature, I'm optimistic that teams and projects that implement these types of measurement will be able to make better real-world decisions that result in healthier and more productive software development lifecycles.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/measuring-business-value-open-source
Author: [Jon Lawrence][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/the3rdlaw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community_1.png?itok=rT7EdN2m (Lots of people in a crowd.)
[2]: https://opensource.com/article/19/8/measure-project
[3]: https://github.com/chaoss/
[4]: https://github.com/chaoss/wg-evolution/blob/master/focus_areas/community_growth.md
[5]: https://github.com/chaoss/wg-evolution#metrics
[6]: https://github.com/chaoss/wg-evolution/blob/master/focus_areas/issue_resolution.md
[7]: https://github.com/chaoss/wg-value
[8]: https://chaoss.community/metric-test-coverage/
[9]: https://github.com/coreinfrastructure/best-practices-badge
[10]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Count.md
[11]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Declared.md
[12]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Coverage.md
[13]: https://github.com/chaoss/augur
[14]: https://github.com/chaoss/grimoirelab

View File

@@ -1,168 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 reasons why I love Python)
[#]: via: (https://opensource.com/article/19/10/why-love-python)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
5 reasons why I love Python
======
These are a few of my favorite things about Python.
![Snake charmer cartoon with a yellow snake and a blue snake][1]
I have been using Python since it was a little-known language in 1998. It was a time when [Perl was quite popular][2] in the open source world, but I believed in Python from the moment I found it. My parents like to remind me that I used to say things like, "Python is going to be a big deal" and "I'll be able to find a job using it one day." It took a while, but my predictions came true.
There is so much to love about the language. Here are my top 5 reasons why I continue to love Python so much (in reverse order, to build anticipation).
### 5\. Python reads like executable pseudocode
Pseudocode is the practice of writing out programming logic without following the exact syntax and grammar of a specific language. I have stopped writing much pseudocode since becoming a Python programmer because its actual design meets my needs.
Python can be easy to read even if you don't know the language well, and that is very much by design. It is reasonably famous for requiring whitespace for code to be able to run. Whitespace is necessary for any language: it allows us to see each of the words in this sentence as distinct. Most languages have suggestions or "best practices" around whitespace usage, but Python takes a bold step by requiring standardization. For me, that makes it incredibly straightforward to read through code and see exactly what it's doing.
For example, here is an implementation of the classic [bubble sort algorithm][3].
```
def bubble_sort(things):
    needs_pass = True
    while needs_pass:
        needs_pass = False
        for idx in range(1, len(things)):
            if things[idx - 1] > things[idx]:
                things[idx - 1], things[idx] = things[idx], things[idx - 1]
                needs_pass = True
```
Now let's compare that with [this implementation][4] in Java.
```
public static int[] bubblesort(int[] numbers) {
    boolean swapped = true;
    for(int i = numbers.length - 1; i > 0 && swapped; i--) {
        swapped = false;
        for (int j = 0; j < i; j++) {
            if (numbers[j] > numbers[j+1]) {
                int temp = numbers[j];
                numbers[j] = numbers[j+1];
                numbers[j+1] = temp;
                swapped = true;
            }
        }
    }
    return numbers;
}
```
I appreciate that Python requires indentation to indicate nesting of blocks. While our Java example also uses indentation quite nicely, it is not required: the curly brackets determine the beginning and end of each block, not the spacing. Since Python uses whitespace as syntax, there is no need for opening **{** and closing **}** notation throughout the code.
Python also avoids the need for semicolons, the [syntactic][5] ceremony other languages require to terminate statements. Python is much easier on my eyes, and it feels so close to pseudocode that it sometimes surprises me what is runnable!
### 4\. Python has powerful primitives
In programming language design, a primitive is the simplest available element. The fact that Python is easy to read does _not_ mean it is not a powerful language, and that power stems from its primitives. My favorite example of what makes Python both easy to use and advanced is its concept of **generators**.
Imagine you have a simple binary tree structure with `value`, `left`, and `right`. You want to easily iterate over it in order. Usually you are looking for "small" elements, in order to exit as soon as the right value is found. That sounds simple so far. However, there are many kinds of algorithms for deciding whether an element is the right one.
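For concreteness, here is one minimal way such a tree might be modeled; the class and field names are my own illustration, chosen to match the generator below:
```
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tree:
    """A binary tree node with a value and optional children."""
    value: int
    left: Optional["Tree"] = None
    right: Optional["Tree"] = None
```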
Other languages would have you write a **visitor**, where you invert control by putting your "is this the right element?" in a function and call it via function pointers. You _can_ do this in Python. But you don't have to.
```
def in_order(tree):
    if tree is None:
        return
    yield from in_order(tree.left)
    yield tree.value
    yield from in_order(tree.right)
```
This _generator function_ will return an iterator that, if used in a **for** loop, will only execute as much as needed but no more. That's powerful.
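For example, assuming the small `Tree` class sketched above, a search can stop traversing the moment it finds a match:
```
tree = Tree(4, Tree(2, Tree(1), Tree(3)), Tree(6, Tree(5), Tree(7)))

# Values are yielded lazily in sorted order: 1, 2, 3, 4, 5, ...
for value in in_order(tree):
    if value > 4:
        print(f"found {value}")  # found 5
        break  # the generator never visits the remaining nodes
```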
### 3\. The Python standard library
Python has a great standard library with many hidden gems I did not know about until I took the time to [walk through the list of all available][6] functions, constants, types, and much more. One of my personal favorites is the `itertools` module, which is listed under the functional programming modules (yes, [Python supports functional programming][7]!).
It is great for playing jokes on your tech interviewer, for example with this nifty little solution to the classic [FizzBuzz interview question][8]:
```
import itertools
import operator

# 'Fizz' appears every third position and 'Buzz' every fifth;
# operator.add concatenates the two label streams element by element.
fizz = itertools.cycle(itertools.chain(['Fizz'], itertools.repeat('', 2)))
buzz = itertools.cycle(itertools.chain(['Buzz'], itertools.repeat('', 4)))
fizz_buzz = map(operator.add, fizz, buzz)
numbers = itertools.islice(itertools.count(), 100)  # 0 through 99
combo = zip(fizz_buzz, numbers)
for fzbz, n in combo:
    print(fzbz or n)  # print the label if non-empty, else the number
```
A quick web search will show that this is not the most straightforward way to solve FizzBuzz, but it sure is fun!
Beyond jokes, the `itertools`, `heapq`, and `functools` modules are a trove of treasures that come by default with your Python installation.
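Here is a small taste; both calls below are standard-library functions, though the examples themselves are mine:
```
import heapq
from functools import lru_cache

# heapq.nsmallest finds the k smallest items without fully sorting.
print(heapq.nsmallest(3, [9, 4, 7, 1, 8, 2]))  # [1, 2, 4]

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Memoized Fibonacci: each value is computed only once."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # returns instantly thanks to the cache
```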
### 2\. The Python ecosystem is massive
For everything that is not in the standard library, there is an enormous ecosystem to support the new Pythonista, from exciting packages to text editor plugins specifically for the language. With around 200,000 projects hosted on PyPI (at the time of writing) and growing, there is something for everyone: [data science][9], [async frameworks][10], [web frameworks][11], or just tools to make [remote automation][12] easier.
### 1\. The Python community is special
The Python community is amazing. It was one of the first to adopt a code of conduct, first for the [Python Software Foundation][13] and then for [PyCon][14]. There is a real commitment to diversity and inclusion: blog posts and conference talks on this theme are frequent, thoughtful, and well-read by Python community members.
While the community is global, there is a lot of great activity in the local community as well. Local Python meet-ups are a great place to meet wonderful people who are smart, experienced, and eager to help. A lot of meet-ups will explicitly have time set aside for experienced people to help newcomers who want to learn a new concept or to get past an issue with their code. My local community took the time to support me as I began my Python journey, and I am privileged to continue to give back to new developers.
Whether you can attend a local community meet-up or you spend time with the [online Python community][15] across IRC, Slack, and Twitter, I am sure you will meet lovely people who want to help you succeed as a developer. 
### Wrapping it up
There is so much to love about Python, and now you know my favorite part is definitely the people.
I have found kind, thoughtful Pythonistas in the community throughout the world, and the amount of community investment provided to those in need is incredibly encouraging. In addition to those I've met, the simple, clean, and powerful Python language gives any developer more than enough to master on the journey toward a career in software development, or simply as a hobbyist enjoying a fun language. If you are interested in learning your first or a new language, consider Python and let me know how I can help.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/why-love-python
Author: [Moshe Zadka][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Snake charmer cartoon with a yellow snake and a blue snake)
[2]: https://opensource.com/article/19/8/command-line-heroes-perl
[3]: https://en.wikipedia.org/wiki/Bubble_sort
[4]: https://en.wikibooks.org/wiki/Algorithm_Implementation/Sorting/Bubble_sort#Java
[5]: https://en.wikipedia.org/wiki/Syntactic_sugar
[6]: https://docs.python.org/3/library/
[7]: https://opensource.com/article/19/10/python-programming-paradigms
[8]: https://en.wikipedia.org/wiki/Fizz_buzz
[9]: https://pypi.org/project/pandas/
[10]: https://pypi.org/project/Twisted/
[11]: https://pypi.org/project/Django/
[12]: https://pypi.org/project/paramiko/
[13]: https://www.python.org/psf/conduct/
[14]: https://us.pycon.org/2019/about/code-of-conduct/
[15]: https://www.python.org/community/

View File

@@ -1,88 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to get started with open source in 2020)
[#]: via: (https://opensource.com/article/20/1/getting-started-open-source)
[#]: author: (Bryant Son https://opensource.com/users/brson)
How to get started with open source in 2020
======
New to open source? Opensource.com's top 10 articles for newcomers will
get you on the right pathway quickly in the new year.
![pipe in a building][1]
When Opensource.com launched in 2010, Red Hat CEO Jim Whitehurst said the site "is one of the ways in which Red Hat gives something back to the open source community." And that community has always included the growing number of people who are new to open source.
In 2019, we published many articles about the open source way of thinking, choosing hardware, the contribution process, and other topics geared toward newbies. If you're new to open source, this list of Opensource.com's top 10 articles from 2019 about getting started with open source should put you on the right path.
### Why I made the switch from Mac to Linux
Have you ever considered trying out Linux but were not sure how to start? You are not alone! Trying something new is usually a little scary and involves a learning curve. In [_Why I made the switch from Mac to Linux_][2], Matthew Broberg shares his story about adopting Linux and how his initial nervousness was transformed into an awesome feeling of accomplishment.
### Getting started with Git: Terminology 101
Although there are many ways to contribute to open source (including writing about it, like I'm doing here), the most notable contributions come from the developers who provide source code to projects. They usually make their source code contributions to GitHub and GitLab repositories using the Git tool. Matthew Broberg's guide to [_Getting started with Git: Terminology 101_][3] explains how to get started with Git so you can make your first commit to your favorite open source project.
### Buying a Linux-ready laptop
Most people who want to try Linux are already familiar with Microsoft Windows or Apple MacOS, and they may know they can install Linux on their existing computer, for example, by using a virtual machine, partitioning their drive to install Linux alongside Windows or Mac, reformatting their drive to erase their old operating system and install Linux, or putting Linux on a second drive. But many may not know they can buy a Linux-by-default laptop from companies like System76, Slimbook, and Tuxedo. In [_Buying a Linux-ready laptop_][4], Ricardo Berlasso shares his experience of ordering, receiving, and using a Linux-ready Tuxedo laptop.
### Getting started with Vim: The basics
Vim is an improved version of the vi text editor. It is available by default on most Linux operating systems and competes with Emacs, another popular Linux text editor. Knowing how to use Vim can give you an edge in creating, modifying, and managing text-based files, whether they are programming files or simple words on a screen. In [_Getting started with Vim: The basics_][5], I walk through how to start learning Vim to simplify your open source journey.
### How to create a pull request in GitHub
A pull request, often shortened to PR, is a Git term that means something is available to be merged into another branch. Pull requests are an essential part of the open source contribution process: to contribute to an open source project, people fork or clone a branch to work on it, and the PR process is how they later merge their work back into the parent branch. [_How to create a pull request in GitHub_][6] by Kedar Vijay Kulkarni will give you a good foundation of knowledge to make your first pull request.
### Bash vs. Python: Which language should you use?
Of the many programming languages out there, Python is definitely one of the hottest, driven mostly by the growth of data science. But for automation engineers, Bash always has been the primary script language to get the job done. So, what can Python do that Bash cannot? What are some Bash tasks that Python can't replace? Learn the differences by reading [_Bash vs. Python: Which language should you use?_][7] by Archit Modi.
### How to use Ansible to document procedures
Ansible is a very popular and powerful infrastructure-as-code tool. Many enterprises rely on it to automate tasks in their cloud platforms. Among the countless things Ansible can do, one of the least obvious is the one Marco Bravo explains: [_How to use Ansible to document procedures_][8].
### A dozen ways to learn Python
Learning a programming language is always a daunting task. But Python has a number of features that make the learning process easy. Contributor Don Watkins offers [_A dozen ways to learn Python_][9] to take some of the stress out of the journey from getting started to becoming a Python expert.
### What's the best Linux distribution for beginners?
Everyone has probably heard of Android, the most widely used Linux-based mobile operating system. And many have heard of Red Hat Enterprise Linux (RHEL) and Ubuntu. Picking a Linux operating system can be difficult, but Lauren Pritchett's poll [_What's the best Linux distribution for beginners?_][10] might help you pick the right one based on the community's input. By the way, while you're there, make sure to vote for your favorite Linux distro.
### My first contribution to open source: Impostor Syndrome
Contributing to open source can change your life in positive ways, but you can't ignore the technical challenges around it. Have you ever heard someone say, "Hey, starting with open source is a piece of cake. Everyone can do it!"? Probably not, since it's not exactly true. Galen Corey shares some of the challenges of a first contribution in [_My first contribution to open source: Impostor Syndrome_][11].
### What else do you need to get started?
There are a lot of topics related to getting started with open source, which also means there are a lot of opportunities for Opensource.com to give back to new users by publishing articles to help them. Do you have ideas for other articles we should cover in 2020? Please share your suggestions in the comments, or even consider [writing an article][12] about your own open source journey.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/getting-started-open-source
Author: [Bryant Son][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_pipe_red_hat_tower_building.png?itok=8ho3yi7L (pipe in a building)
[2]: https://opensource.com/article/19/10/why-switch-mac-linux
[3]: https://opensource.com/article/19/2/git-terminology
[4]: https://opensource.com/article/19/7/linux-laptop
[5]: https://opensource.com/article/19/3/getting-started-vim
[6]: https://opensource.com/article/19/7/create-pull-request-github
[7]: https://opensource.com/article/19/4/bash-vs-python
[8]: https://opensource.com/article/19/4/ansible-procedures
[9]: https://opensource.com/article/19/8/dozen-ways-learn-python
[10]: https://opensource.com/article/19/10/linux-distribution-beginners
[11]: https://opensource.com/article/19/11/my-first-open-source-contribution-impostor-syndrome
[12]: https://opensource.com/how-submit-article

View File

@@ -1,100 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevOps is a solution to burnout worth investing in)
[#]: via: (https://opensource.com/article/20/1/devops-burnout-solution)
[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych)
DevOps is a solution to burnout worth investing in
======
Instead of treating burnout once it kicks in, we need to do more to
prevent it in the first place. Here is a reminder of the cause and a
look at solutions.
![A person choosing between several different incentives][1]
Not a day goes by that I don't see a tweet or hear somebody talking about burnout. Burnout is becoming a pervasive part of our lives, especially in tech and open source communities. In [_What you need to know about burnout in open source communities_,][2] I defined burnout and explored its causes, symptoms, and potential treatments. But a better question is about prevention: How do we change the underlying processes, cultures, and tools to prevent burnout from occurring in the first place?
### Burnout's catalyst is unplanned work
Let's start by reviewing what's known about the cause of burnout. In 2019, PagerDuty published [_Unplanned work: The human impact of an always-on world_][3]. More than 35% of the study's respondents were considering leaving their jobs due to stress and work/life balance issues; in other words, burnout. Companies that utilize automation and have documented response plans have fewer unplanned incidents and less-stressed employees.
Modern software organizations use automation and documented response plans to move faster. Moving faster is necessary to stay competitive. We are in this endless cycle where customers expect more, which puts more pressure on companies to deliver more and deliver it faster, which in turn creates pressure on employees.
However, it is possible to move fast while having protections to prevent unplanned work and burnout. The Accelerate State of DevOps Report has been tracking trends in DevOps for six years. It allows teams to benchmark against others in the industry as low, medium, high, or elite performers. One of the findings from the [2019 State of DevOps report][4] was: "Productivity can drive improvements in work/life balance and reductions in burnout, and organizations can make smart investments to support it."
Productivity means more work gets done. Therefore, more value is delivered. The catch to productivity is balance: Don't do more work in the short term at the expense of burning out your people. Processes and tooling need to be in place to prevent people from feeling overworked and overwhelmed.
To support productivity that does not lead to burnout, organizations need to make smart investments in tooling and reduce technical debt. Investing in tooling means purchasing useful and easy-to-use solutions. The preference for building rather than buying may lead productivity to decrease and burnout to emerge. How? Instead of focusing on building features that are competitive differentiators and help the company achieve key business objectives, the developers spend countless hours trying to build something that a vendor could have quickly provided.
As developers spend time building non-core solutions, technical debt accrues, and features are pushed out. Instead of building all the things, buy the tooling that supports the business but is not strategic, and build the things that are core to your business strategy. You wouldn't use development resources to build your own email program, so treat other tooling the same way. Twenty percent of tools used by low-performing teams are developed primarily in-house and proprietary to the organization, compared to 5% to 6% in medium, high, and elite teams.
### Worthwhile solutions to burnout
If you want to prevent burnout, here are some areas to invest in. It's no coincidence they overlap with frequent discussions [in DevOps articles][5].
#### Communication and collaboration
Communication is at the heart of everything we do. [Laurie Barth][6] sums it up nicely: "Over time, I've learned that the biggest source of failure is often due to people and teams. A lack of communication and coordination can cause serious problems." Use tools like videoconferencing, Confluence, and Slack to ensure communication and collaboration happen.
But create rules around the use of these tools. Make sure to turn off Slack notifications during off-hours. I disable my notifications from 6pm to 8am.
Define what type of communication is best for which situations. Slack is useful for real-time, ephemeral communication, but it can lead to people feeling the need to always be on. If they're not online, they may miss an important conversation. If major or minor decisions are made in a Slack thread, document those in a longer-living system of record, giving all team members access to the necessary information.
Trying to debug an incident? Communicate via Slack. Need to write up a post-incident review? Post that to Confluence or a wiki.
Videoconferencing tools like Zoom or BlueJeans help enable remote work. Having the ability to work remotely, part-time or full-time, can reduce the stress of commuting or relocating. Videoconferences make it easy to stay connected with distributed teams because sometimes it is easier to hash things out in a face-to-face conversation than over email or Slack.
These tools should not be used to encourage people to work while on vacation. Time off means time away from work to rest, recover, and recharge.
#### Releases and deploys
According to the 2019 State of DevOps report, elite teams deploy code 208 times more frequently than low performers, and their lead time from committing code to deployment is 106 times faster. It may seem that the more deploys you do, the greater the likelihood of burnout, but that isn't necessarily the case. Teams that utilize continuous delivery have processes in place to deploy safely.
First, separate releases from deploys—just because you deployed code doesn't mean that all users should have access to it. Ring deploys make features available to a small group of people, like internal employees or beta customers. These users can help you identify bugs or provide feedback to fine-tune a feature before releasing it widely.
Next, create feedback loops regarding the quality of a deployment. Things don't always go as planned when deploying your code. You need the ability to rapidly stop when things go wrong. Feedback loops include implementing monitoring and observability tools. By using telemetry data along with kill switches, you can quickly turn off a poorly behaving feature rather than rolling back an entire deployment.
Finally, run A/B tests and experiments to learn what customers respond to. A metrics-based approach provides insight into what works and what doesn't and can help you validate a hypothesis. Instead of creating technical debt with a partial feature, collect data to see if the feature provides the expected conversions early on. If it doesn't, don't release it.
#### Incident resolution
Part of resolving incidents means knowing what to do when something breaks. And constantly putting out fires can lead to burnout. We can't prevent all incidents from happening, but we can be better prepared. Running chaos experiments or game days with tools like Gremlin can help companies prepare for the unexpected.
With chaos experiments, you can learn how your systems, services, and applications respond under specific scenarios. Knowing how things behave when they're broken can shorten incident-resolution times. They can also help you identify and fix vulnerabilities before an incident occurs.
What can you automate to reduce toil during incident resolution? For example, when you're actively working on an incident, can a Slack channel dedicated to the incident be automatically generated? Or can you create [feature flags][7] with a solution like LaunchDarkly (full disclosure: I work there) to perform common tasks during incident resolution? These could include (a rough sketch follows this list):
* Dynamic configuration changes, like automatically adjusting logging levels to collect more information when an alert is triggered
* Load-shedding to disable non-critical elements when systems are under heavy load to ensure essential tasks complete
* Kill switches or circuit breakers to turn off features when they are impacting your service reliability
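To illustrate the kill-switch pattern, here is a hand-rolled sketch; it is not LaunchDarkly's actual API, and the flag names and store are hypothetical:
```
import json

# Hypothetical flag store. In production this would be a flag service
# updated at runtime, not a constant.
FLAGS = {
    "recommendations_enabled": False,  # kill switch: feature currently off
    "log_level": "DEBUG",              # dynamic configuration
}

def flag(name, default=None):
    """Look up a feature flag, falling back to a safe default."""
    return FLAGS.get(name, default)

def render_home_page():
    page = {"body": "core content"}
    # The non-critical feature is wrapped in a flag, so it can be turned
    # off during an incident without rolling back a deployment.
    if flag("recommendations_enabled", False):
        page["recommendations"] = ["..."]
    return json.dumps(page)

print(render_home_page())  # {"body": "core content"}
```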
### It's not magic
There is no magic bullet to resolve burnout; it requires having the right people, processes, and tools. The people help create an environment of psychological safety where people are free to ask questions, experiment, make mistakes, and be creative. Think about what is most important to your organization, and invest in the right tools to support those goals and the people working towards them.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/devops-burnout-solution
Author: [Dawn Parzych][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/dawnparzych
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_incentives.png?itok=IhIL1xyT (A person choosing between several different incentives)
[2]: https://opensource.com/article/19/11/burnout-open-source-communities
[3]: https://www.pagerduty.com/resources/reports/unplanned-work/
[4]: https://services.google.com/fh/files/misc/state-of-devops-2019.pdf
[5]: https://opensource.com/tags/devops
[6]: https://laurieontech.com/
[7]: https://martinfowler.com/articles/feature-toggles.html

View File

@@ -1,52 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Industrial Internet Consortium teams up with blockchain-focused security group)
[#]: via: (https://www.networkworld.com/article/3512062/industrial-internet-consortium-teams-up-with-blockchain-focused-security-group.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Industrial Internet Consortium teams up with blockchain-focused security group
======
A merger between a prominent industrial-IoT umbrella group and a blockchain-centered corporate membership program highlights a new focus on bringing finished IoT solutions to market.
Leo Wolfert / Getty Images
The Industrial Internet Consortium and the Trusted IoT Alliance announced today that they would merge memberships, in an effort to drive more collaborative approaches to [industrial IoT][1] and help create more market-ready products.
The Trusted IoT Alliance will now operate under the aegis of the IIC, a long-standing umbrella group for vendors operating in the IIoT market. The idea is to help create more standardized approaches to common use cases in IIoT, enabling companies to get solutions to market more quickly.
“This consolidation will strengthen the ability of the IIC to provide guidance and advance best practices on the uses of distributed-ledger technology across industries, and boost the commercialization of these products and services,” said 451 Research senior [blockchain][3] and DLT analyst Csilla Zsigri in a statement.
Gartner vice president and analyst Al Velosa said that it's possible the move to team up with TIoTA was driven in part by a new urgency to reach potential customers. Where other players in the IoT marketplace, like the major cloud vendors, have raked in billions of dollars in revenue, the IIoT vendors themselves haven't been as quick to hit their sales targets. “This approach is them trying to explore new vectors for revenue that they haven't before,” Velosa said in an interview.
The IIC, whose founding members include Cisco, IBM, Intel, AT&T and GE, features 19 different working groups, covering everything from IIoT technology itself to security to marketing to strategy. Adding TIoTA's blockchain focus to the mix could help answer questions about security, which are centrally important to the continued success of enterprise and industrial IoT products.
Indeed, research from Gartner released late last year shows that IoT users are already gravitating toward blockchain and other distributed-ledger technologies. Fully three-quarters of IoT technology adopters in the U.S. have either brought that type of technology into their stack already or are planning to do so by the end of 2020. While almost two-thirds of respondents to the survey cited security and trust as the biggest drivers of their embrace of blockchain, almost as many noted that the technology had allowed them to increase business efficiency and lower costs.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3512062/industrial-internet-consortium-teams-up-with-blockchain-focused-security-group.html
Author: [Jon Gold][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3243928/what-is-the-industrial-internet-of-things-essentials-of-iiot.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html
[4]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[5]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@@ -1,116 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Ways NOT to manage your remote team)
[#]: via: (https://opensource.com/article/20/1/ways-not-manage-remote-team)
[#]: author: (Matt Shealy https://opensource.com/users/mshealy)
7 Ways NOT to manage your remote team
======
Learn to let go and let your team shine with these simple steps.
![World locations with red dots with a sun burst background][1]
Building a remote development team presents unique challenges. Trying to build a cross-functional team, full of various personalities, virtually can lead to communication disasters. Fortunately, through planning, smart hiring, training, and communication, project leaders can build and lead virtual development teams successfully.
The demand for remote teams continues to grow. The increased demand for software developers and [new communication technology][2] has removed the barriers of geography. Even with these advancements, disparate team members from different backgrounds may find it challenging to learn how to interact with one another.
It's easy for misunderstandings and miscommunications to occur. It's becoming more and more critical to [rethink collaboration][3] in a work environment growing increasingly more remote. As a result, project leaders must rethink the needs of teams spread out across the globe.
By avoiding a few key pitfalls, your team can overcome these challenges consistently. If you follow a few time-tested, in-house practices, and some that apply specifically to remote teams, you can manage a range of personalities and complete your build successfully.
The following are seven practices to avoid when managing a remote team.
### Don't slack on training
A variety of different perspectives is the top benefit of working with diverse remote team members. However, it's essential to understand and acknowledge those differences during training.
Addressing language differences is only one part of acknowledging team diversity. When recruiting programmers, it is best to focus on candidates with an aptitude for working in a multicultural environment, and even across multiple programming languages.
Also, don't make the mistake of thinking that personal characteristics are unimportant because team members aren't working face-to-face. Team members still have to collaborate, so their personalities must mesh.
Training isn't all technical skills. Emotional training can also help a team work well together remotely. Emotional intelligence [training can help coworkers develop skills like empathy and awareness][4] that can make teams work better together. Emotional distance can make it challenging to establish bonds between new trainees and team leaders from the get-go, which can immediately loosen bonds in what could be a strong remote team. Consider what remote team-building training you can do via video or on Slack. Remember that it is important to be constantly proactive in strengthening relationships throughout the life of your team.
### Don't use an ad hoc communication system
When working with diverse remote team members, it's essential that you use straightforward, effective code management and communication software. Ideally, you want the most uncomplicated resources available.
The process may need to be further simplified for newer team members and freelancers who do not have the time to learn everything about the ins and outs of the organization's policies.
Create standard ways to communicate with team members. Maybe all work discussion happens in Slack or one of its [open source alternatives][5], while teams use [project management software][6] to keep work on schedule.
Having a clear space for each type of conversation will help those who need to focus on work, while also offering an outlet for fun. Team members must use these resources daily, if not hourly. If your solutions are too complicated, learning to navigate the tools will pull focus from design and implementation work.
### Don't lose sight of the big picture
Avoid the pitfalls of focusing too closely on daily goals. It is essential you stay focused on the overall project. You should work with team members to establish your goals and milestones, and make sure team members stay on schedule. As the project leader, it's your job to make sure these key events contribute to deliverable milestones.
### Don't micromanage your team
Some project managers, especially those with a coding background, find it difficult to delegate responsibility. It's natural to gravitate toward solving familiar problems. However, it's your job to guide the ship, not develop solutions.
In [a micromanaged environment][7], the project manager tells the programmers what code to write and exactly how to craft it. However, this management style ultimately leads to employee dissatisfaction.
Your team should feel free to explore solutions, initiate positive change, and use innovation for exciting new ideas.
If you don't give your coders space to innovate and use their creativity, they feel undervalued. If this sentiment persists, your remote staff members are unlikely to produce the best work possible.
### Don't miss the opportunity to promote diversity
If you are going to build a remote team, you must understand and acknowledge that [team members will have different backgrounds][8]. This circumstance is especially beneficial. The diverse viewpoints of staff members will enable your team to offer insights that expand beyond that of a centrally located talent pool.
Also, your diverse remote team will give you access to experience with global trends. Furthermore, your team will be less likely to suffer from the effect of crowd mentality thinking.
With the freedom to innovate, team members will feel more comfortable offering their input. Together, these benefits will enable your team as a whole to build a product better suited for multiple regions and verticals.
### Don't forget to keep an eye on costs
Ballooning costs are a top challenge for development teams. Despite project planning best practices, scope creep is a real problem for even the most experienced teams. There are two underlying factors that need to be addressed if this problem is to be solved.
The first is that the more analysis is done throughout the development process, the more complexity arises and is ultimately added to the system. The second is that people who have been through system development before know there won't ever be a "second phase" of the project. They know that there will only be one shot at the project, so they try to fit everything possible into the initial project phase.
These two self-reinforcing factors lead to a death spiral of problems. Too much analysis leads to system complexity and loads of features being crammed into the first phase. A lack of trust between IT and business teams inevitably forms because of this. The design requirements become too big and complicated for there to be any chance of staying on schedule or on budget. Inevitably, blame lands with the IT team.
The answer to this problem is to restrict analysis to only what the business team needs right now. IT should refrain from speculating on what may be needed in the future or asking business team members what they may need down the line.
These requirements allow IT to build a reliable project plan and overall budget. If your team is looking to outsource the project at least in part, [calculating the app development costs][9] for each individual team or project component can help to keep things on track.
### Don't think of time zone differences as a barrier
Do not view time zone differences as a challenge. You can leverage time zones to your advantage and build your team in a way that will keep the project running around the clock.
What is more important is choosing candidates who work independently. Good remote coders are responsible, organize their time effectively, and communicate well with team members. With an effective communication system, time differences will have no effect on the successful outcome of your team.
Remote team members benefit significantly from predictable and straightforward engagement. The relatively new remote work environment demands that staff members establish a new standard for clear, concise communication. To promote a collaborative environment, teams must establish norms, such as universal jargon and consensus on communication tools.
In an environment that thrives on innovation, predictability is an asset when it comes to teamwork. Everyone is different, and innovation is a sought-after commodity in software development. However, in the remote environment, consistent behavior helps team members communicate effectively.
### Conclusion
Remote work is quickly becoming the new default for software development. Be sure to avoid these pitfalls to set your teams up for success.
Do you have any tips to recommend? Please share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/ways-not-manage-remote-team
作者:[Matt Shealy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mshealy
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_remote_teams.png?itok=Wk1yBFv6 (World locations with red dots with a sun burst background)
[2]: https://www.chamberofcommerce.com/business-advice/strategies-and-tools-for-remote-team-collaboration
[3]: https://hbr.org/2018/02/how-to-collaborate-effectively-if-your-team-is-remote
[4]: https://www.skillsoft.com/content-solutions/business-skills-training/emotional-intelligence-training/
[5]: https://opensource.com/alternatives/slack
[6]: https://opensource.com/business/16/2/top-issue-support-and-bug-tracking-tools
[7]: https://blog.trello.com/how-to-stop-micromanaging-your-remote-team
[8]: https://opensource.com/article/18/10/think-global-communication-challenges
[9]: https://www.appdevelopmentcost.com/#the-definitive-guide-to-understanding-app-development-costs

View File

@ -1,56 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Researchers aim for transistors that compute and store in one component)
[#]: via: (https://www.networkworld.com/article/3510638/researchers-aim-to-build-transistors-that-can-compute-and-store-information-in-one-component.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Researchers aim for transistors that compute and store in one component
======
Materials incompatibilities have stalled efforts to integrate transistors and memory in a single on-chip, commercial component. That might be about to change.
Researchers at Purdue University have made progress towards an elusive goal: building a transistor that can both process and store information. In the future, a single on-chip component could integrate the processing functions of transistors with the storage capabilities of ferroelectric RAM, potentially creating a process-memory combo that enables faster computing and is just atoms thick.
The ability to cram more functions onto a chip, allowing for greater speed and power without increasing the footprint, is a core goal of electronics design. To get where they are today, engineers at Purdue had to overcome incompatibilities between transistors (the switching and amplification mechanisms used in almost all electronics) and ferroelectric RAM. Ferroelectric RAM is a higher-performing memory technology; the material introduces non-volatility, which means it retains information when power is lost, unlike traditional DRAM, which is built with a dielectric layer.
**SEE ALSO:** [Researchers experiment with glass-based storage that doesn't require electronics cooling][1]
In the past, materials conflicts have hampered the design of commercial electronics that integrate transistors and memory. “Researchers have been trying for decades to integrate the two, but issues happen at the interface between a ferroelectric material and silicon, the semiconductor material that makes up transistors. Instead, ferroelectric RAM operates as a separate unit on-chip, limiting its potential to make computing much more efficient,” Purdue explains in a [statement][2].
A team of engineers at Purdue, led by Peide Ye, came up with a solution: “We used a semiconductor that has ferroelectric properties. This way two materials become one material, and you don't have to worry about the interface issues,” said Ye, who is a professor of electrical and computer engineering at the university.
The Purdue engineers' method revolves around a material called alpha indium selenide. It has ferroelectric properties, but it overcomes a limitation of conventional ferroelectric material, which generally acts as an insulator and doesn't allow electricity to pass through. The alpha indium selenide material can become a semiconductor, which is necessary for the transistor element, and a ferroelectric component that is stable at room temperature and operates at low voltage, which is needed for the ferroelectric RAM.
Alpha indium selenide has a smaller band gap than other materials, the university explains. A band gap is an energy range in which no electron states can exist. That shrunken band gap, found in the material natively, means the material isn't a serious insulator and isn't too thick for electrical current to pass through, yet there is still a ferroelectric layer. The smaller band gap “[makes] it possible for the material to be a semiconductor without losing ferroelectric properties,” according to Purdue.
“The result is a so-called ferroelectric semiconductor field-effect transistor, built in the same way as transistors currently used on computer chips.”
More information is available [here][2].
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3510638/researchers-aim-to-build-transistors-that-can-compute-and-store-information-in-one-component.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3488556/researchers-experiment-with-glass-based-storage-that-doesnt-require-electronics-cooling.html
[2]: https://www.purdue.edu/newsroom/releases/2019/Q4/reorganizing-a-computer-chip-transistors-can-now-both-process-and-store-information.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -1,79 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Wi-Fi 6 is slowly gathering steam)
[#]: via: (https://www.networkworld.com/article/3512153/wi-fi-6-will-slowly-gather-steam-in-2020.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Wi-Fi 6 is slowly gathering steam
======
There's a lot to look forward to in 802.11ax, aka Wi-Fi 6; just don't expect it to be a top-to-bottom revolution in 2020.
The next big wave of Wi-Fi technology, 802.11ax, is going to become more commonplace in enterprise installations over the course of the coming year, just as the marketing teams for the makers of Wi-Fi equipment would have you believe. Yet the rosiest predictions of revolutionary change in what enterprise Wi-Fi is capable of are still a bit farther off than 2020, according to industry experts.
The crux of the matter is that, while access points with 802.11ax's Wi-Fi 6 branding will steadily move into enterprise deployments in 2020, the broader Wi-Fi ecosystem will not be dominated by the new standard for several years, according to Farpoint Group principal Craig Mathias.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
“Keep in mind, we've got lots and lots of people that are still in the middle of deploying [802.11]ac,” he said, referring to the previous top-end Wi-Fi standard. The deployment of 802.11ax will tend to follow the same pattern as the deployment of 802.11ac and, indeed, most [previous new Wi-Fi standards][3]. The most common scenario will be businesses waiting for a refresh cycle, testing the new technology and then rolling it out.
In the near term, enterprises installing 802.11ax access points will see performance increases: the system's [MU-MIMO][5] antenna technology is more advanced than that present in previous versions of the Wi-Fi standard, and better suited to high-density environments with large numbers of endpoints connecting at the same time. Yet those increases will be small compared to those that will ensue once 802.11ax endpoints (that is, phones, tablets, computers and more specialized devices like [IoT][6] sensors and medical devices) hit the market.
That, unfortunately, is still some way off, and Mathias said it will take around five years for 802.11ax to become ubiquitous.
“We're not expecting a lot of [802.11]ax devices for a while,” he said.
Making sure devices are compliant with modern Wi-Fi standards will be crucial in the future, though it shouldn't be a serious issue that requires a lot of device replacement outside of fields that are using some of the aforementioned specialized endpoints, like medicine. Healthcare, heavy industry and the utility sector all have much longer-than-average expected device lifespans, which means that some may still be on 802.11ac.
That's bad, both in terms of security and throughput, but according to Shrihari Pandit, CEO of Stealth Communications, a fiber ISP based in New York, 802.11ax access points could still prove an advantage in those settings thanks to the technology that underpins them.
“Wi-Fi 6 devices have eight radios inside them,” he said. “MIMO and beamforming will still mean a performance upgrade, since they'll handle multiple connections more smoothly.”
A critical point is that some connected devices on even older 802.11 versions (n, g, and even b in some cases) won't be able to benefit from the numerous technological upsides of the new standard. Making sure that a given network is completely cross-compatible will be a central issue for IT staff looking to realize performance gains on networks that service legacy gear.
Pandit said that, increasingly, data-hungry customers like tech companies are looking to 802.11ax to act as a wireline replacement for those settings. “Lots of the tech companies we service here, some of them want Wi-Fi 6 so that they can use gigabit performance without having to run wires,” he said.
Whether it goes by Wi-Fi 6 or 802.11ax, the next generation of Wi-Fi technology is likely to be marketed a bit differently than new Wi-Fi standards have been in the past, according to Mathias. It's less about the mere fact that there's a new Wi-Fi standard providing faster connectivity, and more about enabling new functionality that 802.11ax makes possible, including better handling of IoT devices and integration with AI/machine learning systems.
Luckily, prices for top-end Wi-Fi equipment shouldn't change much compared to the current top of the line, making it easy for almost any organization to budget for the switch.
“We're not expecting anyone to pay a premium for [802.11ax],” said Mathias.
**Now see ["How to determine if Wi-Fi 6 is right for you"][2]**
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3512153/wi-fi-6-will-slowly-gather-steam-in-2020.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html
[3]: https://www.networkworld.com/article/3238664/80211-wi-fi-standards-and-speeds-explained.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/3256905/13-things-you-need-to-know-about-mu-mimo-wi-fi.html
[6]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Deliver Affordable and Optimized Application Access Worldwide with SASE)
[#]: via: (https://www.networkworld.com/article/3512640/how-to-deliver-affordable-and-optimized-application-access-worldwide-with-sase.html)
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)
How to Deliver Affordable and Optimized Application Access Worldwide with SASE
======
Gartner tells you to use your MPLS renewal budget to transition into SASE, but not every SASE can replace MPLS. Here's what to look for.
Global expansion is a common goal for many enterprises. In some verticals, like manufacturing, running production lines globally is an established practice. However, deploying international sales, service, and engineering teams is becoming the norm for many other sectors including high tech, finance, retail, and more.
A global enterprise footprint creates a unique set of challenges that do not occur in regional businesses. Users in a remote office will need to securely access data-center applications, cloud applications, or both. Depending on the distance between the remote location and the application—and the sensitivity of the application to high latency, packet loss, and jitter—an expensive set of technologies and capabilities will be needed to optimize the user experience.
[SD-WAN][1] focuses on affordable, high-performance site connectivity. Alone it cannot solve the broader networking and security challenges faced by global enterprises, which is why Gartner and other analysts are already recognizing the need to look beyond SD-WAN for a new class of enterprise solutions. Gartner has coined the term [secure access service edge (SASE, pronounced “sassy”)][2] for solutions that converge SD-WAN capabilities with enterprise security into a global, cloud-native platform. Let's take a deeper look.
#### **The Application Access Optimization Challenge**
Across the enterprise, IT finds itself facing various challenges delivering network access to users and data everywhere. While those challenges will vary, their impact point remains the same—the user experience and IT budget.
For data-center access, organizations traditionally relied on global MPLS providers. The predictability of MPLS ensured the consistent latency and low packet loss and jitter needed to support critical applications like voice and ERP. The challenge with global MPLS was the cost per megabit, which required organizations to spend heavily on limited bandwidth, creating a capacity constraint. The introduction of SD-WAN appliances and Internet-based connectivity does little to address the global connectivity challenge, because SD-WAN appliances can't control the packet routing once the packet is placed on the Internet leg of the SD-WAN.
Another option to address global connectivity challenges was to shorten the distance between users and applications. Enterprises built regional data centers or hubs to get applications closer to end users. This is a very costly and complex endeavor that is most suitable for very large organizations with distributed IT staff who can optimize application performance and availability.
#### **Global Cloud Access**
The migration to cloud applications and cloud data centers created a new challenge for remote users. While MPLS was optimized for access to the organization's on-premises data center, cloud data centers often reside in different geographic locations. Special connectivity solutions, such as [AWS Direct Connect and Azure ExpressRoute][3], are used to optimally connect physical enterprise locations to the cloud data centers. And while SD-WAN appliances claim cloud optimization, they require deploying a second appliance into the cloud — no easy task.
Regardless of application location, none of the network solutions discussed are extensible to home offices and mobile users, where deploying edge appliances for SD-WAN or WAN optimization is not possible. This creates an application access challenge because the users must use the public internet to access the edge of the data center hosting their application. This access is subject to the unpredictable quality of the network from the user's location to the destination.
#### **SASE Delivers Optimized and Secured Application Access Anywhere**
Global expansion, the migration from on-premises to cloud data centers, and the emergence of the mobile and telecommuting workforce are straining legacy network architectures. The network “patches” created to address this challenge, such as edge-SD-WAN, hybrid MPLS, Internet transports, and premium cloud connectivity, are costly and incomplete.
To address this architectural challenge, a new architecture that connects and optimizes all edges—physical, virtual, cloud, mobile—anywhere in the world, must be created. That's the story of [SASE][2]. SASE services converge networking and security into an identity-aware, cloud-native software stack. It's the convergence that is key. Without the necessary network optimizations and capabilities, the SASE platform will not be able to meet performance expectations everywhere.
#### **Cloud-Native: Built for and Delivered from the Cloud**
A core characteristic of SASE is a cloud-native, as-a-service model. A cloud-native architecture leverages key cloud capabilities, including elasticity, adaptability, self-healing, and self-maintenance.
SASE calls for the creation of a network of cloud points of presence (PoPs), which comprise the SASE Cloud. The PoPs run the provider software that delivers a wide range of networking and network security capabilities as a service. The PoPs should seamlessly scale to adapt to changes in traffic load via the addition of compute nodes. The PoPs' software can be upgraded to deliver new features or bug fixes seamlessly and without IT involvement. The cloud architecture must include self-healing capabilities to automatically move processing away from failing compute nodes and PoPs and into healthy ones.
These capabilities can't be achieved by spinning up virtual appliances in the cloud. Appliances are designed to serve a single customer (single tenant) and lack the overall cloud orchestration layer to ensure elasticity and self-healing.
#### **Globally Distributed: Available Near All Edges**
SASE Cloud is implemented as a globally distributed cloud platform. The SASE Cloud design guarantees that wherever your edges are, the full range of networking and security capabilities will be available to support them. SASE providers will have to strategically deploy PoPs to support business locations, cloud applications, and mobile users. As Gartner notes, SASE PoPs must extend beyond public cloud providers' footprints (like AWS and Azure) to deliver a low-latency service to enterprise edges.
Building a global cloud platform requires providers to hone their ability to rapidly deploy PoPs into cloud and physical data centers, ensure high capacity and redundant connectivity to support both WAN and cloud access, and apply security and optimization end-to-end across all edges.
#### **Thin Edge: DC, Branch, Cloud, User**
By placing processing and business logic in the cloud, SASE has minimal requirements for connecting various edges. This is a key challenge for SD-WAN edges, especially in the context of NFV and uCPE. Running SD-WAN and network security side by side on the same appliance increases the likelihood of an overload, forcing the need to over-spec the underlying appliance. This isn't a theoretical issue: An increase in branch throughput or rise in encrypted traffic volume can force an out-of-budget expansion. A Thin Edge approach has the following benefits:
* **Low cost:** By minimizing edge processing, low-cost appliances can achieve high throughput as most resource-intensive processing, such as deep packet inspection, is done using cloud resources that can scale better.
* **Low maintenance:** By keeping edge functionality limited, it is possible to run a slower upgrade cycle at the edges, where upgrades carry a higher potential for disruption than introducing new capabilities in the cloud.
* **Low impact:** Cloud integration is achieved with no edge appliances at all (agentless), while security and global network optimization remains intact. Mobile devices and new kinds of IoT devices no longer need significant processing resources to participate in the corporate network. They can automatically connect to the nearest SASE PoP with minimal battery impact.
#### **End-to-End Optimization**
Combining intelligent routing at the WAN edge with a software-defined global private backbone enables end-to-end traffic optimization. Last-mile optimizations focus on addressing last-mile issues, such as packet loss, by dynamically routing traffic over multiple ISPs. Middle-mile optimizations focus on optimizing routing globally and over multiple carriers comprising a diverse underlay. The middle-mile optimization extends to all edges—physical, virtual, and mobile—which is a unique benefit to a cloud-based, rather than an edge appliance-based, architecture.
In short, SASE implements a new architecture that is built to support the modern global enterprise and address the various resources, requirements, and use cases in a holistic platform. Yes, SASE provides a fresh way to secure the network, but SASE also needs the “networking capabilities” of the network if companies are to deliver users everywhere an optimum user experience.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3512640/how-to-deliver-affordable-and-optimized-application-access-worldwide-with-sase.html
作者:[Cato Networks][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.catonetworks.com/sd-wan?utm_source=idg
[2]: https://www.catonetworks.com/sase?utm_source=idg
[3]: https://www.catonetworks.com/cato-cloud#cloud-datacenter

View File

@ -1,65 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What communities of practice can do for your organization)
[#]: via: (https://opensource.com/open-organization/20/1/why-build-community-of-practice)
[#]: author: (Tracy Buckner https://opensource.com/users/tracyb)
What communities of practice can do for your organization
======
In open organizations, fostering passionate communities can increase
collaboration, accelerate problem solving, and lead to greater
innovation.
![Lots of people in a crowd.][1]
As I discussed in the [first part of this series][2], community is a fundamental principle in open organizations. In open organizations, people often define their roles, responsibilities, and affiliations through shared interests and passions—[not title, role, or position on an organizational chart][3].
So fostering and supporting communities of practice (CoPs) can be a great strategy for building a more open organization. Members of communities of practice interact regularly, discuss various topics, and solve problems collectively and collaboratively. These communities provide members with an opportunity to share their expertise, learn from others, and network with one another.
As Luis Gonçalves states in [Learning Organisation][4]:
> CoPs provide a shared context where members of the organisation can communicate and share information; stimulate learning through peer-to-peer mentoring, coaching, self-reflection, and collaboration; generate new knowledge; and initiate projects that develop tangible results.
But that isn't the only value such communities can offer. Communities of practice can also enhance an organization in the following ways (summarized in Figure 1).
**Decreased learning curves**. Many organizations face the challenge of quickly ramping up the productivity of new employees (or members). This task becomes especially challenging in organizations where an employee's manager may be located in a different state, country, or region. Communities of practice offer new employees a network of connections they can tap into more quickly and easily. By joining a CoP, they can immediately access a network of experts and share resources, ask questions, and seek guidance outside the formal lines of the organizational chart.
Members of communities of practice interact regularly, discuss various topics, and solve problems collectively and collaboratively.
**Increased collaboration.** A recent survey from MyCustomer.com [shows][5] that 40 percent of company employees report not feeling adequately supported by their colleagues—because "different departments have their own agendas." A lack of collaboration between departments limits innovation and increases opportunities for miscommunication. Communities of practice encourage members from _all_ roles across _all_ departments to unite in sharing their expertise. This increases collaboration and reduces the threat of organizational silos.
**Rapid problem-solving.** Communities of practice provide a centralized location for communication and resources useful for solving organizational or business problems. Enabling people to come together—regardless of their organizational reporting structure, location, and/or management structure—encourages problem-solving and can lead to faster resolution of those problems.
**Enhanced innovation.** Researchers Pouwels and Koster [recently argued][6] that “collaboration contributes to innovation." CoPs provide a unique opportunity for members to collaborate on topics within their shared domains of interest and passion. This passion ignites a desire to discover new and innovative ways to solve problems and create new ideas.
![Benefits of Communities of Practice][7]
_Figure 1: Benefits of communities of practice. Courtesy of Tracy Buckner. CC BY-SA._
[Étienne Wenger][8], an educational theorist and proponent of communities of practice, said that learning doesn't only occur through a master; it also happens among apprentices. Communities of practice foster learning by connecting apprentices, while encouraging collaboration and offering an opportunity for creative problem-solving and innovation.
In the final article of this series, I'll explain how you can reap these benefits by creating your own community of practice.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/1/why-build-community-of-practice
作者:[Tracy Buckner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tracyb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community2.png?itok=1blC7-NY (Lots of people in a crowd.)
[2]: https://opensource.com/open-organization/19/11/what-is-community-practice
[3]: https://opensource.com/open-organization/resources/open-org-definition
[4]: https://www.organisationalmastery.com/category/learning-organisation/
[5]: https://www.mycustomer.com/experience/engagement/the-stats-that-prove-silos-are-your-biggest-cx-challenge
[6]: https://www.researchgate.net/profile/Ferry_Koster/publication/313659568_Inter-organizational_cooperation_and_organizational_innovativeness_A_comparative_study/links/59e64d510f7e9b13aca3c224/Inter-organizational-cooperation-and-organizational-innovativeness-A-comparative-study.pdf
[7]: https://opensource.com/sites/default/files/images/open-org/cop_benefits.png (Benefits of Communities of Practice)
[8]: https://en.wikipedia.org/wiki/%C3%89tienne_Wenger

View File

@ -1,68 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Instant, secure teleportation of data in the works)
[#]: via: (https://www.networkworld.com/article/3512037/instant-secure-teleportation-of-data-in-the-works.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Instant, secure teleportation of data in the works
======
Quantum teleportation, where information is sent instantaneously, will secure the Internet, researchers say. Scientists are making progress.
Sending information instantly between two computer chips using quantum teleportation has been accomplished reliably for the first time, according to scientists from the University of Bristol, in collaboration with the Technical University of Denmark (DTU). Data was exchanged without any electrical or physical connection, a transmission method that may influence the next generation of ultra-secure data networks.
Teleportation involves the moving of information instantaneously and securely. In the “Star Trek” series, fictional people move immediately from one place to another via teleportation. In the University of Bristol experiment, data is passed instantly via a single quantum state across two chips using light particles, or photons. Importantly, each of the two chips knows the characteristics of the other, because they're entangled through quantum physics, meaning they therefore share a single physics-based state.
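To make the protocol concrete, here is a toy NumPy simulation of textbook one-qubit quantum teleportation (our illustrative sketch, not code from the Bristol experiment; the random test state is just a stand-in for the data being moved): a sender holds an arbitrary qubit, shares an entangled pair with a receiver, performs a Bell measurement, and sends two classical bits so the receiver can recover the state.

```python
# Toy simulation of one-qubit quantum teleportation with plain NumPy.
# Qubit order: q0 (sender's data), q1 (sender's half of the entangled
# pair), q2 (receiver's half); q0 is the most significant index bit.
import numpy as np

rng = np.random.default_rng()

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Random data qubit |psi> = a|0> + b|1>, plus a shared Bell pair on q1,q2.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
state = np.kron(psi, bell)

# Bell measurement on q0,q1: CNOT(q0 -> q1), then Hadamard on q0.
P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
state = kron3(H, I, I) @ ((kron3(P0, I, I) + kron3(P1, X, I)) @ state)

# Sample the two classical measurement bits and collapse the state.
idx = rng.choice(8, p=np.abs(state) ** 2)
m0, m1 = (idx >> 2) & 1, (idx >> 1) & 1
keep = np.array([(i >> 2) & 1 == m0 and (i >> 1) & 1 == m1 for i in range(8)])
state = np.where(keep, state, 0)
state /= np.linalg.norm(state)

# Receiver's correction, chosen by the two classical bits: X if m1, then Z if m0.
if m1: state = kron3(I, I, X) @ state
if m0: state = kron3(I, I, Z) @ state

# The receiver's qubit now equals |psi> (up to global phase).
bob = state[[4 * m0 + 2 * m1, 4 * m0 + 2 * m1 + 1]]
print(abs(np.vdot(psi, bob)))  # ~1.0: the state was teleported
```

Note that the receiver can do nothing useful until the two classical bits arrive over an ordinary channel; the shared entangled state is what guarantees the transferred qubit is exact, and it is also what makes interference detectable.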
The researchers involved in these successful silicon tests said they built the photon-based silicon chips in a lab and then used them to encode the quantum information in single particles. It was “a high-quality entanglement link across two chips, where photons on either chip share a single quantum state,” said Dan Llewellyn of University of Bristol in a [press release][1].
“These chips are able to encode quantum information in light generated inside the circuits and can process the quantum information,” the school stated. It claims a quantum teleportation success rate of 91%, which is considered high quality.
### Entanglement boosts data transmission
In an entanglement link used for data transmission, information is conjoined, or entangled, so that the start of the link has the same state as the end of the link. The particles, and thus the data, are at the beginning of the link and at the end of the link at the same time.
The physics principle holds promise for data transmissions, in part because intrusion is easily seen; interference by a bad actor becomes obvious if the beginning state of the link and the end state are no longer the same. In other words, any change in one element means a change in the other, and that can be over distance, too. Additionally, the technique allows leaks to be stopped: Instant key destruction can occur at the actual moment of attempted interference.
“Particles can be in two places at the same time, and they can even be entangled with twin particles, so that they can feel everything that happens to each other,” explained Jonas Schou Neergaard-Nielsen, a senior researcher at DTU, [in a 2015 story][3] about the university's earlier teleportation exploration. “At the sub-microscopic level, where quantum mechanics rule, you find a completely different logic to what we are used to in our macroscopic reality,” Schou Neergaard-Nielsen said back in 2015.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][4]
### Quantum chips gain momentum
In the bigger picture, the area of quantum-based microprocessors is gaining momentum. It is thought, for example, that quantum-embedded chips could ultimately secure the internet of things. IoT security vendor Crypto Quantique has said that quantum chips could be made unclonable. Its solution uses a quantum method of [creating totally random keys][5] from the measurement of low currents on the silicon. It's related to how electrons can leak through transistor gates. “Unforgeable hardware trust anchors [are] generated by the device,” Crypto Quantique explains on its website. “Our technology offers true randomness.”
A secure quantum computing environment overall could have “profound impacts on modern society,” the University of Bristol said. And with the introduction of entangled physics states across networks, a highly secured “quantum internet could ultimately protect the world's information from malicious attacks.”
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3512037/instant-secure-teleportation-of-data-in-the-works.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.bristol.ac.uk/news/2019/december/quantum-teleportation.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.dtu.dk/english/news/2015/01/dtu-physics-researchers-developing-tomorrows-teleportation?id=ece937e9-1402-437a-8f57-cc3124563bc8
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3333808/quantum-embedded-chips-could-secure-iot.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -1,160 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 core skills to level-up your tech career in 2020)
[#]: via: (https://opensource.com/article/20/1/core-skills-tech-career)
[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych)
4 core skills to level-up your tech career in 2020
======
There is one category of skill you should focus on to advance your
career this year. Here's how.
![Two hands holding a resume with computer, clock, and desk chair ][1]
We do a lot to level-up our careers. We learn new programming languages; we take on new projects at work; we work on side projects on the weekend; we contribute to open source communities. What if I were to tell you that, while these activities are helpful, there is one set of skills you should focus on if you truly want to advance your career?
These skills go by many names, such as soft skills, non-technical skills, leadership skills, and people skills. Calling them soft or non-technical skills leads to the mistaken idea that they are easy. For some, these skills are incredibly difficult. Calling them leadership skills makes people think they don't need to put effort into developing them if they don't want to be a leader.
I prefer to call them core or functional skills. These skills are part of the core of what makes us human, and it is hard to function without them.
When we have these skills, we can:
* Communicate with others
* Collaborate and work with others
* Solve problems
* Understand and share the feelings of others
* Use time effectively or productively
These skills are part of what makes us human, and everything in software is about humans. Humans are reviewing your code. Humans are installing or using the application you are writing.
If these skills are so essential, why do we often dismiss them in favor of more "technical" skills and give them a bad rap? This isn't about focusing on one versus the other. Both are important and relevant.
### Mindset and biases
Our biases and mindset contribute to the importance we place on functional skills. Bias isn't a bad thing. There are [175 biases][2] that exist to help our brains combat four problems:
* We have too much information to process.
* There's not enough meaning in the information we receive.
* There's not enough time to process information.
* We can't remember everything. Our memory is finite.
One bias that sneaks in when it comes to assessing our skills is the Dunning-Kruger effect. We mistakenly assess our abilities as more exceptional than they are, which comes from an inability to recognize our own faults. You may think you know what you are doing when you don't because of the Dunning-Kruger effect. You think you are managing your time effectively, but in reality, you are working nights and weekends to get caught up. Yes, the work is getting completed, but at what cost? Working non-stop can lead to burnout or a decline in health.
Another factor is your [mindset][3]. Do you have a growth mindset or a fixed mindset when it comes to learning a new skill? People with a fixed mindset believe that abilities are innate and can't be improved. Those with a growth mindset believe there is room for improvement, and you can improve any skill with practice.
These statements reflect a fixed mindset:
* "I'm just not a people person."
* "They're a natural-born leader."
You're not born with the ability to speak or write. You learn these skills over time. The same goes for the core and functional skills important for developers.
### Communication
> "Over time I've learned the biggest source of failure is often due to people and teams. A lack of communication and coordination can cause serious problems."
> — Laurie Barth, [_How architecture improved my coding skills_][4]
Communication includes verbal, non-verbal, and written communication. Failures along any of these can cause serious problems. Communication failures in tech look like:
* Delays in resolving incidents
* Redesigning a feature
* Scope creep
* Lack of meaningful comments on pull requests (PRs)—these can be especially impactful to junior engineers
* Unnecessary conflicts and lack of alignment
* Delivering above or below expectations
Cultural norms and personal preferences impact communication. Everybody has a preferred method of communicating, whether that is via email, face-to-face, Slack messages, phone, text, etc. Think about the cultural norms regarding communication in your workplace. Do they match your preferred method of communicating? If there is a mismatch, problems may arise.
If you're a manager, do you know your direct reports' preferred communication style, or do you just utilize your preferred method? If a direct report prefers email so they can have some time to think before responding and you prefer face-to-face communication, a quick Slack message asking them to come to your office can cause unnecessary anxiety.
### Empathy
> "Empathy is much harder than we think… But to build empathy we need to slow down."
> —Andrew Tenzer and Ian Murray, [_The empathy delusion_][5]
Empathy is one of the most important skills you can learn. Empathy is our ability to understand and share the feelings of others. We feel more connected to one another because of empathy.
There are four attributes of empathy:
* Seeing another person's perspective
* Being non-judgmental
* Recognizing emotions in other people
* Communicating your understanding of another person's feelings
Communication and empathy are closely tied. To communicate effectively, you need to listen. To listen effectively, you need empathy. Instead of listening to what a colleague is saying, we usually think about how to best respond. As the adage goes, you have two ears and one mouth for a reason; you should be listening twice as often as you speak.
When you show empathy towards another person, you are not minimizing their thoughts, feelings, or experiences. Instead of using comments like "it could be worse" or "at least…" use phrases like "that sounds horrible," "I can't imagine how you must feel," or "I'm here for you, how can I help?"
### Get ready to level-up
Improving on a skill takes time, but we often give up while we are learning once [impostor syndrome][6] kicks in. It takes years of practice to perfect something, and even with years of practice, you won't be perfect. You will make mistakes. When learning a new skill or improving on an existing skill, mastery—not perfection—should be the goal. Perfection implies there is no room for improvement; mastery indicates learning a skill is an ongoing journey, and there is room to improve.
You don't want to be a perfect communicator. You want to be a master communicator. Even the most experienced public speakers make mistakes. If you aim for perfection, you will be disappointed.
To improve a skill, you need to schedule time to practice. But not just any type of practice; to master a skill, use deliberate practice. Deliberate practice has five main components:
1. Create a specific goal.
2. Break the goal down into small steps.
3. Get feedback from a master.
4. Step out of your comfort zone.
5. Stay motivated.
To illustrate this process, say you have a goal of presenting at a conference, but you aren't a confident public speaker. What are some small steps you can take to practice?
* Identify and reduce the use of filler words like "um," "ah," or "you know."
* Make eye contact when speaking with others or let people know why you don't.
* Control mannerisms like fidgeting with your hands or nodding your head while talking.
* Incorporate appropriate pauses.
Feedback is a necessary part of practicing. Seek out others to provide you with feedback. A great way to step out of your comfort zone and get feedback from master public speakers is to sign up for Toastmasters. This can also help you stay motivated as you work through the speeches towards becoming a Distinguished Toastmaster.
You can apply this same process to any skill you are looking to master.
### Where to learn more
If you want to learn more about empathy and communication so you can level up your career, check out these resources:
* [Communicating with empathy][7] course by Sharon Steed
* [_Deliberate practice: Your pathway to growth and mastery_][8] by Habits at Work
* [_Want to be more empathetic? Avoid these 7 responses_][9] by Laura Click
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/core-skills-tech-career
作者:[Dawn Parzych][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dawnparzych
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair )
[2]: https://medium.com/better-humans/cognitive-bias-cheat-sheet-55a472476b18
[3]: https://www.penguinrandomhouse.com/books/44330/mindset-by-carol-s-dweck-phd/
[4]: https://dev.to/laurieontech/how-architecture-improved-my-coding-skills-21e
[5]: https://www.reachsolutions.co.uk/sites/default/files/2019-07/Reach%20Solutions%20The%20Empathy%20Delusion%20V2.pdf
[6]: https://en.wikipedia.org/wiki/Impostor_syndrome
[7]: https://www.lynda.com/Business-Skills-tutorials/Communicating-Empathy/534584-2.html
[8]: https://habitsatwork.com/blog/deliberate-practice
[9]: https://medium.com/@lauraclick/want-to-be-more-empathetic-avoid-these-7-responses-21bb52d5d2ad

View File

@ -1,73 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Google Cloud launches Archive cold storage service)
[#]: via: (https://www.networkworld.com/article/3513903/google-cloud-launches-archive-cold-storage-service.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Google Cloud launches Archive cold storage service
======
Archive will focus on long-term data retention and compete against AWS Glacier, Microsoft Cool Blob Storage, and IBM Cloud Storage.
Google Cloud announced the general availability of Archive, a long-term data retention service intended as an alternative to on-premises tape backup.
Google pitches it as cold storage, meaning it is for data that is accessed less than once a year and has been stored for many years. Cold storage data is usually consigned to tape backup, which remains a [surprisingly successful][1] market despite repeated predictions of its demise.
Of course, Google's competition has their own products. Amazon Web Services has Glacier, Microsoft has Cool Blob Storage, and IBM has Cloud Storage. Google also offers its own Coldline and Nearline cloud storage offerings; Coldline is designed for data a business expects to touch less than once a quarter, while Nearline is aimed at data that requires access less than once a month.
With Archive, Google highlights a few differentiators from the competition and its own archival offerings. First, Google promises no delay on data retrieval, claiming millisecond latency; AWS can take minutes or hours. And while Archive costs a little more than AWS and Azure ($1.23 per terabyte per month vs. $1 per terabyte per month for AWS and Azure), that's due in part to the longer remit for an early deletion charge: Google's window is 365 days, compared with 180 days for AWS and Azure.
"Having flexible storage options allows you to optimize your total cost of ownership while meeting your business needs," [wrote][3] Geoffrey Noer, Google Cloud storage product manager in a blog post announcing the services availability. "At Google Cloud, we think that you should have a range of straightforward storage options that allow you to more securely and reliably access your data when and where you need it, without performance bottlenecks or delays to your users."
Archive is a store-and-forget service, where you keep stuff only because you have to. Tape replacement and archiving data under regulatory retention requirements are two of the most common use cases, according to Google. Other examples include long-term backups and original master copies of videos and images.
The Archive class can also be combined with [Bucket Lock][4], Google Cloud's data-locking mechanism to prevent data from being modified, which is available to enterprises for meeting various data retention laws, according to Noer.
* [Backup vs. archive: Why its important to know the difference][5]
* [How to pick an off-site data-backup method][6]
* [Tape vs. disk storage: Why isnt tape dead yet?][7]
* [The correct levels of backup save time, bandwidth, space][8]
The Archive class can be set up in dual-regions or multi-regions for geo-redundancy and offers checksum-verified durability of "11 nines" (99.999999999 percent).
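To make the storage class and Bucket Lock concrete, here is a minimal sketch using the google-cloud-storage Python client (our illustration, not Google's sample code; the project ID, bucket name, and one-year retention period are hypothetical placeholders):

```python
# Minimal sketch: create an Archive-class bucket, then set and lock a
# retention policy (Bucket Lock). Names and the retention period are
# hypothetical examples, not values from the article.
from google.cloud import storage

client = storage.Client(project="my-project")           # hypothetical project ID

bucket = client.bucket("my-cold-archive")                # hypothetical bucket name
bucket.storage_class = "ARCHIVE"                         # the new Archive class
bucket = client.create_bucket(bucket, location="us")     # multi-region for geo-redundancy

# Bucket Lock: objects cannot be deleted or overwritten until they are
# older than the retention period; locking the policy is permanent.
bucket.retention_period = 365 * 24 * 60 * 60             # one year, in seconds
bucket.patch()
bucket.lock_retention_policy()
```

Once locked, the retention policy can no longer be shortened or removed, which is what makes the combination suitable for the regulatory retention use cases Google describes.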
More information can be found [here][9].
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3513903/google-cloud-launches-archive-cold-storage-service.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3263452/theres-still-a-lot-of-life-left-in-tape-backup.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://cloud.google.com/blog/products/storage-data-transfer/archive-storage-class-for-coldest-data-now-available
[4]: https://cloud.google.com/storage/docs/bucket-lock
[5]: https://www.networkworld.com/article/3285652/storage/backup-vs-archive-why-its-important-to-know-the-difference.html
[6]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[7]: https://www.networkworld.com/article/3315156/storage/tape-vs-disk-storage-why-isnt-tape-dead-yet.html
[8]: https://www.networkworld.com/article/3302804/storage/the-correct-levels-of-backup-save-time-bandwidth-space.html
[9]: https://cloud.google.com/blog/products/storage-data-transfer/whats-cooler-than-being-cool-ice-cold-archive-storage
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -1,174 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Insights into Why Hyperbola GNU/Linux is Turning into Hyperbola BSD)
[#]: via: (https://itsfoss.com/hyperbola-linux-bsd/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Insights into Why Hyperbola GNU/Linux is Turning into Hyperbola BSD
======
In late December 2019, [Hyperbola][1] [announced][2] that they would be making major changes to their project. They have decided to drop the Linux kernel in favor of forking the OpenBSD kernel. This announcement came only months after Project Trident [announced][3] that they were going in the opposite direction (from BSD to Linux).
Hyperbola also plans to replace all software that is not GPL v3 compliant with new versions that are.
To get more insight into the future of their new project, I interviewed **Andre, co-founder of Hyperbola**.
### Why Hyperbola GNU/Linux Turned into Hyperbola BSD
![][4]
_**It's FOSS: In your announcement, you state that the Linux kernel is “rapidly proceeding down an unstable path”. Could you explain what you mean by that?**_
**Andre:** First of all, it includes the adoption of DRM features such as [HDCP][5] (High-bandwidth Digital Content Protection). Currently there is an option to disable it at build time, but there isn't a policy that guarantees us that it will be optional forever.
Historically, some features began as optional ones until they reached total functionality. Then they became forced and difficult to patch out. Even if this does not happen in the case of HDCP, we remain cautious about such implementations.
Another of the reasons is that the _**Linux kernel is no longer getting proper hardening**_. [Grsecurity][6] stopped offering public patches several years ago, and we depended on them for our system's security. Although we could still obtain their patches through a very expensive subscription, the subscription would be terminated if we chose to make those patches public.
Such restrictions go against the FSDG principles, which require us to provide full source code, deblobbed and unrestricted, to our users.
KSPP is a project that was intended to upstream Grsec into the kernel, but thus far it has not come close to reaching the Grsec/PaX level of kernel hardening. There have also not been many recent developments, which leads us to believe it is now mostly an inactive project.
Lastly, the interest in [allowing Rust modules][7] into the kernel is a problem for us, due to Rust trademark restrictions which prevent us from applying patches in our distribution without express permission. We patch to remove non-free software and unlicensed files, and to enhance user privacy wherever applicable. We also expect our users to be able to re-use our code without any additional restrictions or permission required.
This is also in part why we use UXP, a fully free browser engine and application toolkit without Rust, for our mail and browser applications.
Due to these restrictions, and the concern that Rust may at some point become a forced build-time dependency for the kernel, we needed another option.
_**It's FOSS: You also said in the announcement that you would be forking the OpenBSD kernel. Why did you pick the OpenBSD kernel over the FreeBSD, DragonflyBSD, or MidnightBSD kernels?**_
**Andre:** [OpenBSD][8] was chosen as our base for hard-forking because it's a system that has always had quality code and security in mind.
Some of their ideas that greatly interested us were new system calls, including pledge and unveil, which add additional hardening to userspace, and the removal of the systrace system policy-enforcement tool.
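As a rough illustration of what those two calls do (an editorial aside, not part of the interview): pledge(2) restricts a running process to a set of promised capabilities, and unveil(2) hides the filesystem except for the paths you expose. A Python sketch via ctypes, for OpenBSD only, with error handling omitted:

```python
# Rough sketch (OpenBSD only): invoke pledge(2) and unveil(2) through
# ctypes. The promise strings are examples; real code must check the
# return value of every call.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

libc.unveil(b"/var/www", b"r")     # expose only /var/www, read-only
libc.unveil(None, None)            # lock out any further unveil() calls
libc.pledge(b"stdio rpath", None)  # allow only stdio and file reads from now on
```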
They also are known for [Xenocara][9] and [LibreSSL][10], both of which we had already been using after porting them to [GNU/Linux-libre][11]. We found them to be well written and generally more stable than Xorg/OpenSSL respectively.
None of the other BSD implementations we are aware of have that level of security. We also were aware [LibertyBSD][12] has been working on liberating the OpenBSD kernel, which allowed us to use their patches to begin the initial development.
_**It's FOSS: Why fork the kernel in the first place? How will you keep the new kernel up-to-date with newer hardware support?**_
**Andre:** The kernel is one of the most important parts of any operating system, and we felt it is critical to start on a firm foundation moving forward.
For the first version we plan to keep in synchronization with OpenBSD where it is possible. In future versions we may adapt code from other BSDs and even the Linux kernel where needed to keep up with hardware support and features.
We are working in coordination with [Libreware Group][13] (our representative for business activities) and have plans to open our foundation soon.
This will help to sustain development, hire future developers and encourage new enthusiasts for newer hardware support and code. We know that deblobbing isn't enough because it's a mitigation, not a solution for us. So, for that reason, we need to improve our structure and go to the next stage of development for our projects.
_**It's FOSS: You state that you plan to replace the parts of the OpenBSD kernel and userspace that are not GPL-compatible or are non-free with those that are. What percentage of the code falls into the non-GPL zone?**_
**Andre:** It's around 20% in the OpenBSD kernel and userspace.
Mostly, the non-GPL-compatible licensed parts are under the Original BSD license, sometimes called the “4-clause BSD license”, which contains a serious flaw: the “obnoxious BSD advertising clause”. It isn't fatal, but it does cause practical problems for us because it generates incompatibility with our code and future development under GPLv3 and LGPLv3.
The non-free files in OpenBSD include files without an appropriate license header, or without a license in the folder containing a particular component.
If those files don't contain a license giving users the four essential freedoms, and the code has not been explicitly placed in the public domain, it isn't free software. Some developers think that code without a license is automatically in the public domain. That isn't true under today's copyright law; rather, all copyrightable works are copyrighted by default.
The non-free firmware blobs in OpenBSD include various hardware firmwares. These firmware blobs occur in the Linux kernel as well and have been manually removed by the Linux-libre project for years, following each new kernel release.
They are typically in the form of a hex encoded binary and are provided to kernel developers without source in order to provide support for proprietary-designed hardware. These blobs may contain vulnerabilities or backdoors in addition to violating your freedom, but no one would know since the source code is not available for them. They must be removed to respect user freedom.
_**It's FOSS: I was talking with someone about HyperbolaBSD and they mentioned [HardenedBSD][14]. Have you considered HardenedBSD?**_
**Andre:** We had looked into HardenedBSD, but it was forked from FreeBSD. FreeBSD has a much larger codebase. While HardenedBSD is likely a good project, it would require much more effort for us to deblob and verify licenses of all files.
We decided to use OpenBSD as a base to fork from instead of FreeBSD due to their past commitment to code quality, security, and minimalism.
_**It's FOSS: You mentioned UXP (or [Unified XUL Platform][15]). It appears that you are using [Moonchild's fork of the pre-Servo Mozilla codebase][16] to create a suite of applications for the web. Is that about the size of it?**_
**Andre:** Yes. Our decision to use UXP was for several reasons. We had already been rebranding Firefox as Iceweasel for several years to remove DRM, disable telemetry, and apply preset privacy options. However, it became harder and harder for us to maintain as Mozilla kept adding anti-features, removing user customization, and rapidly breaking our rebranding and privacy patches.
After FF52, all XUL extensions were removed in favor of WebExt, and Rust became enforced at compile time. We maintain several XUL addons to enhance user privacy and security which would no longer work in the new engine. We were also concerned that the feature-limited WebExt addons were introducing additional privacy issues. For example, each installed WebExt addon contains a UUID which can be used to uniquely and precisely identify users (see [Bugzilla 1372288][17]).
After some research, we discovered UXP and that it was regularly keeping up with security fixes without rushing to implement new features. They had already disabled telemetry in the toolkit and remain committed to deleting all of it from the codebase.
We knew this was well-aligned with our goals, but still needed to apply a few patches to tweak privacy settings and remove DRM. Hence, we started creating our own applications on top of the toolkit.
This has allowed us to go far beyond basic rebranding/deblobbing as we were doing before and create our own fully customized XUL applications. We currently maintain [Iceweasel-UXP][18], [Icedove-UXP][19] and [Iceape-UXP][20] in addition to sharing toolkit improvements back to UXP.
_**It's FOSS: In a [forum post][21], I noticed mentions of HyperRC, HyperBLibC, and hyperman. Are these forks or rewrites of current BSD tools to be GPL compliant?**_
**Andre:** They are forks of existing projects.
Hyperman is a fork of our current package manager, pacman. As pacman does not currently work on BSD, and the minimal support it had in the past was removed in recent versions, a fork was required. Hyperman already has a working implementation using LibreSSL and BSD support.
HyperRC will be a patched version of OpenRC init. HyperBLibC will be a fork from BSD LibC.
_**It's FOSS: Since the beginning of time, Linux has championed the GPL license and BSD has championed the BSD license. Now, you are working to create a BSD that is GPL licensed. How would you respond to those in the BSD community who don't agree with this move?**_
**Andre:** We are aware that there are disagreements between the GPL and BSD worlds. There are even disagreements over calling our previous distribution “GNU/Linux” rather than simply “Linux”, since the latter name ignores that the GNU userspace was created in 1984, several years before the Linux kernel was created by Linus Torvalds. It is the two combined that make a complete system.
Some of the primary differences from BSD are that the GPL requires our source code to be made public, including future versions, and that it can only be used in tandem with compatibly licensed files. BSD systems do not have to share their source code publicly, and may bundle themselves with various licenses and non-free software without restriction.
Since we are strong supporters of the Free Software Movement and wish that our future code remain in the public space always, we chose the GPL.
_**It's FOSS: I know at this point you are just starting the process, but do you have any idea when you might have a usable version of HyperbolaBSD available?**_
**Andre:** We expect to have an alpha release ready by 2021 (Q3) for initial testing.
_**It's FOSS: How long will you continue to support the current Linux version of Hyperbola? Will it be easy for current users to switch over?**_
**Andre:** As per our announcement, we will continue to support Hyperbola GNU/Linux-libre until 2022 (Q4). We expect there to be some difficulty in migration due to ABI changes, but will prepare an announcement and information on our wiki once it is ready.
_**It's FOSS: If someone is interested in helping you work on HyperbolaBSD, how can they go about doing that? What kind of expertise would you be looking for?**_
**Andre:** Anyone who is interested and able to learn is welcome. We need C programmers and users who are interested in improving security and privacy in software. Developers need to follow the FSDG principles of free software development, as well as the YAGNI principle which means we will implement new features only as we need them.
Users can fork our git repository and submit patches to us for inclusion.
_**It's FOSS: Do you have any plans to support ZFS? What filesystems will you support?**_
**Andre:** [ZFS][22] support is not currently planned, because it uses the Common Development and Distribution License, version 1.0 (CDDL). This license is incompatible with all versions of the GNU General Public License (GPL).
It would be possible to write new code under GPLv3 and release it under a new name (e.g., HyperZFS); however, there is no official decision to include ZFS compatibility code in HyperbolaBSD at this time.
We have plans on porting BTRFS, JFS2, NetBSD's CHFS, DragonFlyBSD's HAMMER/HAMMER2, and the Linux kernel's JFFS2, all of which have licenses compatible with GPLv3. Long term, we may also support Ext4, F2FS, ReiserFS, and Reiser4, but they will need to be rewritten because they are licensed exclusively under GPLv2, which does not allow use with GPLv3. All of these file systems will require development and stability testing, so they will come in later HyperbolaBSD releases and not in our initial stable version(s).
* * *
I would like to thank Andre for taking the time to answer my questions and for revealing more about the future of HyperbolaBSD.
What are your thoughts on Hyperbola switching to a BSD kernel? What do you think about a BSD being released under the GPL? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][23].
--------------------------------------------------------------------------------
via: https://itsfoss.com/hyperbola-linux-bsd/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://www.hyperbola.info/
[2]: https://www.hyperbola.info/news/announcing-hyperbolabsd-roadmap/
[3]: https://itsfoss.com/bsd-project-trident-linux/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/hyperbola_linux_bsd.png?ssl=1
[5]: https://patchwork.kernel.org/patch/10084131/
[6]: https://grsecurity.net/
[7]: https://lwn.net/Articles/797828/
[8]: https://www.openbsd.org/
[9]: https://www.xenocara.org/
[10]: https://www.libressl.org/
[11]: https://en.wikipedia.org/wiki/Linux-libre
[12]: https://libertybsd.net/
[13]: https://en.libreware.info/
[14]: https://hardenedbsd.org/
[15]: http://thereisonlyxul.org/
[16]: https://github.com/MoonchildProductions/UXP
[17]: https://bugzilla.mozilla.org/show_bug.cgi?id=1372288
[18]: https://wiki.hyperbola.info/doku.php?id=en:project:iceweasel-uxp
[19]: https://wiki.hyperbola.info/doku.php?id=en:project:icedove-uxp
[20]: https://wiki.hyperbola.info/doku.php?id=en:project:iceape-uxp
[21]: https://forums.hyperbola.info/viewtopic.php?id=315
[22]: https://itsfoss.com/what-is-zfs/
[23]: https://reddit.com/r/linuxusersgroup

View File

@ -1,97 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 things I learned from starting an open source project)
[#]: via: (https://opensource.com/article/20/1/open-source-project)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
7 things I learned from starting an open source project
======
It's not really about project mechanics at all.
![Stickers from all different open source projects and communities][1]
I'm currently involved—heavily involved—in [Enarx][2], an open source (of course!) project to support running sensitive workloads on untrusted hosts. I've had involvement in various open source projects over the years, but this is the first for which I'm one of the founders. We're at the stage now where we've got a fair amount of code, quite a lot of documentation, a logo, and (important!) stickers. The project will hopefully be included in a Linux Foundation group—the [Confidential Computing Consortium][3]—so things are going very well indeed.
I thought it might be useful to reflect on some of the things we did to get things going. To be clear, Enarx is a particular type of project, one that we believe has commercial and enterprise applications. It's also not mature yet, and we'll have hurdles and challenges along the way. What's more, the route we've taken won't be right for all projects, but hopefully, there's enough here to give a few pointers to other projects or people considering starting one up.
The first thing I'd say is that there's lots of help to be had out there. I'd start with Opensource.com, where you'll find [lots of guidance][4]. I'll follow up by saying that, however much of it you follow, you'll still get things wrong. That said, here's my list of things to consider when starting an open source project.
### 1\. Aim for critical mass
I'm very lucky to work at the amazing [Red Hat][5], where everything we do is open source and where we take open source and community very seriously. I've heard it called a "critical mass" company—in order to get something taken seriously, you need to get enough people interested in it that it's difficult to ignore. Enarx's two co-founders—Nathaniel McCallum and I—are both very enthusiastic about the project and have spent a lot of time gaining sponsors within the organisation (you know who you are, and we thank you—we also know we haven't done a good enough job with you on all occasions!) and "selling" it to engineers to get them interested enough that it was difficult to stop.
Some projects just bobble along with one or two contributors, but if you want to attract people and attention, getting a good set of people together who can get momentum going is a must.
### 2\. Create a demo
If you want to get people involved, a demo is great. It doesn't necessarily need to be polished, but it does need to show that what you're doing is possible and that you know what you're doing. For early demos, you may be talking about command-line output; that's fine if what you're providing isn't a user interface (UI) product. Being able to talk about what you're doing and convey both your passion and the importance of the project is a great boon. People like to be able to _see_ or _experience_ something, and it's much easier to communicate your enthusiasm if they have something real that expresses that.
### 3\. Choose a license
Once you have code and it's open source, you want other people to be able to contribute. This may seem like an unimportant step, but selecting an appropriate open source licence[1][6] will allow other people to contribute on well-understood and defined terms, making it easier for them to be involved—and for the organisations for which they work to allow them to do so.
### 4\. Get documentation
You might think that developer documentation is the most important to get out there—otherwise, how will other people get involved in coding? I disagree, at least to start with. For a small project, you can probably scale to a few more people just by explaining what the code does, what it should do, and what's missing. However, if there's no documentation available to explain what it's for and how it's going to help people, then why would anyone bother even looking at it?
This doesn't need to be polished marketing copy, and it doesn't need to be serious, but it does need to convey to people why they should care. It's also going to help you with the first point I mentioned, attaining critical mass, as being able to point to documentation, use cases, and the rest will help convince people that you've thought through the _point_ of your project. We've used a GitHub wiki as our main documentation hub, and we try to update it with new information as we generate it. This is an area, to be clear, where we could do better. But at least we know that.
### 5\. Be visible
People aren't going to find out about you unless you're visible. We were incredibly lucky in that the Confidential Computing Consortium was formed just as we were beginning to get to a level of critical mass, and we immediately had a platform to increase our exposure. We have a [Twitter account][7]; I publish articles on [my blog][8] and at Opensource.com; we've been lucky enough to have the chance to publish on Red Hat's [now + Next][9] blog; I've done interviews with the press; and we speak at conferences wherever and whenever we can.
We're very lucky to have these opportunities, and it's clear that not all these approaches are appropriate for all projects, but make use of what you can: the more that people know about you, the more people can contribute.
### 6\. Be welcoming
Let's assume that people have found out about you: what's next? Well, they're hopefully going to want to get involved. If they don't feel welcome, then any involvement they have will taper off quickly. Yes, you need documentation (and, after a while, technical documentation, no matter what I said above), but you also need ways for contributors to talk to you and for them to feel that they are valued. We have [Gitter channels][10], and our daily standups are open to anyone who wants to join. Recently, someone opened an issue on our [issues database][11], and during the conversation on that thread, it transpired that our daily standup time doesn't work for them (given their time zone), so we're going to ensure that at least once a week it _does_, and we've assured them that we'll accommodate them.
### 7\. Work with people you like
I really, really enjoy meeting and working with the members of the Enarx project team. We get on well, we joke, we laugh, and we share a common aim: to make Enarx successful. I'm a firm believer in doing things you enjoy, where possible. Particularly in the early stages of a project, you need people who are enthusiastic and enjoy working closely together—even if they're geographically separated by thousands of kilometres.[2][12] If they don't get on, there's a decent chance that your and their enthusiasm for the project will falter, that the momentum will be lost, and that the project will end up failing. You won't always get the chance to choose those with whom you work, but if you can, then choose people you like and get on with.
### Conclusion: People
I didn't realise it when I started writing this article, but it's not really about project mechanics at all: it's about people. If you read back, you'll find the importance of people visible in every tip, even the one about choosing a license. Open source projects aren't really about code: they're about people, how they share, how they work together, and how they interact.
I'm certain that your experience of open source projects will vary, and I'd be very surprised if everyone agrees about the top seven things you should do for project success. Arguably, Enarx _isn't_ a success yet, and I shouldn't be giving advice at this stage of our maturity. But when I think back to all of the open source projects that I can think of that _are_ successful, people feature strongly, and I don't think that's a surprise at all.
* * *
1. Or "license," if you're from the US.
2. Or, in fact, miles.
* * *
_This article originally appeared on [Alice, Eve, and Bob a security blog][13] and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-project
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/stickers-osdc-lead.png?itok=2RNn132b (Stickers from all different open source projects and communities)
[2]: https://enarx.io/
[3]: https://confidentialcomputing.io/
[4]: https://opensource.com/article/20/1/getting-started-open-source
[5]: https://redhat.com/
[6]: tmp.3liT27tUaE#1
[7]: https://twitter.com/enarxproject
[8]: https://aliceevebob.com/
[9]: https://next.redhat.com/
[10]: https://gitter.im/enarx/
[11]: https://github.com/enarx/enarx/issues
[12]: tmp.3liT27tUaE#2
[13]: https://aliceevebob.com/2019/12/17/7-tips-for-kicking-off-an-open-source-project/

View File

@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Beyond Moore's Law: Neuromorphic computing?)
[#]: via: (https://www.networkworld.com/article/3514692/beyond-moores-law-neuromorphic-computing.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Beyond Moore's Law: Neuromorphic computing?
======
Some researchers think brain-copying architectures should replace traditional computing. One group explains how that might work.
With the conceivable exhaustion of [Moore's Law][1] (the observation that the number of transistors on a microchip doubles every two years), the search is on for new paths that lead to reliable incremental processing gains over time.
One possibility is that machines inspired by how the brain works could take over, fundamentally shifting computing to a revolutionary new tier, according to an explainer study published this month in Applied Physics Reviews.
“Today's state-of-the-art computers process roughly as many instructions per second as an insect brain,” say [the paper's][3] authors Jack Kendall, of Rain Neuromorphics, and Suhas Kumar, of Hewlett Packard Labs. The two write that processor architecture must now be completely rethought if Moore's Law is to be perpetuated, and that replicating the “natural processing system of a [human] brain” is the way forward.
Deep neural networks (DNNs) should be the foundation, the group believes. A DNN is basically dynamic deep learning in which layers pull high- and low-level detail features (edges and shapes, for example) from data. Kendall and Kumar explain that a human brain, which a DNN imitates, can sort through massive datasets and generally classify data better than a traditional computer, so it should be the starting point.
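As a toy illustration of that layering idea (mine, not the paper's): a two-layer network in which the first weight matrix acts as low-level feature detectors and the second combines them into higher-level outputs.

```python
# Toy two-layer network (illustrative only): layer 1 extracts low-level
# features from raw inputs; layer 2 combines them into higher-level
# outputs. Requires numpy.
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.normal(size=(4, 8))    # 4 samples, 8 raw input values each
W1 = rng.normal(size=(8, 16))  # low-level feature detectors ("edges")
W2 = rng.normal(size=(16, 3))  # high-level combinations ("shapes")

hidden = np.maximum(0.0, x @ W1)  # ReLU: keep only activated features
output = hidden @ W2              # combine features into 3 outputs
print(output.shape)               # (4, 3)
```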
This kind of thing is being attempted already. Existing artificial intelligence (AI) is a stab at getting computers to learn like a human brain. Much like the brain, AI engines learn from patterns in data. Algorithms are combined with processing power, and rewards are dished out when the machine gets it right.
A brain-inspired neuromorphic computer, however, would take computing a step further, the team believes. Neuromorphic computing mimics neuro-biological architectures in a kind of hybrid digital-analog circuit, much as a body does biologically.
The group identifies 10 basics that need to be gotten right to get to this next level:
**Parallelism:** Similar to how a brain works rapidly, numerous mathematical operations must be made to occur simultaneously. It's an extension of what we see now in graphics processing units (GPUs), where large-scale graphics are created using concurrent calculations called matrix multiplications.
**In-memory computing:** It wastes resources to fetch data from remote places, and human brains, indeed, don't do that; they store information in the same synapses that perform the thought. The introduction of electronic semiconductors that combine processing and memory ([memristors][5]) could help here. (I wrote a few weeks ago about [progress being made combining transistors with storage][6]. That combo could have similar resource advantages.)
**Analog computing:** Numbers are analog, not digital, the authors point out. Most real-world numbers aren't zeros and ones, so, for efficiency, any new computing architecture needs to accept that concept, adapt, and handle the inherent precision problems that result.
**Plasticity:** Real-time tuning needs to take place to account for things changing.
**Probabilistic computing:** The authors suggest computers should get less precise, just like the human brain. Coming up with degrees of probability is faster than precise calculation, and it requires less information.
**Scalability:** The depth of the network allows for complexity. By introducing more layers, one gains more scaling.
**Sparsity:** Large-scale networks, including neural computers, can't connect every node, just as not all neurons are connected to each other in the brain. Full connectivity is a redundancy that wastes resources; hub-and-spoke topology works better and allows for better scaling. The same should happen in the next computers, the researchers say.
**Learning (credit assignment):** The adjustment of synaptic weights (the strength and amount of influence synapses have) needs to respond as new information is presented.
**Causality:** The relationship between cause and effect in a result has to be addressed. Causal inference is hard, and machine learning generally has had problems getting this bit right.
**Nonlinearity:** The brain isn't linear like a computer. “The brain operates at the edge of chaos to produce the most optimal learning and computation,” the team says. The next computer architecture needs to encompass that brain-like nonlinearity but also operate within linearity, like today's electronics.
“Our present hardware is not able to keep up,” Kendall and Kumar say in their paper, which also looks at materials. “The future of computing will not be about cramming more components on a chip but in rethinking processor architecture,” an architecture they argue should be neuromorphic.
**Now see** [10 of the world's fastest supercomputers][7]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3514692/beyond-moores-law-neuromorphic-computing.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3189051/its-time-to-dump-moores-law-to-advance-computing-researcher-says.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://aip.scitation.org/doi/10.1063/1.5129306
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/2931818/brain-uploads-coming-as-pcs-get-more-powerful.html
[6]: https://www.networkworld.com/article/3510638/researchers-aim-to-build-transistors-that-can-compute-and-store-information-in-one-component.html
[7]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Never enough: Working openly with anxiety)
[#]: via: (https://opensource.com/open-organization/20/1/leading-openly-anxiety)
[#]: author: (Sam Knuth https://opensource.com/users/samfw)
Never enough: Working openly with anxiety
======
Open organizations reward initiative. For leaders with anxiety, that may fuel some exhausting patterns.
![A graph of a wave.][1]
_Editor's note: This article is part of a series on working with mental health conditions. It details the author's personal experiences and is not meant to convey professional medical advice or guidance._
Something in [a recent podcast interview][2] with food writer Melissa Clark caught my ear. Asked if she was a "productive person," Clark replied by saying: "I am an anxious person. I have a lot of anxiety, and my way of coping with it is to be very, very busy […]. That's how I deal with the world. I [pause] _do_."
Clark's point resonated with me, because I live with multiple mental health conditions that have a fundamental impact on how I approach my work. And I believe they've played a role in the career success I've experienced.
I have [generalized anxiety disorder][3] and [obsessive-compulsive disorder][4] (OCD). Both of these conditions have had serious impacts on my life. Some of those impacts have been disruptive. At the same time, some have contributed positively to my success and my development as a leader in a growing organization.
I've spent most of my career in an organization built on openness and transparency, and yet I have rarely spoken about my mental health and how it might impact my work. In sharing these stories now, I hope to help reduce the [stigma of mental health at work][5] and connect with others who may be experiencing similar or related situations. Given the prevalence of [mental illness globally][6], chances are good that if you don't experience a mental health condition first hand, then you're likely working on a daily basis with someone who does.
Learning about how mental illness manifests at work may help you navigate relationships with others as well as your own challenges. As a leader in an open organization, I feel compelled to share my experiences in the hope that they are useful to others. Working openly has specific implications for me—and, I suspect, for others with similar mental health conditions—which I'll detail in this series.
### How it started
My anxiety and OCD started shortly after I graduated from college and was living in New York City (though I could probably trace their histories further, this was the moment when they became apparent). The wave of confidence I rode during the dot com boom was crushed in the bubble-bursting crash in March of 2001. The memory of coming into work, being called into an all hands meeting (the first we'd ever had at my small company), and being told that as of today there was no money to make payroll, is etched into my mind.
My girlfriend and I had just moved into a $2100-per-month apartment. Fear of not being able to pay the rent, or being otherwise swallowed up by the city, resulted in a general sense of unease and nervousness in my gut, combined with very real symptoms of OCD.
For example, while walking to work at a temp job I took after my company folded, I would wonder if I had remembered to lock the apartment door. I would retrace the steps of my morning routine in my mind, trying to find that moment when I turned the key. If I couldn't specifically remember, the sense of unease and worry in my gut would build to the point that I couldn't think about anything else. Frequently, I would turn around, rush back home, and double check that the door was locked. When I did so, I would have to do something memorable, like repeat a phrase out loud, so that I could mark the moment in my memory. Then, back on the way to work, I would again wonder if I had locked the door, and I could say "Yes, and when you did it, you said out loud 'It's Tuesday morning and I'm locking the door!'"
I've spent most of my career in an organization built on openness and transparency, and yet I have rarely spoken about my mental health and how it might impact my work.
### So, how does this translate to an advantage at work?
One of the primary factors contributing to success at my company is an ability to take initiative. Much work needs to be done, and we're in an environment of continual growth and change—which means it's not always clear _what_ needs to be done or _who_ should be doing it. That creates the opportunity for people to observe a need and then step up to fill it. This is true in many open organizations, where everyone, regardless of job title or level, is encouraged to step forward.
Living with anxiety, I continually feel like I need to be doing something, or worry that I'm not doing enough. This motivates me to seek opportunities for contributing. In other words, the anxiety makes me proactive. Being "proactive" or a "self starter" is something you'll find in the "qualifications" section of many job postings!
I'm very fortunate to have built my career at a successful company, where continual growth creates financial incentives. One of my largest anxieties is about money—the fear of not having enough of it. Being in a leadership role at a quickly growing, profitable company exponentially multiplies what I call the anxiety performance loop (see Figure 1). A high quarterly bonus or other financial reward for a job well done is an invitation to do more, to raise the bar higher, to double down on the behaviors that seem to provide positive outcomes at work. Quarter after quarter after quarter.
![][7]
You can observe in all this a virtuous cycle: Opportunities I find at work satisfy my mental needs, and as a result I experience success and rewards. And this, on the face of it, is true.
So, what's the problem?
The anxiety-driven performance loop presents two challenges: it never ends, and it is based on a negative emotional state (fear and worry).
Perhaps the best phrase to illustrate this would be "What have you done for me lately?" In my mental landscape, this is what everyone is thinking about me all the time. No matter what I achieve, no matter what reward or recognition I receive, I imagine that within minutes the person acknowledging my achievement is thinking, "Now, why are you still sitting there? Get out and go do some more!"
People are not, of course, really thinking this. But my mind can locate enough truth in it to justify a quick return to the fear of not doing enough, which restarts the cycle.
The anxiety-driven performance loop presents two challenges: it never ends, and it is based on a negative emotional state (fear and worry).
We live in a world of short attention spans, high expectations, and significant competitive pressures. All of these are real challenges that fuel the idea that after each accomplishment we need to raise the bar higher and keep going. Having anxiety causes me to internalize these pressures, which triggers the "looping" effect.
The result is that both my company and my career benefit. Mentally, though, I get exhausted.
I have developed a few coping mechanisms to help me maintain balance:
* **Meditation.** After I was first diagnosed with my conditions almost 20 years ago, I saw a therapist. After a round of sessions, the therapist referred me to a meditation center, which opened up a new world of thought for me. I've recently been working to reinvigorate my daily practice.
* **Exercise.** I'm a bit compulsive about exercise. I make time every single day for at least one hour of exercise (for me it's walking, cross-country skiing, or running).
* **Self awareness, reality checks, and reminders.** Anxiety and OCD can lead to a distorted view of reality. I may overstate stakes, read too much into other people's motivations, or imagine consequences that are just not realistic. Reminding myself of the _true_ worst case scenario (which usually isn't that bad), realizing that other people have more important things to worry about than me, or reminding myself that this is "just a job" can all help bring me back to a realistic perspective. I also have a few other people who can help with this.
So far I've focused primarily on the performance-enhancing aspects of anxiety. In future articles, I'll discuss some of its performance-reducing aspects, as well as the impact my condition has on my colleagues.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/1/leading-openly-anxiety
作者:[Sam Knuth][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/samfw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_wavegraph.png?itok=z4pXCf_c (A graph of a wave.)
[2]: http://www.third-story.com/listen/melissaclark
[3]: https://www.mayoclinic.org/diseases-conditions/generalized-anxiety-disorder/symptoms-causes/syc-20360803
[4]: https://www.mayoclinic.org/diseases-conditions/obsessive-compulsive-disorder/symptoms-causes/syc-20354432
[5]: https://www.bloomberg.com/news/articles/2019-11-13/mental-health-is-still-a-don-t-ask-don-t-tell-subject-at-work
[6]: https://ourworldindata.org/mental-health
[7]: https://opensource.com/sites/default/files/images/open-org/anxiety_performance_loop.png

View File

@ -1,67 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What you need to know about System76's open source firmware project)
[#]: via: (https://opensource.com/article/20/1/system76-open-source-firmware)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
What you need to know about System76's open source firmware project
======
This Q&A with System76 principal engineer Jeremy Soller discusses the company's project for an open source embedded controller.
![Person using a laptop][1]
When you power on your computer, there's a lot more going on than you might think. One of the most important elements involved is the embedded controller (EC). This is what is responsible for providing abstractions for the battery, charging system, keyboard, touchpad, suspend/resume, and thermal control, among others. These controllers are typically proprietary and usually run proprietary firmware.
System76 is about to change that paradigm. Recently, the company adopted [coreboot][2] for their Galago Pro and Darter Pro laptop models. Now they intend to extend the open source approach to the EC. There is a project associated with Chrome OS devices called [Chromium EC][3] that is open source; however, it is only available for Chromebooks and specific EC chips. System76 wanted to supply their customers with an open source embedded controller firmware, too.
They had to start from scratch with a project that can compile for the EC architecture they have in their laptops, the [Intel 8051][4]. Their project for an [open source EC][5], the System76 EC, is a GPLv3-licensed embedded controller firmware for System76 laptops. It is designed to be portable to boards using several 8-bit microcontrollers. This project has grown to the point where it is now possible to boot a System76 Galago Pro and have the battery, keyboard, touchpad, suspend/resume, and thermal control mentioned earlier.
Eager to learn more, I emailed [Jeremy Soller][6], who is Principal Engineer at System76, for a deeper dive. Below are some highlights from our conversation.
### Q: What is the importance of the Intel 8051? Do all laptops use that chipset?
A: The embedded controller in our laptops, the ITE IT8587E, uses the Intel 8051 instruction set. Not all laptops use this instruction set, but many do. This is important because we need a toolchain that can compile firmware for the 8051 instruction set, as well as firmware that is written for that toolchain.
### Q: What is involved in writing open code to utilize the Intel 8051?
A: Mostly we have to define the registers for utilizing hardware on the embedded controller. There are protocols like SMBus and PECI that are implemented in hardware and need drivers for them. These drivers often have to be written for each embedded controller to abstract its hardware, so there is a common interface. Our EC firmware has abstractions for some Arduinos as well as the EC in our laptops, so we can write firmware that is portable.
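The firmware itself is written in C for 8-bit parts, but the portability pattern Soller describes can be sketched in a few lines of Python: every supported chip implements one small bus interface, and the common logic never touches chip-specific registers directly. All names below are hypothetical, not System76's actual code:

```python
# Hypothetical illustration of the driver-abstraction pattern described
# above; not System76's actual code, which targets 8-bit MCUs in C.
from abc import ABC, abstractmethod

class SMBusDriver(ABC):
    """Common interface every supported embedded controller must provide."""

    @abstractmethod
    def read_byte(self, device_addr: int, register: int) -> int: ...

    @abstractmethod
    def write_byte(self, device_addr: int, register: int, value: int) -> None: ...

class IT8587EDriver(SMBusDriver):
    """Chip-specific backend; a real one would poke the ITE part's registers."""
    def read_byte(self, device_addr, register):
        return 0x42  # placeholder for a real register read
    def write_byte(self, device_addr, register, value):
        pass         # placeholder for a real register write

def battery_charge_percent(bus: SMBusDriver) -> int:
    # Portable logic: runs unchanged on any chip implementing the interface.
    return bus.read_byte(0x0B, 0x0D)  # illustrative smart-battery address/command

print(battery_charge_percent(IT8587EDriver()))  # -> 66
```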
### Q: Google developed an open EC. Why not fork that project?
A: Our initial concept was to utilize Chromium EC for our open EC firmware, but this was not possible. After discussions with members of the team at Google working on it, it became clear that the firmware was not capable of being ported to 8-bit microcontrollers like the 8051 used in our EC, or the AVR used in many Arduinos. It was mostly targeted to ARM microcontrollers. We mutually concluded that it was better to start a new project targeting 8-bit microcontrollers, which is a new codebase that is GPLv3, as opposed to the BSD license used by Chromium EC.
### Q: How significant is it that System76 is open sourcing the code?
A: The only other x86_64 laptops with open source EC firmware are certain Chromebooks using Chromium EC. However, these laptops have poor support for full desktop Linux distributions such as Ubuntu. We are providing users of our laptops with significant capabilities to view and modify the behavior of the laptop to their needs, all while running a full desktop operating system. When it is paired with our open system firmware, there is very little that a user cannot do with one of these laptops.
### Q: What implications does open code have for firmware and other developers?
A: I strongly believe that open EC firmware will be just as important for hardware customization as open system firmware. The user can adjust keyboard mappings, change fan curves, modify battery charging settings, and more. The most exciting thing about this is that I cannot predict all that is possible with this change. Many of the components in the system are tied to the EC firmware. Having the ability to change the EC and system firmware means these components could potentially be modified in a large number of different, unpredictable ways.
### Q: What is really important about developing software for this EC, and what sets it apart?
A: Something particularly important is that the EC we are using is the IT8587E, and its instruction set architecture is Intel 8051. Chromium EC cannot be compiled for the 8051, due to being targeted toward 32-bit microcontrollers. Our project aims to support the ubiquitous 8-bit microcontrollers from many vendors, as well as Arduinos for easy prototyping. In addition, this unifies the work we were doing on [Thelio Io][7] with the work we have done on laptop firmware.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/system76-open-source-firmware
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://opensource.com/article/19/11/coreboot-system76-laptops
[3]: https://chromium.googlesource.com/chromiumos/platform/ec/+/master/README.md
[4]: https://en.wikipedia.org/wiki/Intel_MCS-51
[5]: https://github.com/system76/ec
[6]: https://www.linkedin.com/in/jeremy-soller-0475a117/
[7]: https://opensource.com/article/18/11/system76-thelio-desktop-computer

View File

@ -1,102 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Survey: Digital transformation can reveal network weaknesses)
[#]: via: (https://www.networkworld.com/article/3516030/survey-digital-transformation-can-reveal-network-weaknesses.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
Survey: Digital transformation can reveal network weaknesses
======
When enterprises embraced digital transformation, some found their existing networks had a limited ability to address cloud connectivity or access for mobile users.
[Digital transformation][1] is a catch-all phrase that describes the process of using technology to modernize or even revolutionize how services are delivered to customers. Not only technology but also people and processes commonly undergo fundamental changes for the ultimate goal of significantly improving business performance.
Such transformations have become so mainstream that IDC estimated that 40% of all technology spending now goes toward digital transformation projects, with enterprises spending in excess of $2 trillion on their efforts through 2019.
Every company's digital transformation project is unique. Whether it's transforming a company's marketing and sales processes by using machine learning to garner deep insights on each and every customer, or building a seamless experience across sales channels and revamping distribution channels to provide the best products and resources to customers, a digital-transformation project is going to depend on, as well as have an impact on, the enterprise's network infrastructure.
Many companies assume their networks can handle these changes. But can they? Do these new ways of working strain the existing network infrastructure by imposing new requirements for agility, cloud access, security and mobility?
### Gauging Confidence in the Network Post-Digital Transformation
A recent survey of more than 1,300 IT professionals dared to ask about the impact of digital transformation on each respondent's enterprise network. Having just conducted its fourth annual state-of-the-WAN survey at the end of 2019, Cato Networks issued the report, [Networking in 2020: Understanding Digital Transformation's Impact on IT Confidence in Enterprise Networks][4]_._ Steve Taylor, publisher and editor-in-chief of Webtorials.Com, and Dr. Jim Metzler, principal at Ashton, Metzler, and Associates, were instrumental in designing and analyzing the results from the portion of the survey relating to digital transformation. There are some worthy observations here for network managers.
The study looked at networking and security priorities for IT professionals in 2020. As part of that process, the study sought to identify how ready enterprise networks are for the digital era. According to the report, “The modern business has data and users residing everywhere. And just as the enterprise network provided performance and security to data centers and branch offices in the past, so, too, it must provide performance and security to the cloud and mobile users—both hallmarks of digital initiatives.” Without a network that delivers the right infrastructure with the right performance and security levels anywhere, digital transformation efforts can run aground.
A total of 1,333 respondents took part in the survey in late 2019. Qualified respondents were those who work in IT and are involved in the purchase of telco services for enterprises with an SD-WAN or MPLS backbone (or a mix of MPLS and Internet VPN). The vast majority of the respondents say they are moderately or extremely involved in their organization's digital transformation initiatives.
More than half of respondents said they work for companies with a global or regional footprint. Nearly half of respondents work for companies with more than 2,500 employees. The vast majority said their organization spans 11 or more locations, with a quarter of the respondents from companies with more than 100 locations. All respondents' companies have some cloud presence and most have two or more physical [data centers][5].
To gauge the impact of digital transformation on the network, the survey asked a number of qualitative questions pertaining to network characteristics that include agility, security, performance, and management and operations. For each characteristic, the study looked at the “network confidence level”; that is, whether the respondent feels more or less confident in the network's capabilities in that area following the deployment of the transformation project. The study segmented respondents by the type of network they operate—[MPLS][6], hybrid (MPLS and Internet-based VPN), [SD-WAN][7], or [SASE][8] (secure access service edge, pronounced “sassy”). SASE converges SD-WAN and other networking capabilities and a complete security stack into a global, cloud-native platform. (Disclosure: Report publisher Cato Networks delivers an SD-WAN service and also bills itself as the world's first SASE platform.)
**Overall Findings**
I'll get to the results about network confidence level in a moment. First, let's look at some general information disclosed in the report:
* Budgets are growing in 2020. Respondents report that both their network and their security budgets are expected to grow in 2020. That's good news, considering both areas are being asked to do more.
* Site connectivity continues to drive the major networking challenges for 2020. This includes bandwidth costs, performance between locations, and managing the network.
* Mobility is becoming strategic for network buyers. The importance of managing mobile and remote access has grown significantly since the last annual survey. Addressing this need has become another top networking challenge.
* Security is an essential consideration for [WAN][9] transformation. Enterprises must have a multi-edge security strategy that includes defending against emerging threats like malware/ransomware, enforcing corporate security policies on mobile users, and full awareness of the cost of buying and managing security appliances and software.
* The most critical applications are now in the cloud. More than half (60%) of all respondents indicate that their organizations' most critical applications will be hosted in the cloud over the next 12 months. This has a huge impact on how users will access the cloud via their WAN.
* Digital initiatives are driving a rethinking of legacy networks. More than half of the respondents whose organizations still rely on MPLS say their organizations are actively planning to deploy SD-WAN in the next 12 months to lower costs and support new business initiatives.
### Digital transformations rattle network confidence
To better understand why enterprises are abandoning MPLS and what lessons can be derived for any WAN transformation initiative, respondents were asked to rate a series of statements evaluating their perceptions of their network's agility, management and operations, performance, and security. The respondents were then grouped by network type in order to assess the change in network confidence pre- and post-digital transformation.
With one exception, respondents express lower confidence in their networks post-digital transformation. This is true in areas of MPLS's presumed strength, such as performance, and it's even true for hybrid networks as well as for SD-WAN. As organizations roll out digital initiatives, they uncover the weaknesses in their existing networks, such as a limited ability to address cloud connectivity or mobile user access.
According to the report, the only exception is when respondents run a SASE architecture. They express greater confidence post-digital transformation. SASE's convergence of SD-WAN with security, cloud connectivity, and mobility is well suited for digital transformation but may only be appreciated when required by the business.
Going back to the network characteristics of agility, security, performance, and management and operations, let's look at how each one is perceived in terms of respondents' network confidence.
* Network agility: This characteristic includes the ability to add new sites, adjust available bandwidth, add cloud resources, and generally adapt quickly to changing business needs. It's understandable that respondents with an MPLS-based network would rate their confidence in network agility as low, but confidence among respondents who deployed an SD-WAN dropped the most when asked about rapidly delivering new public cloud infrastructure. The opposite is also true: SASE's built-in cloud connectivity is a major factor in respondents being more confident in their network agility post-digital transformation.
* Security: It's critical to protect users and resources regardless of the underlying network. MPLS does not protect resources and users, and certainly not those connected to the Internet, leading MPLS-only respondents to be significantly less confident in their network's security post-digital transformation. SD-WAN respondents also demonstrate lower confidence in security post-digital transformation, largely because SD-WAN on its own fails to restrict access to specific applications or provide the advanced security tools needed to protect all edges: mobile devices, sites, and cloud resources. By contrast, SASE confidence grew post-digital transformation. Converging a complete security stack into the network allows SASE to bring granular control to sites, mobile, and cloud resources.
* Performance: Delivering cloud resources presents problems for MPLS and SD-WAN. Users expect their cloud application experience to be as responsive as on-premises applications. This point plays a significant role in performance confidence. When asked if they can provide access to cloud-based resources with performance and availability comparable to internally hosted resources, respondents with MPLS, hybrid WAN, and SD-WAN networks showed a significant drop-off in confidence post-digital transformation. On the other hand, SASE solutions that include native cloud optimization improve cloud performance out of the box, making those network owners more confident that they can deliver what users need.
* Management and operations: Respondent confidence was high before digital transformation but dropped off post-digital transformation across all network types except for SASE. According to the report, the lack of redundant last-mile connections with MPLS leaves sites susceptible to cable cuts and other last-mile problems. Adding Internet VPNs to MPLS improves last-mile access but still does not allow organizations to automatically overcome last-mile issues without downtime. SD-WAN and SASE are better able to overcome last-mile issues with active/active configurations.
Digital transformation initiatives can vastly change network traffic patterns, bandwidth requirements, access locations, and security needs. These changes might not be apparent until the project is fully deployed. Every organization needs a network infrastructure that provides adequate performance, security, agility, and manageability to support digital initiatives, now and into the future. Some network architectures are better at providing those characteristics than others. IT organizations that want to be confident in their network's ability to meet the future need to consider areas such as cloud, mobility, and especially security when transforming their WANs today.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3516030/survey-digital-transformation-can-reveal-network-weaknesses.html
作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3512830/how-to-deal-with-the-impact-of-digital-transformation-on-networks.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://go.catonetworks.com/Survey-Networking-in-2020.html
[5]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[6]: https://www.networkworld.com/article/2297171/network-security-mpls-explained.html
[7]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[8]: https://www.networkworld.com/article/3453030/sase-is-more-than-a-buzzword-for-bioivt.html
[9]: https://www.networkworld.com/article/3248989/what-is-a-wan-wide-area-network-definition-and-examples.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -1,54 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Gartner: Data-center spending will inch up this year)
[#]: via: (https://www.networkworld.com/article/3515314/data-center-spending-will-inch-up-in-year-of-accelerated-it-investment-gartner.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Gartner: Data-center spending will inch up this year
======
After a down year in 2019, IT spending will pick up this year, led by enterprise software and the cloud, according to IT research firm Gartner, with an uptick even for data-center spending, which had taken a dip.
Global IT spending could reach $3.865 trillion in 2020, up 3.4% over 2019, according to newly released data from IT research firm Gartner. In comparison, 2019 saw just 0.5% growth over 2018 levels. Spending is expected to continue to climb into 2021, surpassing the $4 trillion mark with 3.7% growth.
Spending on hardware, including edge devices and data center hardware, will be deemphasized, while investments in software and services, including cloud, will see an increase, the firm predicts.
**READ MORE:** [Data centers in 2020 will feature greater automation, cheaper memory][1]
"With the waning of global uncertainties, businesses are redoubling investments in IT as they anticipate revenue growth, but their spending patterns are continually shifting," said John-David Lovelock, distinguished research vice president at Gartner, in a statement.
After a decline of 2.7% in 2019, data center systems sales will grow 1.9% in 2020, while devices (everything from laptops to printers to smartphones) will rise just 0.8% in 2020 after a 4.3% decline in 2019.
IT services will rise 5.0%, increasing its momentum over 2019, which saw a rise of 3.6%. But the real action will be in enterprise software, which is expected to grow 10.5% this year. This includes both on-premises software (such as Microsoft, Oracle) and cloud services. More of the spending growth is aimed at SaaS than on-premises software, Gartner notes.
"Almost all of the market segments with enterprise software are being driven by the adoption of software as a service (SaaS)," Lovelock said. "We even expect spending on forms of software that are not cloud to continue to grow, albeit at a slower rate. SaaS is gaining more of the new spending, although licensed-based software will still be purchased and its use expanded through 2023."
In a conference call with clients, Lovelock said there has been a shift over the last three years, where the world is going from "'we like all tech' to 'we like softer tech and not all tech.'" The trend is toward consulting, software, and the cloud, the softest of tech.
The weakest segment is mobile devices. It's not that people don't want them any longer, but the mobile device space no longer has a must-have feature, nothing to make people line up for days in advance like we saw a decade ago with each new iPhone release. "People are happy with the devices they have. The market is down to a replacement market so they extend their spending," Lovelock said.
In the data center space, there's a similar pattern. Servers last longer, and at the same time, more work is being done outside the company at colocation facilities and in the cloud.
"The cloud is taking a lot [of money] out of the data center," Lovelock said. "SaaS and IaaS are all viable for organizations but taking data center dollars. Where we keep saying software is growing most quickly, thats very true. But recognize that it is also taking money from other areas. Budgets arent going up, concentrations in spending is where we are seeing things happen."
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3515314/data-center-spending-will-inch-up-in-year-of-accelerated-it-investment-gartner.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[zxy-wyx](https://github.com/zxy-wy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3487684/data-centers-in-2020-automation-cheaper-memory.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -1,70 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Open-RAN could white-box 5G)
[#]: via: (https://www.networkworld.com/article/3516075/how-open-ran-could-white-box-5g.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
How Open-RAN could white-box 5G
======
Open-hardware, software-defined mobile radio infrastructure could kick-start private LTE and 5G and perhaps eventually lead to their supremacy over Wi-Fi for enterprises.
One of Britain's principal mobile networks, O2, has just announced that it intends to deploy Open Radio Access Network technology (O-RAN) in some locations.
O-RAN is a wireless industry initiative for designing and building radio network solutions using “a general-purpose, vendor-neutral hardware and software-defined technology,” explains Telecom Infra Project, the body responsible, on its website.
TIP is the trade body that, along with Intel and Vodafone, conceived of the technology alternative, an attempt at toppling the dominance of Ericsson, Huawei, and Nokia, which provide almost all mobile telco infrastructure now.
O2 joins fellow UK mobile operator Vodafone, which is also experimenting with O-RAN.
O2 is working with partners including Mavenir, DenseAir, and WaveMobile to introduce O-RAN solutions, [the Telefónica-owned network says in a press release][2].
What it means by that is that encouraging less powerful, smaller vendors to provide infrastructure might lessen the grip that Ericsson, Huawei, and Nokia hold over mobile networks. Costs could be reduced because those big three would have to reduce prices to remain competitive.
But most interestingly, it also allows for the standardizing of telco infrastructure, possibly making future private mobile networks cheaper and easier to implement. Private LTE and 5G networks are expected to generate $4.7 billion in revenue by the end of this year, [according to an SNS Telecom &amp; IT report published in October][3]. That number is expected to reach $8 billion by the end of 2023.
Indeed, white-box telco equipment might be the result. White-box IT hardware is already used in enterprises and could be advantageous in telco equipment, too. Conceivably, as telco equipment becomes cheaper and more readily available, and as new unlicensed, shared spectrum opens up, such as the Citizens Broadband Radio Service being launched in the U.S., implementing an enterprise-level private LTE or 5G network with "white-box" hardware and programmable software-defined networks may one day be no harder than a Wi-Fi network install is today.
“By providing authority over wireless coverage and capacity, private LTE and 5G networks ensure guaranteed and secure connectivity, while supporting a wide range of applications,” SNS Telecom &amp; IT says of private mobile networks in its report. Factory robotics and IoT sensor networks will be driving that investment. LTE and 5G are being pitched as more reliable than Wi-Fi, in part because of less congestion. Private mobile networks can be provided by existing mobile-network operators or built independently.
In the case of TIP's O-RAN, the vision is for modular base stations with a software stack running on commercial off-the-shelf (COTS) hardware. Field-programmable gate arrays (FPGAs) are also part of the concept.
“Thus the main objective of this project is to have RAN solutions that benefit from the flexibility and pace of innovation associated with software-driven developments on fully programmable platforms,” [TIP said on its website][7] last year. The influence of Wi-Fi diminishes.
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3516075/how-open-ran-could-white-box-5g.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[2]: https://news.o2.co.uk/press-release/o2-to-further-improve-network-service-for-customers-using-open-radio-access-network-ran-technology/
[3]: https://www.snstelecom.com/private-lte
[4]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
[5]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[6]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[7]: https://telecominfraproject.com/tip-project-group-feature-openran/
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@ -1,60 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IBM Power-based cloud instances available… from Google)
[#]: via: (https://www.networkworld.com/article/3516409/ibm-power-based-cloud-instances-available-from-google.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
IBM Power-based cloud instances available… from Google
======
IBM's high-performance RISC processors are now available to Google Cloud Platform users as a service.
IBM and Google may be competitors in the cloud platform business, but that doesn't prevent them from working together. Google is partnering with IBM to offer "Power Systems as a service" on its Google Cloud platform.
IBM's Power processor line is the last man standing in the RISC/Unix war, having outlasted Sun Microsystems' SPARC and HP's PA-RISC. Along with mainframes, it's the last server hardware business IBM has, having divested its x86 server line in 2014.
IBM already sells cloud instances of Power to its IBM Cloud customers, so this is just an expansion of existing offerings to a competitor with a considerable data center footprint. Google said that customers can run Power-based workloads on GCP on all of its operating systems save mainframes — AIX, IBM i, and Linux on IBM Power.
This gives GCP customers the option of moving legacy IT systems running on IBM Power Systems to a hybrid cloud and the option of using Google or IBM, which have their respective strengths. IBM is focused on IBM customers, while Google is more focused on containerization, AI and ML, and low latency.
IBM gains because its customers now have a second option, and customers like choice. GCP wins because it gives the company access to legacy IBM customers, something it never had as a relatively new company. It has no on-premises legacy, after all.
"For organizations using a hybrid cloud strategy, especially, IBM Power Systems are an important tool. Because of their performance and ability to support mission critical workloads—such as SAP applications and Oracle databases—enterprise customers have been consistently looking for options to run IBM Power Systems in the cloud," wrote Kevin Ichhpurani, GCP's corporate vice president of global ecosystem in a [blog post][1] announcing the deal.
"IBM Power Systems for Google Cloud offers a path to do just that, providing the best of both the cloud and on-premise worlds. You can run enterprise workloads like SAP and Oracle on the IBM Power servers that youve come to trust, while starting to take advantage of all the technical capabilities and favorable economics that Google Cloud offers," Ichhpurani added.
Ichhpurani also noted several other benefits for customers:
* Integrated billing: GCP customers can deploy the solution through the Google Cloud Marketplace and get a single bill for their GCP and IBM Power use.
* Private API access: IBM Power resources can access Google Cloud's Private API Access technology securely and at low latency.
* Integrated customer support: Customers of both GCP and IBM have a single point of contact for any issues.
* Rapid deployment: An intuitive new management console enables quick ramp-up and rapid deployment of the solution.
IBM Power is available to GCP customers now.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3516409/ibm-power-based-cloud-instances-available-from-google.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://cloud.google.com/blog/products/gcp/ibm-power-systems-now-available-on-google-cloud
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -1,117 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Some Useful Probability Facts for Systems Programming)
[#]: via: (https://theartofmachinery.com/2020/01/27/systems_programming_probability.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Some Useful Probability Facts for Systems Programming
======
Probability problems come up a lot in systems programming, and I'm using that term loosely to mean everything from operating systems programming and networking, to building large online services, to creating virtual worlds like in games. Here's a bunch of rough-and-ready probability rules of thumb that are deeply related and have many practical applications when designing systems.
## $\frac{1}{N}$ chance tried $N$ times
Retries are everywhere in systems programming. For example, imagine you're designing a world for a roleplaying game. When monsters in one particular forest are defeated, they have a small chance of dropping some special item. Suppose that chance is $\frac{1}{10}$. Basic probability says that players are expected to need 10 victories before gaining a special item. But in probability, “expected value” is just a kind of average value, and there's no guarantee that players will get the item even if they do win 10 battles.
What's a player's chance of getting at least one special item after 10 battles? Let's just try getting a computer to calculate the probability of getting at least one success for $N$ tries at a $\frac{1}{N}$ chance, for a range of values of $N$:
![Plot of probability of at least one success after trying a 1/N chance N times. The probability starts at 100% and drops, but quickly flattens out to a value just below 65%.][1]
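Here's a minimal sketch (my code, not from the original post, which only shows the resulting plot) of the calculation behind that curve:

```python
# Chance of at least one success in N independent tries, each with probability 1/N.
for n in (1, 2, 3, 5, 10, 100, 1000):
    print(f"N = {n:>4}: {1 - (1 - 1 / n) ** n:.3f}")
```

The printed values flatten out just below 0.632 almost immediately, matching the plot.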
Eyeballing the graph, it looks like the probability is roughly constant for $N$ greater than about 3. In fact, it converges on $1 - e^{-1}$, where $e$ is Euler's number, 2.718281828459… This number (along with $e^{-1}$) is extremely common in engineering, so here's a table of practical values:
exp(-1.0) | 1.0-exp(-1.0)
---|---
36.7% | 63.2%
A bit over a third | A bit under two thirds
So, if you don't need an exact answer (and you often don't) you could just say that if the drop probability is $\frac{1}{10}$, most players will get a special item within 10 battles, but (close enough) about a third won't. Because the monster battles are statistically independent, we can make a rough guess that about 1 in 9 players still won't have a special item after 20 battles. Also, roughly 60% won't after 5 battles, because $0.6 \times 0.6 = 0.36$, so 60% is close to the square root of 36.7%.
> If you have a $\frac{1}{N}$ chance of success, then you're more likely than not to have succeeded after $N$ tries, but the probability is only about two thirds (or your favourite approximation to $1 - e^{-1}$). The value of $N$ doesn't affect the approximation much as long as it's at least 3.
Here's the proof: The chance of the player failing to get a special item after all 10 battles is $(\frac{9}{10})^{10} = 0.349$, so the chance of getting at least one is $1 - 0.349 = 0.651$. More generally, the chance of succeeding at least once is $1 - (1 - \frac{1}{N})^{N}$, which converges on $1 - e^{-1}$ (by [one of the definitions of $e$][2]).
By the way, this rule is handy for quick mental estimates in board games, too. Suppose a player needs to get at least one 6 from rolling 8 six-sided dice. What's the probability of failure? It's about $\frac{1}{3}$ for the first 6 dice, and $\frac{5}{6} \times \frac{5}{6}$ for the remaining two, so all up it's about $\frac{1}{3} \times \frac{25}{36} \approx \frac{8}{36} \approx \frac{1}{4}$. A calculator says $(\frac{5}{6})^{8} = 0.233$, so the rough approximation was good enough for gameplay.
## $N$ balls in $N$ buckets
I'll state this one up front:
> Suppose you throw $N$ balls randomly (independently and one at a time) into $N$ buckets. On average, a bit over a third ($e^{-1}$) of the buckets will stay empty, a bit over a third ($e^{-1}$ again) will have exactly one ball, and the remaining quarter or so ($1 - 2e^{-1}$) will contain multiple balls.
The balls-and-buckets abstract model has plenty of concrete engineering applications. For example, suppose you have a load balancer randomly assigning requests to 12 backends. If 12 requests come in during some time window, on average about 4 of the backends will be idle, only about 4 will have a balanced single-request load, and the remaining (average) 3 or 4 will be under higher load. Of course, all of these are averages, and there'll be fluctuations in practice.
As another example, if a hash table has $N$ slots, then if you put $N$ different values into it, about a third of the slots will still be empty and about a quarter of the slots will have collisions.
If an online service has a production incident once every 20 days on average, then (assuming unrelated incidents) just over a third of 20-day periods will be dead quiet, just over a third will have the “ideal” single incident, while a quarter of 20-day periods will be extra stressful. In the real world, production incidents are even more tightly clustered because they're not always independent.
This rule of thumb also hints at why horizontally scaled databases tend to have hot and cold shards, and why low-volume businesses (like consulting) can suffer from feast/famine patterns of customer demand.
Random allocation is much more unbalanced than we intuitively expect. A famous example comes from World War II when, late in the war, the Germans launched thousands of V-1 and V-2 flying bombs at London. Hitting a city with a rocket from across the Channel already required pushing 1940s technology to new limits, but several British analysts looked at maps of bomb damage and concluded that the Germans were somehow targeting specific areas of London, implying an incredible level of technological advancement. In 1946, however, an actuary did the proper statistical analysis and said that, no, [the clustering of bomb damage was simply what you'd expect from random chance][3]. (That analysis is based on the [Poisson distribution][4], and the ratios in the rule for $N$ balls and $N$ buckets can be calculated from a Poisson distribution with $\lambda = \frac{N}{N} = 1$.)
![25 points uniformly randomly placed on a 5x5 grid, showing spurious clustering. 8 boxes are empty, 10 boxes contain one point and 7 boxes contain two or more points.][5]
Random allocation only balances out when the number of “balls” is much larger than the number of “buckets”, i.e., when averaging over a large number of items or a long time period. That's one of the many reasons that engineering solutions that work well for large-scale FAANG companies can be problematic when used by companies that are orders of magnitude smaller.
Proving the third-third-quarter rule is pretty easy if you look at just one bucket. Each of the $N$ balls represents a $\frac{1}{N}$ chance of adding a ball to the bucket, so the chance of the bucket staying empty is just the $e^{-1} \approx 36.7\%$ from the first rule. Linearity of expectation means we can combine the results for each bucket and say that 36.7% of _all_ buckets are expected to be empty, even though the bucket counts aren't independent. Also, there are $N$ possible ways of exactly one ball landing in the bucket, and each way requires one ball to fall in (with probability $\frac{1}{N}$) and the other $N - 1$ balls to miss (with probability $1 - \frac{1}{N}$). So the probability of exactly one ball falling in is $N \times \frac{1}{N} \times (1 - \frac{1}{N})^{N - 1} \rightarrow e^{-1}$.
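If you'd rather check the third-third-quarter rule empirically than follow the algebra, here's a quick Monte Carlo sketch (again my code, not the post's):

```python
import random
from collections import Counter

def throw(n):
    """Throw n balls into n buckets; return (empty, single, multiple) bucket counts."""
    hits = Counter(random.randrange(n) for _ in range(n))
    singles = sum(1 for c in hits.values() if c == 1)
    empty = n - len(hits)
    return empty, singles, n - empty - singles

n, trials = 1000, 200
totals = [0, 0, 0]
for _ in range(trials):
    for i, v in enumerate(throw(n)):
        totals[i] += v

# Fractions of empty/single/multiple buckets; expect roughly
# [0.368, 0.368, 0.264], i.e. e^-1, e^-1 and 1 - 2e^-1.
print([round(t / (n * trials), 3) for t in totals])
```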
## Fixed points of random permutations
I don't think this rule has as many practical applications as the $N$ balls/buckets rule, but it's kind of a freebie.
Think of a battle game in which 6 players start from 6 spawn/home points. If the players play a second round, what's the chance that someone starts from the same point? Mathematically, that's asking about the chance of a random permutation having a “fixed point”.
> If a group of things are randomly shuffled, a bit over a third ($e^{-1}$) of the time there'll be no fixed points, a bit over a third ($e^{-1}$) of the time there'll be just one fixed point, and the remaining quarter or so of the time there'll be two or more.
The number of fixed points in a random shuffle happens to follow approximately the same distribution as the number of balls in the buckets before, which can be [proven from first principles using the inclusion-exclusion principle][6]. But there's an even simpler proof for a related fact:
> A random permutation has exactly one fixed point on average, regardless of size.
If there are $N$ things, each one has a $\frac{1}{N}$ chance of ending up in its original location after the shuffle, so on average there'll be $N \times \frac{1}{N} = 1$ fixed points. Note that it's impossible to get exactly one fixed point by shuffling a two-element set (try it!) but 1 is still the average of 2 and 0. (“Average” doesn't always mean what we want it to mean.)
That proof might seem too simple, but it's a demonstration of how powerful linearity of expectation is. Trying to calculate statistics for permutations can be tricky because the places any item can go depend on the places all the other items have gone. Linearity of expectation means we don't have to care about all the interactions as long as we only need to know the average. The average isn't always the most useful statistic to calculate, but it's often the easiest by far.
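Both facts are easy to spot-check with a short simulation (a sketch of mine, using the six spawn points from the example above):

```python
import random

def fixed_points(n):
    """Shuffle 0..n-1 and count how many items land back where they started."""
    perm = list(range(n))
    random.shuffle(perm)
    return sum(1 for i, v in enumerate(perm) if i == v)

trials = 100_000
tally = {0: 0, 1: 0, "2+": 0}
total = 0
for _ in range(trials):
    f = fixed_points(6)
    total += f
    tally[f if f < 2 else "2+"] += 1

print({k: round(v / trials, 3) for k, v in tally.items()})  # ~{0: 0.368, 1: 0.368, '2+': 0.264}
print(total / trials)  # ~1.0 fixed point on average
```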
## The coupon collector's problem
Let's look at [the common “loot box” mechanism][7]. Specifically, suppose there are 10 collector items (say, one for each hero in a franchise) that are sold in blind boxes. Let's take the fairest case, in which there are no rare items and each item has an equal $\frac{1}{10}$ chance of being in a given box. How many boxes will a collector buy on average before getting a complete set? This is called the coupon collector's problem, and for 10 items the answer is about 29.
> The answer to the coupon collector's problem is a bit more than $N \ln N$ (add $\frac{N}{2}$ for some more accuracy).
($\ln N$ is $\log$ base $e$, or just `log(N)` in most programming languages.)
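The exact expected value is $N$ times the $N$th harmonic number, $N(1 + \frac{1}{2} + \cdots + \frac{1}{N})$, so the approximations are easy to check (my sketch, using that standard formula):

```python
from math import log

def expected_boxes(n):
    # Exact coupon-collector expectation: N * (1/1 + 1/2 + ... + 1/N).
    return n * sum(1 / k for k in range(1, n + 1))

print(f"{'N':>5} {'exact':>8} {'N ln N':>8} {'N ln N + N/2':>12}")
for n in (10, 100, 1000):
    print(f"{n:>5} {expected_boxes(n):>8.1f} {n * log(n):>8.1f} {n * log(n) + n / 2:>12.1f}")
```

For $N = 10$ the exact answer is 29.3, against 23.0 for $N \ln N$ and 28.0 with the $\frac{N}{2}$ correction.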
The coupon collector's problem hints at why the loot box mechanism is so powerful. The $N$ balls in $N$ buckets rule tells us that the collector will have about two thirds of the items after buying 10 boxes. It feels like the collector is most of the way there, and it would be a shame to give up and let so much progress go to waste, but actually 10 boxes is only about a third of the expected number of boxes needed. That's a simplistic model, but item rarity, variation of box type, and (in computer games) making some items “unlockable” by completing sets of other items (or fulfilling other dependencies) only make it easier to get collectors to buy more than they originally expect.
The $N \ln N$ rule is very rough, so here's a plot for comparison:
![Plot of approximations to the coupon collector's problem. N ln N underestimates significantly, but has the right growth rate. N ln N + N/2 still underestimates slightly, but the error is less than 10%. The 1:1 slope N is also included to show that, beyond small values of N, multiple times N purchases are needed to get all items on average.][8]
The exact value is rarely needed, but it's useful to know that you'll quickly need multiple times $N$ trials to get all $N$ hits. Any application of the $N$ balls/buckets rule naturally extends to a coupon collector's problem (e.g., on average you'll need to put over $N \ln N$ items into a hash table before all $N$ slots are full), but the coupon collector's problem comes up in other places, too. Often it's tempting to use randomness to solve a problem statelessly, and then you find yourself doing a coupon collector problem. A cool example is [the FizzleFade effect in the classic 90s first-person shooter Wolfenstein 3D][9]. When the player character died, the screen would fill up with red pixels in what looks like random order. A simple and obvious way to implement that would be to plot red pixels at random coordinates in a loop, but filling the screen that way would be boring. With $320 \times 200 = 64000$ pixels, most (~63.2%) of the screen would be filled red after 64000 iterations, but then the player would have to wait over $\ln(64000) \approx 11$ times longer than that watching the last patches of screen fade away. The developers of Wolfenstein had to come up with a way to calculate a pseudo-random permutation of pixels on the screen, without explicitly storing the permutation in memory.
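The classic stateless trick for this is a maximal-length linear feedback shift register, which visits every nonzero value in its range exactly once before repeating. Here's a sketch of the general idea (my illustration, built on the primitive polynomial $x^{17} + x^{14} + 1$; the taps the Wolfenstein developers actually chose are covered in the linked article):

```python
def fizzlefade(width=320, height=200):
    """Yield every (x, y) on the screen exactly once, in pseudo-random order,
    using only O(1) state: a 17-bit maximal-length Fibonacci LFSR."""
    state = 1
    while True:
        y, x = divmod(state - 1, width)  # map states 1..2^17-1 onto coordinates
        if y < height:                   # discard states that fall off-screen
            yield x, y                   # plot one red pixel here
        bit = (state ^ (state >> 3)) & 1    # feedback from bits 17 and 14
        state = (state >> 1) | (bit << 16)
        if state == 1:                   # back at the seed: full cycle done
            return

# Sanity check: all 64000 pixels are visited exactly once.
assert sorted(fizzlefade()) == [(x, y) for x in range(320) for y in range(200)]
```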
Here's a loose explanation of where the $\ln N$ factor comes from: We know already that any pixel has approximately $\frac{1}{e}$ chance of not being coloured by any batch of $N$ pixel plots. So, after a batch of $N$ pixel plots, the number of unfilled pixels goes down by a factor of $e$ on average. If we assume we can multiply the average because it's close enough to the geometric mean, the number of unfilled pixels will drop from $N$ to something like $\frac{N}{e^{k}}$ after $k$ batches. That means the number of batches needed to go from $N$ unfilled pixels to 1 is something like $\ln N$, from the basic definition of logarithms.
In the computer age it's easy to get an answer once we know we have a specific probability problem to solve. But rough rules like the ones in this post are still useful during the design phase, or for getting an intuitive understanding of why a system behaves the way it does.
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2020/01/27/systems_programming_probability.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://theartofmachinery.com/images/systems_programming_probability/one_on_n_tried_n_times.svg
[2]: https://en.wikipedia.org/wiki/Characterizations_of_the_exponential_function#Characterizations
[3]: https://www.cambridge.org/core/journals/journal-of-the-institute-of-actuaries/article/an-application-of-the-poisson-distribution/F75111847FDA534103BD4941BD96A78E
[4]: https://en.wikipedia.org/wiki/Poisson_distribution
[5]: https://theartofmachinery.com/images/systems_programming_probability/london_v1_simulation.svg
[6]: https://golem.ph.utexas.edu/category/2019/11/random_permutations_part_1.html
[7]: https://www.pcgamer.com/au/the-evolution-of-loot-boxes/
[8]: https://theartofmachinery.com/images/systems_programming_probability/coupon_collector.svg
[9]: http://fabiensanglard.net/fizzlefade/index.php

View File

@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Communication superstars: A model for understanding your organization's approach to new technologies)
[#]: via: (https://opensource.com/open-organization/20/1/communication-technology-superstars)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland)
Communication superstars: A model for understanding your organization's approach to new technologies
======
Adopting new communication technologies can make your organization more open. But are your people ready?
![Different cell phones][1]
Multiple books in [the _Open Organization_ series][2] discuss the many ways new communication technologies are changing the nature of both work and management. I've seen these changes firsthand during my nearly three decades working for Japanese corporations. Over time, I've been able to classify and characterize some of the impacts these technologies—particularly new telecommunication technologies and social media—are having on daily life in many organizations. And in April 2016, I shared those observations in an article called _[How new communication technologies are affecting peer-to-peer engagement][3]_.
But a lot can change in a little under four years.
[The Open Organization Ambassadors][4] have learned a great deal about the ways open principles are impacting organizational practices. In particular, we've developed an [Open Organization Definition][5] that specifies the five principles that distinguish open organizations from other types of organization—namely, more transparency, more inclusivity, greater adaptability, deeper collaboration, and a sense of purpose (teams/community). I've also delivered [a presentation on this topic][6] several times since 2016 and learned new insights along the way. So I'd like to update this article with a few comments that reflect those findings. And then, in a follow-up article, I'd like to offer readers some guidelines on how _they_ can determine their organization's level of comfort with communication technology and use it to increase their success relative to industry competitors.
Simply put: New communication technologies are affecting the way peer-to-peer decision-making practices function in organizations today. And that's affecting several important organizational dimensions: peers' transparency with each other when making decisions, the sense of inclusivity between members in decision-making activities, their adaptability when situations change, their ability to collaborate with more individuals in the decision-making process, and their ability to build teams, groups, and communities to decide how to achieve their goals.
### Four approaches to communication technology
In Japan, I see companies that heavily promote today's communication technologies, as well as some that avoid them. Imagine four types of companies currently making use of today's communication technologies as they compete with other firms. These technologies are key because they influence the environment in which certain peer-to-peer communities must work. They affect those communities' collaboration and transparency with each other, their inclusivity with members they couldn't consider before because of location, their adaptability in a crisis, and their ability to build a sense of community among their members. This, in turn, affects members' enthusiasm, desire, and engagement—so _investment_ and _utilization_ are critical considerations. In fact, we can actually chart the four types of technology-adopters according to those two variables: investment and utilization.
Some companies are underinvested in new communication technologies, considering their needs and the relatively lower costs of these technologies today. And what they _do_ have, they're not using to capacity. I call these companies communication technology **"slow movers" (low investment/low utilization)**. Others buy whatever is available at any cost, but don't put what they've purchased to full use. I call these communication technology **"fashion followers" (high investment/low utilization)**. Still other companies invest in the bare minimum of communication technology, but what they do have they use to full capacity. I call these communication technology **"conservative investors" (low investment/high utilization)**. Lastly, there are some companies that invest heavily in communication technology and work very hard to put it to full use. I call these communication technology **"communication superstars" (high investment/high utilization)**.
These "communication superstars" have the ideal environment for peer-to-peer, front-line discussions and decision making. They have greater collaboration and transparency with each other. They include members on their teams that they wouldn't (or couldn't) consider before because of location. They are more adaptable in a crisis. And they have the ability to build a stronger sense of community among their members. Unfortunately, in Japan, particularly among smaller companies, I'd say more than 70 percent are "slow movers" or "conservative investors." If companies would pay more attention to investing in communication technology, and simultaneously increase their efforts at training staff to use the technology to its full potential, then peer-to-peer, front-line employees could explode with creativity and better leverage all five of the open organization principles I mentioned above.
These technologies affect four aspects of information today: volume, speed, quality, and distribution.
### Increased capacity for decision-making (volume)
In "communication superstar" environments, communication technologies can actually increase the amount of information that can be made available quickly. Gone are the days in which only researchers or professors have access to in-depth information. Now, front-line people can obtain volumes of information if they know what they're looking for. With more and greater in-depth information in communication superstar company environments, front-line people working there can have more educated discussions, leading to greater inclusivity and collaboration, which can allow them to make the types of decisions that only top management (supported by consultants and researchers) could have made in the past.
### Faster pace of decision-making and execution (speed)
New technologies in these "communication superstar" companies are leading to quicker information acquisition, feedback, and flow between the front-line members in the organizations, even if they are very widely dispersed.
Using the metaphor of adjusting the temperature of water coming out of a faucet, I would describe the effect this way: If you move the handle but the temperature changes very slowly, then finding the temperature you want becomes difficult, because the pace of temperature change is very slow, and differences between settings are difficult to determine. But if you move the handle and water temperature change is more immediate, you'll find that getting the correct temperature is much easier; you're moving quicker and making more rapid adjustments.
The same logic applies to peer-to-peer discussions and feedback. I have a five-minute-to-twenty-four-hour goal when replying to my worldwide customers. That means that if I receive an email from a customer (something that arrives on my desktop computer at home, my desktop computer in the office, or on my mobile phone), I like to reply within five minutes. This really surprises customers, as they're probably still sitting in front of their computer! In the worst case, I try to reply within 24 hours. This gives me a competitive advantage when attempting to get customers to work with me. Front-line, peer-to-peer communities in these "communication superstar" companies can have that same competitive advantage in making quality decisions and executing them faster. The capacity for speedier replies allows us to make more adjustments, more quickly. It keeps both employees and customers involved, motivated, and engaged. They become more transparent with information and include members they hadn't considered before. They can adapt more rapidly when redirection is required. They can collaborate at a more in-depth level and can build tighter, more trusting project communities. Information arriving too slowly can cause people to "turn off" and direct their attention elsewhere. This weakens the passion, dedication, and engagement of the project.
### Toward wiser decisions (quality)
Information not only travels more quickly when the business communication channels are adequate, but it's also subjected to more scrutiny through greater group collaboration and inclusivity. People can share second opinions and gather additional empirical data using these technologies. Furthermore, new communication technologies allow employees and managers to deliver data in new ways. With my years in sales training around the world, I've learned that using multiple visual aids, infographics, and so forth has greatly enhanced communication when English language barriers could have impeded it. All this can lead to high levels of peer-to-peer, front-line engagement, as up-to-date status reports can be quickly distributed and easily understood, making everyone more responsive.
### Maximal reach (distribution)
Not long ago, teammates had to be physically close to one another and know each other well in order to communicate successfully. That's no longer the case, as communication channels can be developed with people literally all over the world. This has led to greater global inclusivity and collaboration. Good communication is the outcome of developing a trusting relationship. For me, building trust with people I've never met face-to-face has taken a bit longer, but I've done it with modern technology. Developing trust this way has led to great peer-to-peer transparency.
Let me explain. Good communication starts with initial contact, whether meeting someone in person or virtually (via social media or some telecommunication format). Over some period of time and through several exchanges, a relationship starts to develop, and a level of trust is reached. People evaluate one another's character and integrity, and they also judge each other's competence and skills. With this deepening of trust over time, greater communication and collaboration can evolve. At that point, open and in-depth discussions and transparency on very difficult, complex, and sometimes uncomfortable topics can take place. With the ability to communicate at that level, peer-to-peer discussions and decisions can be made. With today's communication technology, greater information exchange can be made among a group of widely dispersed members, leading to an expanded team community. I currently have approximately 20 customers around the world. Some I have never met in person; most I have met in person just once. Being stationed in Japan can make regular get-togethers with Europeans and Americans rather difficult. Fortunately, with today's communication technology, I can find solutions for many problems without physically getting together, as I have built a trusting relationship with them.
### Concluding comments
With all the benefits of this "communication superstar" working environment in open organizations that promote peer-to-peer discussions, decision-making, and management, I recommend that the other three groups move in that direction. The "slow movers" more than likely have managerial barriers to open information exchange. They should be convinced of the benefits of a more open organization and the value of greater information exchange. If they don't improve their communication environment, they may lose their competitive advantage. The "fashion followers" should more carefully study their communication needs and time their investments with their in-company training capacities. The "conservative investors" should study their communication bottlenecks and find the technologies that are available to eliminate them. That's the path to super-stardom.
As I mentioned at the beginning of this article, it's important to determine exactly which of these categories a company falls into with regard to communication technology ["slow movers" (low investment/low utilization), "fashion followers" (high investment/low utilization), "conservative investors" (low investment/high utilization), or "communication superstars" (high investment/high utilization)] relative to its competitors. Therefore, I would like to address that issue in a future article.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/1/communication-technology-superstars
作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_mobilemashup3.png?itok=v1gVY8RJ (Different cell phones)
[2]: https://opensource.com/open-organization/resources/book-series
[3]: https://opensource.com/open-organization/16/4/how-new-communication-technologies-are-affecting-peer-peer-engagement
[4]: https://opensource.com/open-organization/resources/meet-ambassadors
[5]: https://opensource.com/open-organization/resources/open-org-definition
[6]: https://www.slideshare.net/RonMcFarland1/competitive-advantage-through-digital-investment-utilization?qid=9fbb4c4b-f2c2-4468-9f0a-3ebaa6efc91d&v=&b=&from_search=1

View File

@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (You can now have a Mac Pro in your data center)
[#]: via: (https://www.networkworld.com/article/3516490/you-can-now-have-a-mac-pro-in-your-data-center.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
You can now have a Mac Pro in your data center
======
The company that once eschewed the enterprise now has a server version of the Mac Pro. Apple's rack-mountable Mac Pro starts at $6,499.
Steve Jobs rather famously said he hated the enterprise because the people who use the product have no say in its purchase. Well, Apple's current management has adopted the enterprise, ever so slowly, and is now shipping its first server in years. Sort of.
Apple introduced a new version of the Mac Pro in December 2019, after a six-year gap in releases, and said it would make the computer rack-mountable for data centers. But at the time, all the attention was on the computer's aesthetics, because it looked like a cheese grater. The other bit of focus was on the price; a fully decked-out Mac Pro cost an astronomical $53,799. Granted, that did include specs like 1.5TB of DRAM and 8TB of SSD storage. Those are impressive specs for a server, although the price is still a little crazy.
Earlier this month, Apple quietly delivered on the promise to make the Mac Pro rack-mountable. The Mac Pro rack configuration comes with a $500 premium over the cost of the standing tower, which means it starts at $6,499.
That gives you an 8-core Intel Xeon W CPU, 32GB of memory, a Radeon Pro 580X GPU, and 256GB of SSD storage. Most importantly, it gives you the rack mounting rails (which ship in a separate box for some reason) needed to install it in a cabinet. Once installed, the Mac Pro is roughly the size of a 4U server.
Mac Pros are primarily used in production facilities, where they are used with other audio and video production hardware. MacStadium, a Mac developer with its own data centers, has been installing and testing the servers and thus far has had high praise for both the [ease of install][2] and [performance][3].
The server-ready version features a slight difference in its case, according to people who have tested it. The twist handle on the Mac Pro case is replaced with two lock switches that allow the case to be removed to access the internal components. It comes with two Thunderbolt 3 ports and a power button.
The Mac Pro may be expensive, but you get a lot of performance for your money. Popular YouTuber and Mac enthusiast Marques Brownlee [tested it out][4] on an 8K-resolution video encoding job. Brownlee found a MacBook Pro took 20 minutes to render the five-minute-long video, an iMac Pro desktop took 12 minutes, and the Mac Pro processed the video in 4:20. So the Mac Pro encoded 8K video faster than real time.
Apple's last server was the Xserve, killed off in 2010 after several years of neglect. Instead, the company made a version of macOS for the whole Mac line that would let the hardware be run as a server, which is exactly what the new rack-mountable version of the Mac Pro is.
MacStadium is doing benchmarks like Node.js, a JavaScript runtime. It will be interesting to see if anyone outside of audio/video encoding uses a Mac Pro in their data centers.
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3516490/you-can-now-have-a-mac-pro-in-your-data-center.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[2]: https://twitter.com/brianstucki/status/1219299028791226368
[3]: https://blog.macstadium.com/blog/2019-mac-pros-at-macstadium
[4]: https://www.youtube.com/watch?v=DOPswcaSsu8&t=
[5]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[6]: https://www.networkworld.com/article/3263718/software/windows-server-2019-embraces-hybrid-cloud-hyperconverged-data-centers-linux.html
[7]: https://www.networkworld.com/newsletters/signup.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@ -1,64 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel denies reports of Xeon shortage)
[#]: via: (https://www.networkworld.com/article/3516392/intel-denies-reports-of-xeon-shortage.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Intel denies reports of Xeon shortage
======
The PC side of Intel's Xeon processor supply remains constrained, but server customers should get their orders this year.
Intel has denied reports that its Xeon supply chain is suffering the same constraints as its PC desktop/laptop business. CEO Bob Swan said during the company's recent earnings call that its inventory was depleted but customers are getting orders.
The issue blew up last week when HPE, one of Intel's largest server OEM partners, reportedly [told UK-based publication The Register][1] that there were supply constraints with Cascade Lake processors, the most recent generation of Xeon Scalable processors, and urged HPE customers "to consider alternative processors." HPE did not clarify if it meant Xeon processors other than Cascade Lake or AMD Epyc processors.
AMD must have loved that.
At the time, Intel was in the quiet period prior to announcing fourth-quarter 2019 results, so when I initially approached them for comment, company executives could not answer. But on last week's earnings call, Swan set the record straight. While supply of desktop CPUs remains constrained, especially on the low end, Xeon supply is in “pretty good shape,” as he put it, even after a 19% growth in demand for the quarter.
“When you have that kind of spike in demand, we are not perfect across all products or all SKUs. But server CPUs, we really prioritize that and try to put ourselves in a position where we are not constrained, and we are in pretty good shape. Pretty great shape, macro. Micro, a few challenges here and there. But server CPU supply is pretty good,” he [said on an earnings call][3] with Wall Street analysts.
Intel CFO George Davis added that supply is expected to improve in the second half of this year, across the board, thanks to an expansion of production capacity. "In the second half of the year we would expect to be able to bring both our server products and, most importantly, our PC products back to a more normalized inventory level," Davis said.
Intels data center group had record revenue of $7.2 billion in Q4 2019, up 19% from Q4 2018. In particular, cloud revenue was up 48% year-over-year as cloud service providers continue building out crazy levels of capacity.
**[ Check out our [12 most powerful hyperconverged infrastructure vendors][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
Hyperscalers like Amazon and Google are building data centers the size of football stadiums and filling them with tens of thousands of servers at a time. I’ve heard concerns about this trend of a half-dozen or so companies hoovering up all of the supply of CPUs, memory, flash and traditional disk, but so far no real shortages have come to pass.
Perhaps not surprisingly, Intel's enterprise and government revenue was down 7% as more and more companies reduce their data center footprint, while communication and service providers' revenue grew 14% as customers continue to adopt AI-based solutions to transform their networks and transition to 5G.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3516392/intel-denies-reports-of-xeon-shortage.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.theregister.co.uk/2020/01/20/intel_hpe_xeon_shortage/
[3]: https://seekingalpha.com/article/4318803-intel-corporation-intc-ceo-bob-swan-on-q4-2019-results-earnings-call-transcript?part=single
[4]: https://www.networkworld.com/article/3112622/hardware/12-most-powerful-hyperconverged-infrastructure-vendors.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

@@ -1,70 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How bacteria could run the Internet of Things)
[#]: via: (https://www.networkworld.com/article/3518413/how-bacteria-could-run-the-internet-of-things.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
How bacteria could run the Internet of Things
======
The Internet of Bio-Nano Things (IoBNT) would use certain kinds of bacteria, which scientists think have the attributes needed to make effective sensor networks.
Thinkstock
Biologically created computing devices could one day be as commonplace as todays microprocessors and microchips, some scientists believe. Consider DNA, the carrier of genetic information and the principal component of chromosomes; it's showing promise [as a data storage medium][1].
A recent study ([PDF][2]) suggests taking matters further and using microbes to network and communicate at nanoscale. The potential is highly attractive for the Internet of Things (IoT), where concealability and unobtrusiveness may be needed for the technology to become completely ubiquitous.
Advantages of an organic version of IoT include not only the tiny size but also the autonomous nature of bacteria, which includes inherent propulsion. There’s “an embedded, natural propeller motor,” the scientists from Queen Mary University of London explain, referring to the swimming functions microbes perform.
At this point, research into the Internet of Bio-Nano Things (IoBNT) is at an early stage, and the Queen Mary University researchers are predominantly explaining how similarities between bacteria and computing could be exploited. But the study is intriguing.
"The microbes share similarities with components of typical computer IoT devices," wrote Raphael Kim and Stefan Posland in their [paper][2] published on the subject. “This presents a strong argument for bacteria to be considered as a living form of Internet of Things (IoT) device.”
Environmental IoT is one area they say could benefit. In smart cities, for example, bacteria could be programmed to sense pollutants. Microbes have good chemical-sensing functions and could turn out to work better than electronic sensors. In fact, the authors say that microbes share some of the same sensing, actuating, communicating and processing abilities that the computerized IoT has.
In the case of sensing and actuating, bacteria can detect chemicals, electromagnetic fields, light, mechanical stress and temperature — just what’s required in a traditional printed circuit board-based sensor. Plus, the microbes respond; they can produce colored proteins, for example. And they respond in a more nuanced way than chip-based sensors do: they can be more sensitive, for one.
[The time of 5G is almost here][4]
The aforementioned DNA, built into bacteria, functions as a control unit, both for processing and storing data. Genomic DNA would contain the instructions for some functioning, and plasmids — another form of DNA, related to how genes get into organisms — customize process functions through gene addition and subtraction.
Networking is also addressed. Bacterial IoT has transceivers, too, the team says. The importing and exporting of molecules acts as a form of signaling pathway, and a DNA exchange between two cells can take place. That’s called “molecular communication” and is described as a bacterial nanonetwork. Converting digital data to DNA and back again is a related area currently showing promise.
Bacteria should become a “substrate to build a biological version of the Internet of Things,” the scientists say. Interestingly, similar to how traditional IoT has been propelled forward by tech hobbyists mucking around with Arduino microcontrollers and Raspberry Pi educational mini-computers, Kim and Poslad reckon it will be do-it-yourself biology that kick-starts IoBNT. They point out that easily obtainable educational products like [the Amino Labs kit][5] already allow the generation of specific colors from bacteria, for example.
“Currently, tools and techniques to run small-scale experiments with micro-organisms are widely available to the general public, through various channels, including maker spaces.”
The team also suggests that, hypothetically, the “gamification of bacteria” could become a part of the experimentation. Biotic games already exist. The researchers propose “to utilize the DIY biology movement and gamification techniques to leverage user engagement and introduction to bacteria.”
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518413/how-bacteria-could-run-the-internet-of-things.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3268646/dna-data-storage-closer-to-becoming-reality.html
[2]: https://arxiv.org/ftp/arxiv/papers/1910/1910.01974.pdf
[4]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[5]: https://amino.bio/
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

@@ -1,62 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IBM's CEO Virginia Rometty to be replaced by its cloud, Red Hat chiefs)
[#]: via: (https://www.networkworld.com/article/3518795/ibms-ceo-virginia-rometty-to-be-replaced-by-its-cloud-red-hat-chiefs.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
IBM's CEO Virginia Rometty to be replaced by its cloud, Red Hat chiefs
======
IBM cloud leader Arvind Krishna and Red Hat CEO Jim Whitehurst to take the reins from long-time CEO Virginia Rometty
IBM
If anyone was still wondering how serious IBM is about being a major cloud player, that question was resoundingly answered this week when the company named its current cloud and cognitive-software leader Arvind Krishna and Red Hat CEO Jim Whitehurst as CEO and president, respectively, to replace long-time CEO Virginia Rometty.
Krishna, 57, was a principal architect of IBMs $34 billion acquisition of Red Hat last year and is currently IBMs senior vice president of Cloud and Cognitive Software, which has become the companys palpable future.   
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
The [Red Hat acquisition][2] not only made Big Blue a bigger open-source and enterprise-software player, but, more importantly, it got IBM into the lucrative hybrid-cloud business, targeting huge cloud competitors Google, Amazon and Microsoft, among others. Gartner says that market will be worth $240 billion by next year.
In its most recent financial call, IBM talked up the successes of its cloud and Red Hat growth. For example, its total cloud revenue of $6.8 billion was up 21% year over year, and Red Hat’s normalized revenue was up 24%, eclipsing $1 billion in a quarter for the first time, IBM stated.
“The next chapter of cloud will be driven by mission-critical workloads managed in a hybrid, multi-cloud environment. This will be based on a foundation of Linux, with containers and Kubernetes. This quarter we had strong performance in RHEL and OpenShift,” said Jim Kavanaugh, IBM senior vice president and chief financial officer (a full transcript of that financial call is available from Seeking Alpha [here][4]). “As we look forward, the largest hybrid-cloud opportunity is in services, advising clients on architectural choices, moving workloads, building new applications and of course managing them.”
In announcing the leadership transition, which will occur April 6, [Rometty wrote of Krishna][5]: "He is a brilliant technologist who has played a significant role in developing our key technologies such as artificial intelligence, cloud, quantum computing and blockchain. He is also a superb operational leader, able to win today while building the business of tomorrow."
Under Rometty, who was named CEO in 2012, IBM has acquired 65 companies, reinvented more than 50% of IBM's portfolio, built a $21 billion hybrid-cloud business and established IBM's position in AI, quantum computing and blockchain, IBM stated. Rometty will remain as executive chairman until the end of 2020 and then retire after some 40 years at the company. 
Meanwhile, Rometty had this to say about Whitehurst, 52, who is currently IBM senior vice president and CEO of Red Hat: "Jim is also a seasoned leader who has positioned Red Hat as the world's leading provider of open-source enterprise IT software solutions and services, and has been quickly expanding the reach and benefit of that technology to an even wider audience as part of IBM.”
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518795/ibms-ceo-virginia-rometty-to-be-replaced-by-its-cloud-red-hat-chiefs.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/article/3317517/the-ibm-red-hat-deal-what-it-means-for-enterprises.html
[4]: https://seekingalpha.com/article/4318204-international-business-machines-corporation-ibm-q4-2019-results-earnings-call-transcript
[5]: https://newsroom.ibm.com/2020-01-30-Arvind-Krishna-Elected-IBM-Chief-Executive-Officer
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

@@ -1,80 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 open governance questions every project needs to answer)
[#]: via: (https://opensource.com/article/20/2/open-source-projects-governance)
[#]: author: (Gordon Haff https://opensource.com/users/ghaff)
6 open governance questions every project needs to answer
======
Open governance insights from Chris Aniszczyk, VP of Developer Relations
at the Linux Foundation.
![Two government buildings][1]
When we think about what needs to be in place for an open source project to function, one of the first things to come to mind is probably a license. For one thing, absent an approved [Open Source Initiative (OSI) license][2], a project isnt truly open source in the minds of many. Furthermore, the choice to use a copyleft license like the GNU General Public License (GPL) or a permissive license like Massachusetts Institute of Technology (MIT) can affect the sort of community that grows up around and uses the project.
However, Chris Aniszczyk, VP of Developer Relations at the Linux Foundation, argues that its equally important to consider the **open governance of a project** because the license itself doesnt actually tell you how the project is governed.
These are some of the questions that Aniszczyk argues need to be answered. He adds that answering these questions before disputes arise, and in a way that’s viewed as open and fair to all participants, leads to projects that tend to be more successful long term, especially as they grow in size.
### 6 open governance questions for every project
1. Who makes the decisions?
2. How are maintainers added?
3. Who owns the rights to the domain?
4. Who owns the rights to the trademarks?
5. How are those things governed?
6. Who owns how the build system works?
However, while all of these questions should be considered, there isnt one correct way of answering them. Different projects—and foundations hosting projects—take different approaches, whether to accommodate the requirements of a particular community or just for historical reasons.
The latter is often the case when a project uses something often called the Benevolent Dictator for Life (BDFL) model, in which one person—usually the project's founder—generally has the final say on major project decisions. Many projects end up here by default—perhaps most notably the Linux kernel. However, Red Hats Joe Brockmeier observed to me that its mostly considered an anti-pattern at this point. "While a few BDFL-driven projects have succeeded to do well, others have stumbled with that approach," he says.
Aniszczyk observes that "foundations have different sets of bylaws, charters, and how theyre structured, and there are fascinating differences between these organizations. Like Apache is very famous for the Apache Way, and thats how they expect projects to operate. They very much have guardrails about how releases are done. [Its] kind of an incubator process where every project starts way before it graduates to a top-level project. In terms of how projects are governed, its almost like an infinite amount of approaches," he concludes.
### Minimum requirements
That said, Aniszczyk lists some minimum requirements.
"Our pattern, at least, in many Linux Foundation and Cloud Native Computing Foundation (CNCF) projects, is a _governance.md_ file, which describes how decisions are made, how things are governed, how maintainers are added, removed, how are sub-projects added, removed, etc., how releases are done. That would be step one," he says.
#### Ownership
Secondly, he doesnt "think you could do open governance without assets being neutrally owned. At the end of the day, someone owns the domain, the rights to the trademark, some of the copyright, potentially. There are many great organizations out there that are super lightweight. There are things like the Apache Foundation, Software in the Public Interest, and the Software Freedom Conservancy."
Aniszczyk also sees some common approaches as at least potential anti-patterns. A key example is contributor license agreements (CLA), which define the terms under which intellectual property, like code, is contributed to a project. He says that if a company wants "to build a product or use a dual license type model, thats a very valid reason for a CLA. Otherwise, I view CLA as a high friction tool for developers."
#### Developer Certificate of Origin
Instead, he generally encourages people to "use what we call the 'Developer Certificate of Origin.' Its how the Linux kernel works, where basically it takes all the basic things that most CLAs do, which would be like, Did I write this code? Did I not copy it elsewhere? Do I have the rights to give this to you, and you sign off on? Its been a very successful model played out in the kernel and many other ecosystems. Im generally not really supportive of having CLAs unless theres a real strict business need."
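In practice, contributors typically assert the DCO by adding a _Signed-off-by_ trailer to each commit, which Git can append automatically. Here’s a minimal sketch; the commit message is made up, and the name and email come from your own Git configuration:
```bash
# Sign off on a commit: -s (or --signoff) appends a Signed-off-by
# trailer built from your configured user.name and user.email.
git commit -s -m "Fix broken link in README"

# The commit message then ends with a line like:
#   Signed-off-by: Jane Developer <jane@example.com>

# Forgot to sign off? Amend the most recent commit in place:
git commit --amend --signoff --no-edit
```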
#### Naming a project
He also sees a lot of what he considers mistakes in naming. "Project branding is super important. Theres a common pattern where people will start a project, it could be within a company or yourself, or you have a startup, and youll call it, lets say, 'Docker.' Then you have Docker the project, and you have Docker, the company. Then you also have Docker the product or Docker the enterprise product. All those things serve different audiences. It leads to confusion because I have an inherent belief that the name of something has a value proposition attached to it. Please name your company separate from your project, from your product," he argues.
#### Trust
Finally, Aniszczyk points to the role of open governance in building trust and confidence that a company cant just take a project unilaterally for its own ends. "Trust is table stakes in order to build strong communities because, without openly governed institutions in projects, trust is very hard to come by," he concludes.
_The Innovate @Open podcast episode from which Chris Aniszczyk’s remarks were drawn can be heard [here][3]._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/open-source-projects-governance
作者:[Gordon Haff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ghaff
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov2.png?itok=n36__lZj (Two government buildings)
[2]: https://opensource.org/licenses
[3]: https://grhpodcasts.s3.amazonaws.com/cra1911.mp3

@@ -1,127 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bulletin Board Systems: The VICE Exposé)
[#]: via: (https://twobithistory.org/2020/02/02/bbs.html)
[#]: author: (Two-Bit History https://twobithistory.org)
Bulletin Board Systems: The VICE Exposé
======
By now, you have almost certainly heard of the dark web. On sites unlisted by any search engine, in forums that cannot be accessed without special passwords or protocols, criminals and terrorists meet to discuss conspiracy theories and trade child pornography.
We have reported before on the dark webs [“hurtcore” communities][1], its [human trafficking markets][2], its [rent-a-hitman websites][3]. We have explored [the challenges the dark web presents to regulators][4], the rise of [dark web revenge porn][5], and the frightening size of [the dark web gun trade][6]. We have kept you informed about that one dark web forum where you can make like Walter White and [learn how to manufacture your own drugs][7], and also about—thanks to our foreign correspondent—[the Chinese dark web][8]. We have even attempted to [catalog every single location on the dark web][9]. Our coverage of the dark web has been nothing if not comprehensive.
But I wanted to go deeper.
We know that below the surface web is the deep web, and below the deep web is the dark web. It stands to reason that below the dark web there should be a deeper, darker web.
A month ago, I set out to find it. Unsure where to start, I made a post on _Reddit_, a website frequented primarily by cosplayers and computer enthusiasts. I asked for a guide, a Styx ferryman to bear me across to the mythical underworld I sought to visit.
Only minutes after I made my post, I received a private message. “If you want to see it, Ill take you there,” wrote _Reddit_ user FingerMyKumquat. “But Ill warn you just once—its not pretty to see.”
### Getting Access
This would not be like visiting Amazon to shop for toilet paper. I could not just enter an address into the address bar of my browser and hit go. In fact, as my Charon informed me, where we were going, there are no addresses. At least, no web addresses.
But where exactly were we going? The answer: Back in time. The deepest layer of the internet is also the oldest. Down at this deepest layer exists a secret society of “bulletin board systems,” a network of underground meetinghouses that in some cases have been in continuous operation since the 1980s—since before Facebook, before Google, before even stupidvideos.com.
To begin, I needed to download software that could handle the ancient protocols used to connect to the meetinghouses. I was told that bulletin board systems today use an obsolete military protocol called Telnet. Once upon a time, though, they operated over the phone lines. To connect to a system back then you had to dial its _phone number_.
The software I needed was called [SyncTerm][10]. It was not available on the App Store. In order to install it, I had to compile it. This is a major barrier to entry, I am told, even to veteran computer programmers.
When I had finally installed SyncTerm, my guide said he needed to populate my directory. I asked what that was a euphemism for, but was told it was not a euphemism. Down this far, there are no search engines, so you can only visit the bulletin board systems you know how to contact. My directory was the list of bulletin board systems I would be able to contact. My guide set me up with just seven, which he said would be more than enough.
_More than enough for what,_ I wondered. Was I really prepared to go deeper than the dark web? Was I ready to look through this window into the black abyss of the human soul?
![][11] _The vivid blue interface of SyncTerm. My directory of BBSes on the left._
### Heatwave
I decided first to visit the bulletin board system called “Heatwave,” which I imagined must be a hangout for global warming survivalists. I “dialed” in. The next thing I knew, I was being asked if I wanted to create a user account. I had to be careful to pick an alias that would be inconspicuous in this sub-basement of the internet. I considered “DonPablo,” and “z3r0day,” but finally chose “ripper”—a name I could remember because it is also the name of my great-aunt Merediths Shih Tzu. I was then asked where I was dialing from; I decided “xxx” was the right amount of enigmatic.
And then—I was in. Curtains of fire rolled down my screen and dispersed, revealing the main menu of the Heatwave bulletin board system.
![][12] _The main menu of the Heatwave BBS._
I had been told that even in the glory days of bulletin board systems, before the rise of the world wide web, a large system would only have several hundred users or so. Many systems were more exclusive, and most served only users in a single telephone area code. But how many users dialed the “Heatwave” today? There was a main menu option that read “(L)ast Few Callers,” so I hit “L” on my keyboard.
My screen slowly filled with a large table, listing all of the systems “callers” over the last few days. Who were these shadowy outcasts, these expert hackers, these denizens of the digital demimonde? My eyes scanned down the list, and what I saw at first confused me: There was a “Dan,” calling from St. Louis, MO. There was also a “Greg Miller,” calling from Portland, OR. Another caller claimed he was “George” calling from Campellsburg, KY. Most of the entries were like that.
It was a joke, of course. A meme, a troll. It was normcore fashion in noms de guerre. These were thrill-seeking Palo Alto adolescents on Adderall making fun of the surface web. They werent fooling me.
I wanted to know what they talked about with each other. What cryptic colloquies took place here, so far from public scrutiny? My index finger, with ever so slight a tremble, hit “M” for “(M)essage Areas.”
Here, I was presented with a choice. I could enter the area reserved for discussions about “T-99 and Geneve,” which I did not dare do, not knowing what that could possibly mean. I could also enter the area for discussions about “Other,” which seemed like a safe place to start.
The system showed me message after message. There was advice about how to correctly operate a leaf-blower, as well as a protracted debate about the depth of the Strait of Hormuz relative to the draft of an aircraft carrier. I assumed the real messages were further on, and indeed I soon spotted what I was looking for. The user “Kevin” was complaining to other users about the side effects of a drug called Remicade. This was not a drug I had heard of before. Was it some powerful new synthetic stimulant? A cocktail of other recreational drugs? Was it something I could bring with me to impress people at the next VICE holiday party?
I googled it. Remicade is used to treat rheumatoid arthritis and Crohns disease.
In reply to the original message, there was some further discussion about high resting heart rates and mechanical heart valves. I decided that I had gotten lost and needed to contact FingerMyKumquat. “Finger,” I messaged him, “What is this shit Im looking at here? I want the real stuff. I want blackmail and beheadings. Show me the scum of the earth!”
“Perhaps youre ready for the SpookNet,” he wrote back.
### SpookNet
Each bulletin board system is an island in the television-static ocean of the digital world. Each systems callers are lonely sailors come into port after many a month plying the seas.
But the bulletin board systems are not entirely disconnected. Faint phosphorescent filaments stretch between the islands, links in the special-purpose networks that were constructed—before the widespread availability of the internet—to propagate messages from one system to another.
One such network is the SpookNet. Not every bulletin board system is connected to the SpookNet. To get on, I first had to dial “Reality Check.”
![][13] _The Reality Check BBS._
Once I was in, I navigated my way past the main menu and through the SpookNet gateway. What I saw then was like a catalog index for everything stored in that secret Pentagon warehouse from the end of the _X-Files_ pilot. There were message boards dedicated to UFOs, to cryptography, to paranormal studies, and to “End Times and the Last Days.” There was a board for discussing “Truth, Polygraphs, and Serums,” and another for discussing “Silencers of Information.” Here, surely, I would find something worth writing about in an article for VICE.
I browsed and I browsed. I learned about which UFO documentaries are worth watching on Netflix. I learned that “paper mill” is a derogatory term used in the intelligence community (IC) to describe individuals known for constantly trying to sell “explosive” or “sensitive” documents—as in the sentence, offered as an example by one SpookNet user, “Damn, here comes that paper mill Juan again.” I learned that there was an effort afoot to get two-factor authentication working for bulletin board systems.
“These are just a bunch of normal losers,” I finally messaged my guide. “Mostly they complain about anti-vaxxers and verses from the Quran. This is just _Reddit_!”
“Huh,” he replied. “When you said scum of the earth, did you mean something else?”
I had one last idea. In their heyday, bulletin board systems were infamous for being where everyone went to download illegal, cracked computer software. An entire subculture evolved, with gangs of software pirates competing to be the first to crack a new release. The first gang to crack the new software would post their “warez” for download along with a custom piece of artwork made using lo-fi ANSI graphics, which served to identify the crack as their own.
I wondered if there were any old warez to be found on the Reality Check BBS. I backed out of the SpookNet gateway and keyed my way to the downloads area. There were many files on offer there, but one in particular caught my attention: a 5.3 megabyte file just called “GREY.”
I downloaded it. It was a complete PDF copy of E. L. James’ _50 Shades of Grey_.
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][14] on Twitter or subscribe to the [RSS feed][15] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> I first heard about the FOAF (Friend of a Friend) standard back when I wrote my post about the Semantic Web. I thought it was a really interesting take on social networking and I've wanted to write about it since. Finally got around to it!<https://t.co/VNwT8wgH8j>
>
> — TwoBitHistory (@TwoBitHistory) [January 5, 2020][16]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2020/02/02/bbs.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.vice.com/en_us/article/mbxqqy/a-journey-into-the-worst-corners-of-the-dark-web
[2]: https://www.vice.com/en_us/article/vvbazy/my-brief-encounter-with-a-dark-web-human-trafficking-site
[3]: https://www.vice.com/en_us/article/3d434v/a-fake-dark-web-hitman-site-is-linked-to-a-real-murder
[4]: https://www.vice.com/en_us/article/ezv85m/problem-the-government-still-doesnt-understand-the-dark-web
[5]: https://www.vice.com/en_us/article/53988z/revenge-porn-returns-to-the-dark-web
[6]: https://www.vice.com/en_us/article/j5qnbg/dark-web-gun-trade-study-rand
[7]: https://www.vice.com/en_ca/article/wj374q/inside-the-dark-web-forum-that-tells-you-how-to-make-drugs
[8]: https://www.vice.com/en_us/article/4x38ed/the-chinese-deep-web-takes-a-darker-turn
[9]: https://www.vice.com/en_us/article/vv57n8/here-is-a-list-of-every-single-possible-dark-web-site
[10]: http://syncterm.bbsdev.net/
[11]: https://twobithistory.org/images/sync.png
[12]: https://twobithistory.org/images/heatwave-main-menu.png
[13]: https://twobithistory.org/images/reality.png
[14]: https://twitter.com/TwoBitHistory
[15]: https://twobithistory.org/feed.xml
[16]: https://twitter.com/TwoBitHistory/status/1213920921251131394?ref_src=twsrc%5Etfw

@@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Private equity firms are gobbling up data centers)
[#]: via: (https://www.networkworld.com/article/3518817/private-equity-firms-are-gobbling-up-the-data-center-market.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Private equity firms are gobbling up data centers
======
Private equity firms accounted for 80% of all data-center acquisitions in 2019. Is that a good thing?
scanrail / Getty Images
Merger and acquisition activity surrounding [data-center][1] facilities is starting to resemble the Oklahoma Land Rush, and private-equity firms are taking most of the action.
New research from Synergy Research Group counted more than 100 deals in 2019, 50% growth over 2018, and private-equity companies accounted for 80% of them.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
M&A activity broke the 100-transaction mark for the first time in 2019, and that comes despite a 45% decline in public-company activity, such as the massive Digital Realty Trust [purchase][3] of Interxion. At the same time, the size of the deals dropped in 2019, with fewer worth $1 billion or more, and the average deal value fell 24% vs. 2018.
Since 2015, there have been approximately 350 data-center deals, both public and private, with a total value of $75 billion, according to Synergy. Over this period, private equity buyers have accounted for 57% of the deal volume. Deals were roughly a 50-50 split until 2018 when public company purchases began to trail off.
Anecdotally, I’ve heard one reason for the decline in big deals is that there are no more big purchases to be had, at least in the US. DRT/Interxion is an exception, and Interxion is a foreign company. Other big deals have already been done, like Equinix purchasing Verizon’s data centers for $3.6 billion in 2017 or AT&T selling its data centers to private-equity company Brookfield in 2019. There just isn’t much left to sell.
The question becomes: Is this necessarily a good thing? Private equity firms have something of a well-earned bad reputation for buying up companies, sucking all the profit out of them, and discarding the empty husk.
But John Dinsdale, chief analyst for Synergy, said not to worry, that the private equity firms grabbing data centers are looking to grow them. “This is a heavily infrastructure-oriented business where what you can take out is pretty directly related to what you put in. A lot of these equity investors are looking to build something rather than quickly flipping the assets,” he said via e-mail.
He added, “In these types of business there isn’t that much manpower, HQ or overhead there to be stripped out.” Which is true. Data centers are pretty lightly staffed. It was a national news item several years ago that Apple’s $1 billion data center in rural North Carolina would only [create 50 jobs][5]. That’s true for most data centers.
At least one big player, Digital Realty Trust, was formed in 2004 after private-equity firm GI Partners bought out 21 data centers from a bankruptcy. DRT has grown to 214 centers in the U.S. and Europe.
So in this case, a private equity firm buying out your data center provider might prove to be a good thing.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518817/private-equity-firms-are-gobbling-up-the-data-center-market.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3451437/digital-realty-acquisition-of-interxion-reshapes-data-center-landscape.html
[5]: https://www.cultofmac.com/132012/despite-huge-unemployment-rate-apples-1-billion-data-super-center-only-created-50-new-jobs/
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

@@ -1,92 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Stuck in a loop: 4 signs anxiety may be affecting your work)
[#]: via: (https://opensource.com/open-organization/20/2/working-anxiety-inaction-loop)
[#]: author: (Sam Knuth https://opensource.com/users/samfw)
Stuck in a loop: 4 signs anxiety may be affecting your work
======
Gathering feedback is an everyday practice in open organizations. Are
you collecting data to improve your products—or to alleviate your own
apprehensions?
![arrows cycle symbol for failing faster][1]
_Editor's note: This article is part of a series on working with mental health conditions. It details the author's personal experiences and is not meant to convey professional medical advice or guidance._
A few months ago, I was chatting with one of our VPs about my new role and some of the work I was hoping to do with my team. I'd decided that one of my first actions in the new position would be to interview members of the senior leadership team to get their input on strategy. I'd be leading an entirely new function for my department, and I felt it would be good to get input from a wide array of stakeholders before jumping to action. This is part of my standard practice working in an open organization.
I had several goals for these one-on-one conversations. I wanted to be transparent in my work. I wanted to validate some of my hypotheses. I wanted to confirm that what I wanted to do would be valuable to other leaders. And I wanted to get some assurance that I was on the right track.
Or so I thought.
"Hmmm," said the VP after I had shared my initial ideas. He hesitated. "It's very broad." More hesitation. I'm not sure what I had expected him to say, but this was definitely not the kind of assurance I was hoping for. "You need to be careful about tilting at windmills." I didn't know what "[tilting at windmills][2]" meant, but it sounded like a good thing to avoid doing.
After having several more of these conversations over the course of a few weeks—many of them lively and fruitful—I came to one clear conclusion: Although I was getting lots of great input, I wasn't going to find any kind of consensus about priorities among the leadership team.
So why was I asking?
Eventually I realized what was _really_ underlying my desire to seek input: not just a desire to learn from the people I was interviewing, but also a nagging question in my gut. "Am I doing the right thing?"
One manifestation of anxiety is a worry that we're doing something wrong, which is also related to [imposter syndrome][3] (worry that we're going to be "found out" as unqualified for or incapable of the work or the role we've been given).
I've [previously described][4] a positive "anxiety performance loop" that can drive high performance. I can occasionally fall into another kind of anxiety loop, an "inaction loop," which can _lower_ performance. Figure 1 (below) illustrates it.
![][5]
One challenge of this manifestation of anxiety is that it creeps up on me; I don't consciously realize that I'm stuck in it until something happens that makes it apparent.
In this case, that "something" was my coach.
My desire to get input from a large variety of stakeholders was resulting in so much input that it was preventing me from moving forward.
During a session when my coach was asking me questions about my work, I came to the realization that I was overly worried about whether I was on the right track. My desire to get input from a large variety of stakeholders (a legitimate thing to do) was resulting in so much input that it was preventing me from moving forward.
If I hadn't been fortunate enough to be working with a coach, I may never have had that realization—or I may have had it through a much harder experience. At some point, anxiety about whether you are doing the right thing could lead to failure, not because you did the wrong thing but because you didn't do anything at all.
I've found a few signs to help me realize if I'm in an anxiety inaction loop. I may be in one of these loops if:
* I talk about what I'm _planning_ to do rather than what I _am_ doing
* I feel that I need just _one more person's_ opinion, or I just need to check in with my boss _one more time_, before moving ahead
* I am revising the same presentation repeatedly but never actually giving the presentation to anyone
* I am avoiding or delaying something (such as giving a talk, or making a decision)
Having tools for self-reflection is critical. The reality is that most of the time I'm not working with a coach, and I need to be able to recognize these symptoms on my own. That will only happen if I set aside time to reflect on how things are going and to ask myself hard questions about whether I am stuck in any of these stalling patterns. I've started to build this time into my calendar, usually on Friday afternoons, or early in the morning before my meetings start.
The fact that my anxiety can manifest as dual worries—that I am not doing the right thing and that I am not doing enough—can be paradoxical.
Recognizing the anxiety loop is the first step. To get out of it, I've developed a few techniques like:
* Setting achievement milestones in 90-day increments, reviewing them on a weekly basis, and using this as an opportunity to reflect on progress.
* Reminding myself that (in fact) I might _not_ be doing the right thing. If that's the case, I'll get feedback, correct, and keep going (it won't be the end of the world).
* Reminding myself that I am in this job for a reason; people want me to do the job. They don't want me to ask them what to do or wait for them to tell me what to do.
* When seeking input from others, saying "This is what I am planning on doing" rather than "What do you think of this?" then either hearing objections if they arise or moving ahead if not.
The fact that my anxiety can manifest as dual worries—that I am not doing the right thing _and_ that I am not doing enough—can be paradoxical. Over-correcting to get out of an anxiety inaction loop could put me right into [an anxiety performance loop][4]. Neither situation feels like a healthy one.
As with most things, the answer is balance and moderation. Finding that balance is precisely the challenge anxiety creates. In some cases I may be worried I'm not doing enough; in others I may be worried that what I'm doing isn't right, which leads me to slow down. The best approach I have found so far is awareness—taking the time to reflect and trying to correct if I'm going too far in either direction.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/2/working-anxiety-inaction-loop
作者:[Sam Knuth][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/samfw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: https://en.wikipedia.org/wiki/Don_Quixote#Tilting_at_windmills
[3]: https://en.m.wikipedia.org/wiki/Impostor_syndrome
[4]: https://opensource.com/open-organization/20/1/leading-openly-anxiety
[5]: https://opensource.com/sites/default/files/images/open-org/loop_2.png

@@ -1,114 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Drupal 8 aims to be future-proof)
[#]: via: (https://opensource.com/article/20/2/drupal-8-promises)
[#]: author: (Shefali Shetty https://opensource.com/users/shefalishetty)
How Drupal 8 aims to be future-proof
======
What you need to know about Drupal 8 updates.
![Drupal logo with gray and blue][1]
Thomas Edison famously said, "The three great essentials to achieve anything worthwhile are, first, hard work; second, stick-to-itiveness; third, common sense." This quote made me wonder if "sticking-to-it" is contradictory to innovation; does it make you resistant to change? But the more I pondered it, the more I realized that innovation is fueled by perseverance.
Before Drupal 8 was introduced, the Core committee had not just promised to innovate; they decided to be persistent. Persistent in continuous reinvention. Persistent in making Drupal easier to adopt—not only by the market but also by developers with various levels of expertise. However, to be able to make Drupal successful and relevant in the long run, a drastic change was needed—a change that would build a better future. For this, Drupal 8 had to dismantle the Drupal 7 architecture and lay a fresh foundation for a promising future. Moving on to Drupal 9 (coming soon) and subsequent versions will now be easy and straightforward.
### Freedom to innovate with open source
Innovation brings freedom, and freedom creates innovation. Open source gives you the freedom to access, learn, contribute, and, most importantly, the freedom to innovate. The ability to learn, catch up, and reinvent is extremely crucial today. Drupal began as a small internal news website and later went on to become an open source content management system (CMS) because there was a potential to make it much more compelling by attracting more contributions. It gave developers the freedom to collaborate, re-use components, and improvise on it to create something more modern, powerful, and relevant.
### Promises delivered: Drupal 8 version history
The web is always changing. To stay relevant, [Drupal][2] had to introduce changes that were revolutionary but, at the same time, not so hard to accept. Drupal 7, as a content management system, was widely welcomed. But it fell short in certain aspects, like developer adoptability, easy upgrade paths, and API support. Drupal 8 changed everything. The Core committee did not choose to build upon Drupal 7, which would have been the easier choice for an open source CMS. For a more future-proof CMS that is ready to accept change, Drupal 8 had to be rebuilt with more modern components like Symfony, Twig, and PHP 7, and guided by initiatives like the API-first, mobile-first, and Configuration Management initiatives.
Drupal 8 was released with a promise of providing more ambitious digital experiences with better UX improvements and mobile compatibilities. The goal was to continuously innovate and reinvent itself. For this to work, these practices needed to be put in place: semantic versioning (major.minor.patch), scheduled releases (two minor releases per year), and introducing experimental modules in Core. All of this while providing backward compatibility and removing deprecated code.
Lets look at some of the promises that have been delivered with each minor version of Drupal 8.
* **Drupal 8.0**
* Modern and sophisticated PHP practices, object-oriented programming, and libraries.
* Storage and management of configuration was a bit of a messy affair with Drupal 7. The Configuration Management Initiative was introduced with Drupal 8.0, which allowed for cleaner installations and config management. Configurations are now stored in easily readable YAML format files. These config files can also be readily imported. This allows for smooth and easy transitions to different deployment environments.
* Adding Symfony components drastically improved Drupal 8s flexibility, performance, and robustness. Symfony is an open source PHP framework, and it abides by the MVC (Model-View-Controller) architecture.
* Twig, a powerful template engine for PHP, replaced PHPTemplate, Drupal’s template engine since 2005. With Twig, the code is more readable, and the theme system is less complex, uses inheritance to avoid redundant code, and offers more security by sanitizing variables and functions.
* The Entity API, which was quite limited and a contributed module in Drupal 7, is now full-fledged and is in Drupal 8 Core. Since Drupal 8 treats everything as an "entity," the Entity API provides a standardized method of working with them.
* The CKEditor, which is a WYSIWYG (What You See Is What You Get) editor, was introduced. It allows for editing on the go, in-context editing, and previewing your changes before they get published.
* **Drupal 8.1**
* The alpha version of the BigPipe module got introduced to Core as an experimental module. BigPipe renders Drupal 8 pages faster using methods like caching and auto-placeholder-ing.
* A Migrate UI module suite got introduced to Core as an experimental module. It makes migrating from Drupal 7 to Drupal 8 easier.
* The CKEditor now includes spell-check functionality and the ability to add optional languages in text.
* Improved testing infrastructure and support, especially for JavaScript interactions.
* Composer is an essential tool to manage third-party dependencies of websites and modules. With Drupal 8.1, Drupal Core and all its dependencies are now managed and packaged by Composer.
* **Drupal 8.2**
* The Place Block module is now an experimental module in Core. With this module, you can easily play around with blocks right from the web UI. Configuring and editing blocks can be done effortlessly.
* A new Content Moderation module that is based on the contributed module Workbench Moderation has been introduced as an experimental module in Core. It allows for granular workflow permissions and support.
* Content authoring experiences have been enhanced with better revision history and recovery.
* Improved page caching for 404 responses.
* **Drupal 8.3**
* The BigPipe module is now stable!
* More improvements in the CKEditor. A smooth copy-paste experience from Word, drag and drop images, and an Autogrow plugin that lets you work with bigger screen sizes and more.
* Better admin status reporting for improved administrator experience.
* The Field Layout module was added as an experimental module in Core. This module replaces the Display Suite in Drupal 7 and allows for arranging and assigning layouts to different content types.
* **Drupal 8.4**
* The 8.4 version brings stable releases of many previously experimental modules.
* Inline Form Errors module, which was introduced in Drupal 8.0, is now stable. With this module, form errors are placed next to the form element in question, and a summary of the errors is provided on the top of the form.
* Another stable release—the DateTime Range module that allows date formats to match that of the Calendar module.
* The Layout Discovery API, which was added as an experimental module in Drupal 8.3, is now stable and ready to roll. With this module, the Layout API is added to Drupal 8 Core. It has adopted the previously popular contributed modules—Panels and Panelizer—that were used extensively to create amazing layouts. Drupal 8s Layout initiative has ensured that you have a powerful Layout building tool right out of the box.
* The very popular Media module is added as an API for developers to be able to port a wide range of Media contributed modules from Drupal 7; for example, media modules like Media Entity, Media Entity Document, Media Entity Browser, Media Entity Image, and more. However, this module remains hidden from site builders until the porting and issue fixing are complete.
* **Drupal 8.5**
* One of the top goals that Drupal 8 set out to reach was making rich images, media integration, and asset management easier and better for content authors. It has achieved this goal by bringing the Media module into Core (and it isn’t hidden anymore).
* Content Moderation module is now stable. Defining various levels and statuses of workflow and moving them around is effortless.
* The Layout Builder module is introduced as an experimental module. It gives site builders full control and flexibility to customize and build layouts from other layout components, blocks, and regions. This has been one of the top goals for Drupal 8 site builders.
* The Migrate UI module suite that was experimental in Drupal 8.1 is now considered stable.
* The BigPipe module, which became stable back in version 8.3, now comes by default in the standard installation profile. All Drupal 8 sites are now faster by default.
* PHP 7.2 is here, and Drupal 8.5 now runs on it and fully supports the new features and performance improvements that it offers.
* **Drupal 8.6**
* The very helpful oEmbed format is now supported in the Drupal 8.6 Media module. The oEmbed API helps in displaying embedded content when a URL for that resource is posted. Also included within the Media module is support for embedding YouTube and Vimeo videos.
* An experimental Media Library module is now in Core. Adding and browsing multiple media is now supported and can also be customized.
* A new demo site called Umami has been introduced that demonstrates Drupal 8's Core features. This installation profile can give a new site builder a peek into Drupals capabilities and allows them to play around with views, fields, and pages for learning purposes. It also acts as an excellent tool for Drupal agencies to showcase Drupal 8 to its customers.
* Workspaces module is introduced as an experimental module. When you have multiple content packages that need to be reviewed (status change) and deployed, this module lets you do all of it together and saves you a lot of time.
* Installing Drupal has gotten easier with this version, which offers two new, easy ways of doing it: a “quick start” command that only requires you to have PHP installed, and an installer that automatically identifies whether there has been a previous installation and lets you install from there.
* **Drupal 8.7**
* One of the most significant additions to Drupal Core that went straight there as a stable module is the JSON:API module. It takes forward Drupals API-first initiative and provides an easy way to build decoupled applications.
* The Layout Builder module is now stable and better than ever before. It now even lets you work with unstructured data as well as fieldable entities.
* The Media Library module gets a fresh new look with this version release. Marketers and content editors now have it much easier, with the ability to search, attach, drag, and drop media files whenever and wherever they need them.
* Fully supports PHP 7.3.
* Taxonomy and Menu items are revision-able, which means that they can be used in editorial workflows and can be assigned statuses.
* **Drupal 8.8**
* This version is going to be the last minor version of Drupal 8 where you will find new features or deprecations. The next version, Drupal 8.9, will not include any new additions but will be very similar to Drupal 9.0.
* The Media Library module is now stable and ready to use.
* Workspaces module is now enhanced to include adding hierarchical workspaces. This gives more flexibility in the hands of the content editor. It also works with the Content Moderation module now.
* Composer now receives native support and does not need external projects to package Drupal with its dependencies. You can create new projects with just a one-line command using Composer (a sketch of this command, along with the 8.6 quick start installer, follows this list).
* Keeping its promises on making Drupal easier to learn for newbies, a new experimental module for Help Topics has been introduced. Each module, theme, and installation profile can have task-based help topics.
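To make the installation improvements above concrete, here’s a rough sketch of the commands described under Drupal 8.6 and 8.8. The project name “my_site” and the local URL are hypothetical placeholders, and the paths assume the layout that the Composer project template sets up:
```bash
# Drupal 8.8: create a new project with a one-line Composer command
# ("my_site" is a hypothetical directory name).
composer create-project drupal/recommended-project my_site

# Drupal 8.6+: the "quick start" installer needs only PHP. It installs
# a throwaway site (here with the Umami demo profile) and serves it on
# a local web server, printing the URL to visit.
cd my_site
php web/core/scripts/drupal quick-start demo_umami

# Drupal 8.7+: with the JSON:API module enabled, content is exposed at
# predictable paths; the host and port are whatever quick-start printed.
curl http://127.0.0.1:8888/jsonapi/node/article
```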
### Opening doors to a wider set of developers
Although Drupal was largely accepted and loved for its flexibility, resilience, and, most of all, its content management abilities, there was a nagging problem—the "steep learning curve" issue. While many Drupalers argue that a steep learning curve is part and parcel of a CMS that can build highly complex and powerful applications, finding Drupal talent remains a challenge. Dries, the founder of Drupal, says, "For most people new to Drupal, Drupal 7 is really complex." He adds that this could be because of its reliance on procedural programming, heavy use of structured arrays, and other such "Drupalisms" (as he calls them).
This issue needed to be tackled. With Drupal 8 adopting modern platforms and standards like object-oriented programming concepts, the latest PHP standards, the Symfony framework, and established design patterns, the doors are now flung wide open to a broad range of talent: site builders, themers, and developers.
### Final thoughts
"The whole of science is nothing more than a refinement of everyday thinking." Albert Einstein.
Open source today is more than just free software. It is a body of collaborated knowledge and effort that is revolutionizing the digital ecosystem. The digital world is moving at a scarily rapid pace, and I believe it is only innovation and perseverance from open source communities that can bring it to speed. The Drupal community unwaveringly reinvents and refines itself each day, which is especially seen in the latest release of Drupal 8.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/drupal-8-promises
作者:[Shefali Shetty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/shefalishetty
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/drupal_blue_gray_lead.jpeg?itok=eSkFp_ur (Drupal logo with gray and blue)
[2]: https://www.specbee.com/drupal-web-development-services

View File

@ -1,72 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source vs. proprietary: What's the difference?)
[#]: via: (https://opensource.com/article/20/2/open-source-vs-proprietary)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Open source vs. proprietary: What's the difference?
======
Need four good reasons to tell your friends to use open source? Here's
how to make your case.
![Doodles of the word open][1]
There's a lot to be learned from open source projects. After all, managing hundreds of disparate, asynchronous commits and bugs doesn't happen by accident. Someone or something has to coordinate releases, and keep all the code and project roadmaps organized. It's a lot like life. You have lots of tasks demanding your attention, and you have to tend to each in turn. To ensure everything gets done before its deadline, you try to stay organized and focused.
Fortunately, there are [applications out there][2] designed to help with that sort of thing, and many apply just as well to real life as they do to software.
Here are some reasons for choosing [open tools][3] when improving personal or project-based organization.
### Data ownership
It's rarely profitable for proprietary tools to provide you with [data][4] dumps. Some products, usually after a long battle with their users (and sometimes a lawsuit), provide ways to extract your data from them. But the real issue isn't whether a company lets you extract data; it's the fact that the capability to get to your data isn't guaranteed in the first place. It's your data, and when it's literally what you do each day, it is, in a way, your life. Nobody should have primary access to that but you, so why should you have to petition a company for a copy?
Using an open source tool ensures you have priority access to your own activities. When you need a copy of something, you already have it. When you need to export it from one application to another, you have complete control of how the data is exchanged. If you need to export your schedule from a calendar into your kanban board, you can manipulate and process the data to fit. You don't have to wait for functionality to be added to the app, because you own the data, the database, and the app.
### Working for yourself
When you use open source tools, you often end up improving them, sometimes whether you know it or not. You may not (or you may!) download the source and hack on code, but you probably fall into a way of using the tool that works best for you. You optimize your interaction with the tool. The unique way you interact with your tooling creates a kind of meta-tool: you haven't changed the software, but you've adapted it and yourself in ways that the project author and a dozen other users never imagined. Everyone does this with whatever software they rely upon, and it's why sitting at someone else's computer to use familiar software (or even just looking over someone's shoulder) often feels foreign, like you're using a different version of the application than you're used to.
When you do this with proprietary software, you're either contributing to someone else's marketplace for free, or you're adjusting your own behavior based on forces outside your own control. When you optimize an open source tool, both the software and the interaction belong to you.
### The right to not upgrade
Tools change. It's the way of things.
Change can be frustrating, but it can be crippling when a service changes so severely that it breaks your workflow. A proprietary service has and maintains every right to change its product, and you explicitly accept this by using the product. If your favorite accounting software or scheduling web app changes its interface or its output options, you usually have no recourse but to adapt or stop using the service. Proprietary services reserve the right to remove features, arbitrarily and without warning, and it's not uncommon for companies to start out with an open API and strong compatibility with open source, only to drop these conveniences once their customer base has reached critical mass.
Open source changes, too. Changes in open source can be frustrating, too, and it can even drive users to alternative open source solutions. The difference is that when open source changes, you still own the unchanged code base. More importantly, lots of other people do too, and if there's enough desire for it, the project can be forked. There are several famous examples of this, but admittedly there are just as many examples where the demand was _not_ great enough, and users essentially had to adapt.
Even so, users are never truly forced to do anything in open source. If you want to hack together an old version of your mission-critical service on an old distro running ancient libraries in a virtual machine, you can do that because you own the code. When a proprietary service changes, you have no choice but to follow.
With open source, you can choose to forge your own path when necessary or follow the developers when convenient.
### Open for collaboration
Proprietary services can affect others in ways you may not realize. Closed source tools are accidentally insidious. If you use a proprietary product to manage your schedule or your recipes or your library, or you use a proprietary font in your graphic design or website, then the moment you need to coordinate with someone else, you are essentially forcing them to sign up for the same proprietary service because proprietary services usually require accounts. Of course, the same is sometimes true for an open source solution, but it's not common for open source products to collect and sell user data the way proprietary vendors do, so the stakes aren't quite the same.
### Independence
Ultimately, the open source advantage is one of independence for you and for those you want to collaborate with. Not everyone uses open source, and even if everyone did, not everyone would use the exact same tool or the same assets, so there will always be some negotiation when sharing data. However, by keeping your data and projects open, you enable everyone (your future self included) to contribute.
What steps do you take to ensure your work is open and accessible? Tell us in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/open-source-vs-proprietary
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_doodles.png?itok=W_0DOMM4 (Doodles of the word open)
[2]: https://opensource.com/article/20/1/open-source-productivity-tools
[3]: https://opensource.com/tags/tools
[4]: https://opensource.com/tags/analytics-and-metrics

View File

@ -1,106 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 firewall features IT pros should know about but probably dont)
[#]: via: (https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
5 firewall features IT pros should know about but probably dont
======
As a foundational network defense, firewalls continue to be enhanced with new features, so many that some important capabilities get overlooked.
[Firewalls][1] continuously evolve to remain a staple of network security by incorporating the functionality of standalone devices, embracing network-architecture changes, and integrating outside data sources to add intelligence to the decisions they make. The result is a daunting wealth of possibilities that is difficult to keep track of.
Because of this richness of features, next-generation firewalls are difficult to master fully, and important capabilities sometimes can be, and in practice are, overlooked.
Here is a short list of newer features IT pros should be aware of.
### Also in this series:
* [Cybersecurity in 2020: From secure code to defense in depth][4] (CSO)
* [More targeted, sophisticated and costly: Why ransomware might be your biggest threat][5] (CSO)
* [How to bring security into agile development and CI/CD][6] (Infoworld)
* [UEM to marry security finally after long courtship][7] (Computerworld)
* [Security vs. innovation: IT's trickiest balancing act][8] (CIO)
## Network segmentation
Dividing a single physical network into multiple logical networks is known as [network segmentation][9]. Each segment behaves as if it runs on its own physical network; traffic from one segment can't be seen by or passed to another segment.
This significantly reduces the attack surface in the event of a breach. For example, a hospital could put all its medical devices into one segment and its patient records into another. Then, if hackers breached a heart pump that was not secured properly, they still could not access private patient information.
It's important to note that many of the connected things that make up the [internet of things][10] run older operating systems and are inherently insecure, making them a point of entry for attackers, so the growth of IoT and its distributed nature drives up the need for network segmentation.
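To make the isolation property concrete, here is a toy Python sketch with invented segment names and address ranges; real segmentation is enforced in the firewall or switch fabric, not in application code.

```python
# Toy illustration of segment isolation: traffic is permitted within a
# segment and denied across segments unless explicitly allowed.
# Segment names and address ranges are invented for this example.
import ipaddress

SEGMENTS = {
    "medical-devices": ipaddress.ip_network("10.1.0.0/16"),
    "patient-records": ipaddress.ip_network("10.2.0.0/16"),
}
ALLOWED_CROSS = set()  # no (src, dst) segment pairs: full isolation

def segment_of(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    return next(name for name, net in SEGMENTS.items() if addr in net)

def allowed(src_ip: str, dst_ip: str) -> bool:
    src, dst = segment_of(src_ip), segment_of(dst_ip)
    return src == dst or (src, dst) in ALLOWED_CROSS

print(allowed("10.1.0.5", "10.1.9.9"))   # True: same segment
print(allowed("10.1.0.5", "10.2.0.17"))  # False: a breached device
                                         # cannot reach patient records
```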
## Policy optimization
Firewall policies and rules are the engine that makes firewalls go. Most security professionals are terrified of removing older policies because they don't know when they were put in place or why. As a result, rules keep getting added with no thought of reducing the overall number; some enterprises say they have millions of firewall rules in place. The fact is, too many rules add complexity, can conflict with each other, and are time-consuming to manage and troubleshoot.
Policy optimization migrates legacy security-policy rules to application-based rules that permit or deny traffic based on what application is being used. This improves overall security by reducing the attack surface and also provides the visibility to safely enable application access. Policy optimization identifies port-based rules so they can be converted to application-based whitelist rules, or adds applications from a port-based rule to an existing application-based rule, without compromising application availability. It also identifies over-provisioned application-based rules. Policy optimization helps prioritize which port-based rules to migrate first, identify application-based rules that allow applications that aren't being used, and analyze rule-usage characteristics such as hit count, which compares how often a particular rule is applied vs. how often all the rules are applied.
Converting port-based rules to application-based rules improves security posture because the organization can whitelist the applications it wants and deny all others. That way, unwanted and potentially malicious traffic is eliminated from the network.
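A rough sketch of the rule-usage analysis described above might look like this; the rule records and their fields are invented for illustration.

```python
# Illustrative sketch of rule-usage analysis: rank port-based rules by
# hit count (migration candidates) and flag unused application-based
# rules. The rule records are invented for this example.
rules = [
    {"name": "allow-tcp-443", "type": "port", "hits": 98_000},
    {"name": "allow-tcp-8080", "type": "port", "hits": 12},
    {"name": "allow-salesforce", "type": "app", "hits": 0},
]

total_hits = sum(r["hits"] for r in rules) or 1

# Port-based rules, busiest first: the best candidates to convert into
# application-based whitelist rules.
for rule in sorted((r for r in rules if r["type"] == "port"),
                   key=lambda r: r["hits"], reverse=True):
    print(f"migrate {rule['name']}: {rule['hits'] / total_hits:.1%} of hits")

# Application-based rules with zero hits allow apps nobody uses.
unused = [r["name"] for r in rules if r["type"] == "app" and r["hits"] == 0]
print("unused app rules:", unused)
```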
## Credential-theft prevention
Historically, workers accessed corporate applications from company offices. Today they access legacy apps, SaaS apps and other cloud services from the office, home, airport and anywhere else they may be. This makes it much easier for threat actors to steal credentials. The Verizon [Data Breach Investigations Report][12] found that 81% of hacking-related breaches leveraged stolen and/or weak passwords.
Credential-theft prevention blocks employees from using corporate credentials on sites such as Facebook and Twitter. Even though these may be sanctioned applications, using corporate credentials to access them puts the business at risk.
Credential-theft prevention works by scanning username and password submissions to websites and comparing those submissions to lists of official corporate credentials. Businesses can choose which websites users may submit corporate credentials to, and block the rest based on the URL category of the website.
When the firewall detects a user attempting to submit credentials to a site in a category that is restricted, it can display a block-response page that prevents the user from submitting credentials. Alternatively, it can present a continue page that warns users against submitting credentials to sites classified in certain URL categories, but still allows them to continue with the credential submission. Security professionals can customize these block pages to educate users against reusing corporate credentials, even on legitimate, non-phishing sites.
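A toy sketch of that screening step follows; a production firewall matches submissions against directory-derived data rather than a hardcoded list, and every name and category here is invented.

```python
# Toy sketch of credential-submission screening. All names, categories,
# and the user list are invented for illustration; real products sync
# hashed credentials from the corporate directory.
import hashlib

corporate_users = {
    hashlib.sha256(u.encode()).hexdigest()
    for u in ["alice@corp.example", "bob@corp.example"]
}
blocked_categories = {"social-networking", "webmail"}

def screen(username: str, site_category: str) -> str:
    digest = hashlib.sha256(username.encode()).hexdigest()
    if digest in corporate_users and site_category in blocked_categories:
        return "block"          # show a block-response page
    if digest in corporate_users:
        return "warn-continue"  # warn, but allow the submission
    return "allow"

print(screen("alice@corp.example", "social-networking"))     # block
print(screen("visitor@mail.example", "social-networking"))   # allow
```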
## DNS security
A combination of machine learning, analytics and automation can block attacks that leverage the [Domain Name System (DNS)][13]. In many enterprises, DNS servers are unsecured and completely wide open to attacks that redirect users to bad sites where they are phished and where data is stolen. Threat actors have a high degree of success with DNS-based attacks because security teams have very little visibility into how attackers use the service to maintain control of infected devices. There are some standalone DNS security services that are moderately effective but lack the volume of data to recognize all attacks.
When DNS security is integrated into firewalls, machine learning can analyze the massive amount of network data, making standalone analysis tools unnecessary. DNS security integrated into a firewall can predict and block malicious domains through automation and real-time analysis. As the number of bad domains grows, machine learning can find them quickly and ensure they don't become problems.
Integrated DNS security can also use machine-learning analytics to neutralize DNS tunneling, which smuggles data through firewalls by hiding it within DNS requests, and to find malware command-and-control servers. It builds on top of signature-based systems to identify advanced tunneling methods and automates the shutdown of DNS-tunneling attacks.
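One simple heuristic that such analytics build on is flagging unusually long or random-looking DNS labels. The sketch below is a deliberately simplified stand-in for the machine-learning models described above, with arbitrary thresholds.

```python
# Simplified stand-in for DNS-tunneling detection: flag query names
# whose longest label is unusually long or high-entropy. Thresholds are
# arbitrary; real products use ML models trained on far richer data.
import math
from collections import Counter

def entropy(s: str) -> float:
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def suspicious(qname: str, max_len: int = 40, max_entropy: float = 4.0) -> bool:
    label = max(qname.rstrip(".").split("."), key=len)
    return len(label) > max_len or entropy(label) > max_entropy

print(suspicious("www.example.com"))  # False
print(suspicious("x7f9q2m4t8k1r5w3n6v0b2d4g8j1l5p9s3u7y1a5c9e3.evil.test"))  # True
```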
## Dynamic user groups
It's possible to create policies that automate the remediation of anomalous worker activity. The basic premise is that users' roles within a group mean their network behaviors should be similar to each other. For example, if a worker is phished and strange apps get installed, this would stand out and could indicate a breach.
Historically, quarantining a group of users was highly time-consuming because each member of the group had to be addressed and policies enforced individually. With dynamic user groups, when the firewall sees an anomaly, it creates policies that counter the anomaly and pushes them out to the user group. The entire group is automatically updated without having to manually create and commit policies. So, for example, all the people in accounting would receive the same policy update automatically, at once, instead of manually, one at a time. Integration with the firewall also lets it distribute the policies for the user group to all the other infrastructure that requires them, including other firewalls, log collectors, or applications.
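A simplified sketch of that flow, with invented names and a toy policy shape:

```python
# Toy sketch of dynamic user groups: one anomaly triggers one policy
# update that applies to every member of the group at once. The group,
# members, and policy shape are invented for illustration.
groups = {"accounting": ["uma", "raj", "mei"]}
policies = {}  # group name -> active policy

def on_anomaly(group: str, indicator: str) -> None:
    # The policy is bound to the group, not created per user.
    policies[group] = {"action": "quarantine", "reason": indicator}
    # Push the update to every enforcement point that needs it:
    # other firewalls, log collectors, applications.
    for member in groups[group]:
        print(f"quarantining {member} ({indicator})")

on_anomaly("accounting", "unexpected app install")
```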
Firewalls have been and will continue to be the anchor of cybersecurity. They are the first line of defense and can thwart many attacks before they penetrate the enterprise network. Maximizing the value of firewalls means turning on many of these advanced features, some of which have been in firewalls for years but not turned on for a variety of reasons.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.csoonline.com/article/3519913/cybersecurity-in-2020-from-secure-code-to-defense-in-depth.html
[5]: https://www.csoonline.com/article/3518864/more-targeted-sophisticated-and-costly-why-ransomware-might-be-your-biggest-threat.html
[6]: https://www.infoworld.com/article/3520969/how-to-bring-security-into-agile-development-and-cicd.html
[7]: https://www.computerworld.com/article/3516136/uem-to-marry-security-finally-after-long-courtship.html
[8]: https://www.cio.com/article/3521009/security-vs-innovation-its-trickiest-balancing-act.html
[9]: https://www.networkworld.com/article/3016565/how-network-segmentation-provides-a-path-to-iot-security.html
[10]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[11]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[12]: https://enterprise.verizon.com/resources/reports/dbir/
[13]: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world

View File

@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Future smart walls key to IoT)
[#]: via: (https://www.networkworld.com/article/3519440/future-smart-walls-key-to-iot.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Future smart walls key to IoT
======
MIT researchers are developing a wallpaper-like material that's made up of simple RF switch elements and can be applied to building surfaces. Using beamforming, the antenna array could potentially improve wireless signal strength nearly tenfold.
IoT equipment designers shooting for efficiency should explore the potential for using buildings as antennas, researchers say.
Environmental surfaces such as walls can be used to intercept and beam signals, which can increase reliability and data throughput for devices, according to MIT's Computer Science and Artificial Intelligence Laboratory ([CSAIL][1]).
Researchers at CSAIL have been working on a smart-surface repeating antenna array called RFocus. The antennas, which could be applied in sheets like wallpaper, are designed to be incorporated into office spaces and factories. Radios that broadcast signals could then become smaller and less power intensive.
“Tests showed that RFocus could improve the average signal strength by a factor of almost 10,” CSAIL's Adam Conner-Simons [writes in MIT News][3]. “The platform is also very cost-effective, with each antenna costing only a few cents.”
The prototype system CSAIL developed uses more than 3,000 antennas embedded into sheets, which are then hung on walls. In future applications, the antennas could adhere directly to the wall or be integrated during building construction.
“People have had things completely backwards this whole time,” the article claims. “Rather than focusing on the transmitters and receivers, what if we could amplify the signal by adding antennas to an external surface in the environment itself?”
RFocus relies on [beamforming][4]; multiple antennas broadcast the same signal at slightly different times, and as a result, some of the signals cancel each other and some strengthen each other. When properly executed, beamforming can focus a stronger signal in a particular direction.
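The interference effect is easy to reproduce numerically. The short sketch below sums two identical sinusoids: in phase, they double in amplitude; offset by half a cycle, they cancel. A beamformer chooses per-antenna delays so that the signals arrive in phase at the intended target.

```python
# Numerical illustration of the interference behind beamforming: two
# identical sinusoids double in amplitude when in phase and cancel when
# offset by half a cycle.
import numpy as np

t = np.linspace(0, 1, 1000)  # one second of samples
f = 5.0                      # signal frequency in Hz
base = np.sin(2 * np.pi * f * t)

in_phase = base + np.sin(2 * np.pi * f * t)            # zero delay
anti_phase = base + np.sin(2 * np.pi * f * t + np.pi)  # half-cycle delay

print(f"in-phase peak:   {in_phase.max():.2f}")            # ~2.0, constructive
print(f"anti-phase peak: {np.abs(anti_phase).max():.1e}")  # ~0, destructive
```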
"The surface does not emit any power of its own," the developers explain in their paper ([PDF][6]). The antennas, or RF switch elements, as the group describes them, either let a signal pass through or reflect it through software. Signal measurements allow the apparatus to define exactly what gets through and how its directed.
Importantly, the RFocus surface functions with no additional power requirements. The “RFocus surface can be manufactured as an inexpensive thin wallpaper requiring no wiring,” the group says.
### Antenna design
Antenna engineering is turning into a vital part of IoT development. It's one of the principal reasons data throughput and reliability keep improving in wireless networks.
Arrays in which multiple active panel components make up the antenna, rather than the simple passive wire of traditional radio, are one example of such advancement in antenna engineering.
[Spray-on antennas][7] (unrelated to the CSAIL work) are another in-the-works technology I've written about. In that case, flexible substrates create the antenna, which is applied in a manner similar to spray paint. Another future direction could be anti-laser antennas: [reversing a laser][8], where the laser becomes an absorber of light rather than the sender of it, could allow all data-carrying energy to be absorbed, making it the perfect light-based antenna.
Development of 6G wireless, which is projected to supersede 5G sometime around 2030, includes efforts to figure out how to directly [couple antennas to fiber][9]—the radio ends up being part of the cable, in other words.
"We cant get faster internet speeds without more efficient ways of delivering wireless signals," CSAILs Conner-Simons says.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519440/future-smart-walls-key-to-iot.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.csail.mit.edu/
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: http://news.mit.edu/2020/smart-surface-smart-devices-mit-csail-0203
[4]: https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html
[5]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[6]: https://drive.google.com/file/d/1TLfH-r2w1zlGBbeeM6us2sg0yq6Lm2wF/view
[7]: https://www.networkworld.com/article/3309449/spray-on-antennas-will-revolutionize-the-internet-of-things.html
[8]: https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html
[9]: https://www.networkworld.com/article/3438337/how-6g-will-work-terahertz-to-fiber-conversion.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -1,75 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Who should lead the push for IoT security?)
[#]: via: (https://www.networkworld.com/article/3526490/who-should-lead-the-push-for-iot-security.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Who should lead the push for IoT security?
======
Industry groups and governmental agencies have been taking a stab at rules to improve the security of the internet of things, but so far there's nothing comprehensive.
The ease with which internet of things devices can be compromised, coupled with the potentially extreme consequences of breaches, has prompted action from legislatures and regulators, but which group is best placed to decide?
Both the makers of [IoT][1] devices and governments are aware of the security issues, but so far they haven't come up with standardized ways to address them.
“The challenge of this market is that it's moving so fast that no regulation is going to be able to keep pace with the devices that are being connected,” said Forrester vice president and research director Merritt Maxim. “Regulations that are definitive are easy to enforce and helpful, but they'll quickly become outdated.”
The latest such effort by a governmental body is a proposed regulation in the U.K. that would impose three major mandates on IoT device manufacturers to address key security concerns:
* device passwords would have to be unique, and resetting them to factory defaults would be prohibited
* device makers would have to offer a public point of contact for the disclosure of vulnerabilities
* device makers would have to “explicitly state the minimum length of time for which the device will receive security updates”
This proposal is patterned after a California law that took effect last month. Both sets of rules would likely have a global impact on the manufacture of IoT devices, even though they're being imposed on limited jurisdictions. That's because it's expensive for device makers to create separate versions of their products.
IoT-specific regulations aren't the only ones that can have an impact on the marketplace. Depending on the type of information a given device handles, it could be subject to the growing list of data-privacy laws being implemented around the world, most notably Europe's General Data Protection Regulation, as well as industry-specific regulations in the U.S. and elsewhere.
The U.S. Food and Drug Administration, noted Maxim, has been particularly active in trying to address device-security flaws. For example, last year it issued [security warnings][3] about 11 vulnerabilities, discovered by IoT security vendor [Armis][4], that could compromise medical IoT devices. In other cases it has issued fines against healthcare providers.
But there's a broader issue with devising definitive regulations for IoT devices in general, as opposed to prescriptive ones that simply urge manufacturers to adopt best practices, he said.
Particular companies might have integrated security frameworks covering their vertically integrated products, such as an [industrial IoT][6] company providing security across factory-floor sensors, but that kind of security is incomplete in the multi-vendor world of IoT.
Perhaps the closest thing to a general IoT-security standard is currently being worked on by Underwriters Laboratories (UL), the security-testing non-profit best known for its century-old certification program for electrical equipment. UL's [IoT Security Rating Program][7] offers a five-tier system for ranking the security of connected devices: bronze, silver, gold, platinum, and diamond.
Bronze certification means that the device has addressed the most glaring security flaws, similar to those outlined in the recent U.K. and California legislation. [The higher ratings][8] include capabilities like ongoing security maintenance, improved access control, and known-threat testing.
While government regulation and voluntary industry improvements can help keep future IoT systems safe, neither addresses two key issues in the IoT security puzzle: the millions of insecure devices that have already been deployed, and user apathy about making their systems as safe as possible, according to Maxim.
“Requiring non-default passwords is good, but that doesn't stop users from setting insecure passwords,” he warned. “The challenge is, do customers care? Are they willing to pay extra for products with that certification?”
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3526490/who-should-lead-the-push-for-iot-security.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.fda.gov/medical-devices/safety-communications/urgent11-cybersecurity-vulnerabilities-widely-used-third-party-software-component-may-introduce
[4]: https://www.armis.com/
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://www.networkworld.com/article/3243928/what-is-the-industrial-internet-of-things-essentials-of-iiot.html
[7]: https://ims.ul.com/iot-security-rating-levels
[8]: https://www.cnx-software.com/2019/12/30/ul-iot-security-rating-system-ranks-iot-devices-security-from-bronze-to-diamond/
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -1,91 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why innovation can't happen without standardization)
[#]: via: (https://opensource.com/open-organization/20/2/standardization-versus-innovation)
[#]: author: (Len Dimaggio https://opensource.com/users/ldimaggi)
Why innovation can't happen without standardization
======
Balancing standardization and innovation is critical during times of
organizational change. And it's an ongoing issue in open organizations,
where change is constant.
![and old computer and a new computer, representing migration to new software or hardware][1]
Any organization facing the prospect of change will confront an underlying tension between competing needs for standardization and innovation. Achieving the correct balance between these needs can be essential to an organization's success.
Experiencing too much of either can lead to morale and productivity problems. Over-stressing standardization, for example, can have a stifling effect on the team's ability to innovate to solve new problems. Unfettered innovation, on the other hand, can lead to time lost due to duplicated or misdirected efforts.
Finding and maintaining the correct balance between standardization and innovation is critical during times of organizational change. In this article, I'll outline various considerations your organization might make when attempting to strike this critical balance.
### The need for standardization
When North American beavers hear running water, they instinctively start building a dam. When some people see a problem, they look to build or buy a new product or tool to solve that problem. Technological advances make modeling business process solutions or setting up production or customer-facing systems much easier than in the past. The ease with which organizational actors can introduce new systems can occasionally, however, lead to problems. Duplicate, conflicting, or incompatible systems—or systems that, while useful, do not address a team's highest priorities—can find their way into organizations, complicating processes.
This is where standardization can help. By agreeing on and implementing a common set of tools and processes, teams become more efficient, as they reduce the need for new development methods, customized training, and maintenance.
Standardization has several benefits:
* **Reliability, predictability, and safety.** Think about the electricity in your own home and the history of electrical systems. In the early days of electrification, companies competed to establish individual standards for basic elements like plug configurations and safety requirements like insulation. Thanks to standardization, when you buy a light bulb today you can be sure that it will fit and not start a fire.
* **Lower costs and more dependable, repeatable processes.** Standardization frees people in organizations to focus more attention on other things—products, for instance—and not on the need to coordinate the use of potentially conflicting new tools and processes. And it can make people's skills more portable (or, in budgeting terms, more "fungible") across projects, since all projects share a common set of standards. In addition to helping project teams be more flexible, this portability of skills makes it easier for people to adopt new assignments.
* **Consistent measurements.** Creating a set of consistent metrics people can use to assess product quality across multiple products or multiple releases of individual products is possible through standardization. Without it, applying this kind of consistent measurement to product quality and maintaining any kind of history of tracking such quality can be difficult. Standardization effectively provides the organization a common language for measuring quality.
A danger of standardization arises when it becomes an all-consuming end in itself. A constant push to standardize can inadvertently stifle creativity and innovation. Taken too far, policies that overemphasize standardization discourage people from finding new solutions to new problems; taken to an extreme, they can create a suffocating organizational atmosphere in which people are reluctant to propose new solutions in the interest of maintaining standardization or conformity. In an open organization especially focused on generating new value and solutions, an attempt to impose standardization can have a negative impact on team morale.
Viewing new challenges through the lens of former solutions is natural. Likewise, it's common (and in fact generally practical) to apply legacy tools and processes to solving new problems.
But in open organizations, change is constant. We must always adapt to it.
### The need for innovation
Digital technology changes at a rapid rate, and that rate of change is always increasing. New opportunities result in new problems that require new solutions. Any organization must be able to adapt, and its people must have the freedom to innovate. This is even more important in an open organization and with open source software, as many of the factors (e.g., restrictive licenses) that blocked innovation in the past no longer apply.
When considering the prospect of innovation in your organization, keep in mind the following:
* **Standardization doesn't have to be the end of innovation.** Even tools and processes that are firmly established and in use by an organization were once very new and untried, and they only came about through processes of organizational innovation.
* **Progress through innovation also involves failure.** It's very often the case that some innovations fail, but when they fail, they point the way forward to solutions. This progress therefore requires that an organization protect the freedom to fail. (In competitive sports, athletes and teams seldom learn lessons from easy victories; they learn lessons about how to win, including how to innovate to win, from failures and defeats.)
Freedom to innovate, however, cannot be freedom to do whatever the heck we feel like doing. The challenge for any organization is to be able to encourage and inspire innovation, but at the same time to keep innovation efforts focused towards meeting your organization's goals and to address the problems that you're trying to solve.
In closed organizations, leaders may be inclined to impose rigid, top-down limits on innovation. A better approach is to instead provide a direction or path forward in terms of goals and deliverables, and then enable people to find their own ways along that path. That forward path is usually not a straight line; [innovation is almost never a linear process][2]. Like a sailboat making progress into the wind, it's sometimes [necessary to "tack" or go sideways][3] in order to make forward progress.
### Blending standardization with focused innovation
Are we doomed to always think of standardization as the broccoli we _must_ eat, while innovation is the ice cream we _want_ to eat?
It doesn't have to be this way.
Perceptions play a role in the conflict between standardization and innovation. People who only want to focus on standardization must remember that even the tools and processes that they want to promote as "the standard" were once new and represented change. Likewise, people who only want to focus on innovation have to remember that in order for a tool or process to provide value to an organization, it has to be stable enough for that organization to use it over time.
An important element of any successful organization, especially an open organization where everyone is free to express their views, is empathy for other people's views. A little empathy is necessary for understanding both perceptions of impending change.
I've always thought about standardization and innovation as being two halves of one solution. A good analogy is a college course catalog. In many colleges, all incoming first-year students, regardless of their major, take a core set of classes. These core classes can cover a wide range of subjects and provide each student with a standard educational foundation. Beyond the standardized core curriculum, then, each student is free to take specialized courses depending upon his or her degree requirements and selected electives, as they work to innovate in their respective fields.
Similarly, standardization provides a foundation on which innovation can build. Think of standardization as a core set of tools and practices you might apply to _all_ products. Innovation can take the form of tools and practices that go _above and beyond_ this standard. This enables every team to extend the core set of standardized tools and processes to meet the individual needs of their own specific projects. Standardization does not mean that all forward-looking action stops. Over time, what was an innovation can become a standard, and thereby make room for the next innovation (and the next).
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/2/standardization-versus-innovation
作者:[Len Dimaggio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ldimaggi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
[2]: https://opensource.com/open-organization/19/6/innovation-delusion
[3]: https://opensource.com/open-organization/18/5/navigating-disruption-1

View File

@ -1,67 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A SASE Crash Course)
[#]: via: (https://www.networkworld.com/article/3526455/a-sase-crash-course.html)
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)
A SASE Crash Course
======
Get up to speed fast on the Secure Access Service Edge, an emerging converged networking and security category that Gartner has labelled transformational.
2020! What could better motivate you to push ahead with your resolutions and your organization's digital transformation than a new year AND a new decade? As you put together your digital strategy, check out a new transformation-empowering (and transformational) technology category Gartner coined: the [Secure Access Service Edge][1], or SASE (pronounced “Sassy”). SASE converges wide area networking and identity-based security into a cloud service targeted directly to your branch offices, mobile users, cloud services, and even IoT devices, wherever they happen to be. The result: consistently high WAN performance, security, productivity, agility, and flexibility across the global, mobile, cloud-enabled enterprise.
To jumpstart your research into one of the few networking categories Gartner has labelled “transformational,” we've put together a very workable SASE crash course and reading list. Each lesson helps you dig a little deeper into SASE, so you can develop a good grasp of its components and transformational potential.
**Lesson 1: SASE as Defined by Gartner**
So, what is SASE exactly, and why should you care? SASE was coined by [Gartner][2] analysts Neil McDonald and Joe Skorupa in a [July 29, 2019 Networking Hype Cycle][3] [Market Trends Report, How to Win as WAN Edge and Security Converge into the Secure Access Service Edge][4] and an August 30, 2019 [Gartner][2] report, The Future of Network Security is in the Cloud. If you don't have access to these reports, Cato quotes the highlights of the former word for word in this short blog: [The Secure Access Service Edge (SASE), as Described in Gartner's Hype Cycle for Enterprise Networking, 2019][5]. It's a great place to get started on exactly what Gartner has to say about SASE and its drivers, likely development, and place in the digitally transforming enterprise. There are also some valuable links to more information on SASE and exactly how the Cato cloud fits into the SASE trend.
**Lesson 2: What SASE _Is_ and What It _Isn't_**
After Gartner piques your interest, get some valuable insight from Cato in this blog: [The Secure Access Service Edge (SASE): Here's Where Your Digital Business Network Starts][6]. Here you can learn why convergence of wide area networking and security is absolutely vital for the agile, digitally transforming enterprise and why legacy data center-centric solutions can't deliver anymore in a world of user mobility and the cloud. This blog breaks down the four essential attributes of SASE—identity driven, cloud native, support for all edges, and globally distributed—in detail. It also explains why SASE is _not_ anything like telco-managed services and summarizes how Cato delivers SASE effectively.
**Lesson 3: How Cato Delivers SASE**
Sometimes visual/audio-based learning can bring things into better focus than straight text, and few people are better at explaining WAN and security concepts than Yishay Yovel, Cato Networks' Chief Marketing Officer. In this short, 17-minute video presentation, [Intro to SASE by Yishay][7], Yishay digs into Gartner's take on SASE, why WAN and security need to converge, and why SASE is one of only three (out of 29) Networking Hype Cycle categories that Gartner has labeled “transformational.” Yishay gets into a lot of nitty-gritty SASE details and offers valuable perspective on how Cato Networks delivers a complete cloud-native SASE software stack that supports all edges and is identity-driven, scalable, flexible, and easy to deploy and manage. Yishay also explains clearly why some of the other WAN and security solutions out there don't fulfill some essential requirements of SASE, such as processing traffic close to the source. For visual learners, there are also some great architectural diagrams.
**Lesson 4: Gartner Webinar Breaks Down SASE and Its Implications**
You've heard it from Yishay; now hear it from Gartner's VP Distinguished Analyst Neil MacDonald _and_ Yishay in this 37-minute [Gartner Webinar: Is SASE the Future of SD-WAN and Network Security][8]? MacDonald explains SASE elements and drivers in depth, why SASE belongs in the cloud, how enterprises will adopt SASE, and how organizations should evaluate SASE offerings. There's some good detail here on how SASE works in different contexts and scenarios, such as a mobile employee connecting to Salesforce securely from the airport, a contractor accessing a web application from an unmanaged device, and even wind turbines collecting and aggregating data and sending it to the cloud for processing. Neil digs into core SASE requirements and recommends additional services and some other useful options. Yishay then takes over with why Cato is the world's first true SASE platform.
**Lesson 5: The White Paper**
But wait, there's more. Here's a clear and concise white paper from Cato, [The Network for the Digital Business Starts with the Secure Access Service Edge][9]. This is a good piece to give to the other digital transformation stakeholders in your business if you want them to get up to speed on SASE fast. It's a quick read that explains why the digital, mobile, cloud-enabled business needs a new converged network/security model. It also covers the four elements of SASE, core SASE capabilities, SASE benefits, and clear examples of what SASE _isn't_ and why. It describes the features that make Cato one of the most comprehensive SASE offerings on the market. It's a clear, concise presentation broken into short paragraphs and bullet points to provide a fast introduction to SASE and the Cato Cloud.
**Lesson 6: Icing on the Cake: The Short and Sweet Video**
[SASE (Secure Access Service Edge)][10] is a short YouTube video that goes along with the white paper, combining perspective and information from Gartner and Cato on why you need SASE simplicity for your digitally transforming business.
We hope you have a happy, healthy, transforming New Year. To accelerate your organization's digital transformation over the next decade, get up to speed on SASE with these useful blogs, videos, and white papers, and find out how SASE can help you make that transformation happen quickly and more easily.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3526455/a-sase-crash-course.html
作者:[Cato Networks][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: http://www.catonetworks.com/sase?utm_source=idg
[2]: https://en.wikipedia.org/wiki/Gartner
[3]: https://www.gartner.com/en/documents/3947237
[4]: https://www.gartner.com/en/documents/3953690/market-trends-how-to-win-as-wan-edge-and-security-conver
[5]: https://www.catonetworks.com/blog/the-secure-access-service-edge-sase-as-described-in-gartners-hype-cycle-for-enterprise-networking-2019/
[6]: https://www.catonetworks.com/blog/the-secure-access-service-edge-sase?utm_source=idg
[7]: https://catonetworks.wistia.com/medias/kn86smj7q4
[8]: https://go.catonetworks.com/VOD-REG-Gartner-SASE?utm_source=idg
[9]: https://go.catonetworks.com/The-Network-Starts-with-SASE?utm_source=idg
[10]: https://www.youtube.com/watch?v=gLN4NUbjml8

View File

@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Server sales projected to decline 10% due to coronavirus)
[#]: via: (https://www.networkworld.com/article/3526605/server-sales-projected-to-decline-10-due-to-coronavirus.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Server sales projected to decline 10% due to coronavirus
======
Demand isn't tapering off, but China is grinding to a halt under the strain of the pandemic.
Global server sales had been projected to grow by 1.2% compared to the most recent quarter, but the chaos wrought by the coronavirus in China will cause sales to decline 9.8% sequentially, according to DigiTimes Research.
DigiTimes is an IT publication based in Taiwan. Its proximity to Taiwanese and Chinese vendors gives it some good sources, but it can also be way off target. However, the signs are piling up that coronavirus is causing some real mayhem.
For example, DigiTimes also [reported][1] that fewer than 20% of Chinese factory employees would return to work after an extended Lunar New Year break due to the coronavirus outbreak, and that many component plants in China have decided not to restart production until February 25.
The Lunar New Year was January 25, so that means Chinese factories have been idle for a month. That's a lot of demand going unmet, and DigiTimes notes that server demand from large data centers remained strong in the first quarter of 2020.
Facebook in particular is interested in buying high-density models from white box vendors like Wiwynn and Quanta Computer, but due to the outbreak, these orders, which were originally scheduled for shipment this quarter, have been postponed.
So it's not as though an economic crash is causing sales to go off a cliff like in 2008. Demand is there, but China can't make the product right now. This year was expected to be a good one for server vendors, with all of them projecting sales increases over last year. AMD is ramping up Epyc production, and Intel is expected to release its next-generation “Ice Lake” Xeon platform in the third or fourth quarter of 2020.
The good news here is that Wuhan isn't a major tech manufacturing hub. It does have five display fabs, both LCD and OLED, but so does Shanghai. However, Wuhan has the most advanced display fabs, producing flexible OLEDs, and has the largest capacity, according to David Hsieh, senior director of displays at Omdia.
Vladimir Galabov, principal analyst for data-center compute in Omdia's cloud and data-center research practice, also expects to see server shipments impacted by the coronavirus driving a prolonged holiday period in China.
“I think the majority of the hit will be in the Chinese market,” he said. “This does impact server shipments globally as China represents about 30% of server shipments worldwide. So, I expect the quarterly decline to be more significant than the seasonal 10%. I expect that China will have a 5% additional downward impact on the growth.”
He added that Q4 of 2019 significantly exceeded his expectations due to cloud service providers making massive purchases. Based on data from 1Q19-3Q19, Omdia had expected servers shipped in 2019 to be flat compared to 2018; instead, shipments were up 2% to 3% for the year, thanks to the fourth-quarter spurt.
And servers aren't the only products taking a hit. DigiTimes says that should the outbreak of the coronavirus last until June, sales of smartphones in the country would be slashed by about 30%, from a projected 400 million units to 280 million units in 2020.
Also, Mobile World Congress in Barcelona is cancelled due to concerns about the global coronavirus outbreak. The [official cancellation][4] came after a number of big-name companies, including Intel, Cisco, Amazon, Sony, NTT Docomo, LG, ZTE, Nvidia, and Ericsson, bowed out of various events that were set for the show.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3526605/server-sales-projected-to-decline-10-due-to-coronavirus.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.digitimes.com/news/a20200210VL202.html
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.mwcbarcelona.com/attend/safety-security/gsma-statement-on-mwc-2020/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 steps for product marketing your open source project)
[#]: via: (https://opensource.com/article/20/2/product-marketing-open-source-project)
[#]: author: (Kevin Xu https://opensource.com/users/kevin-xu)
3 steps for product marketing your open source project
======
Marketing an open source project is mainly about education, and
traditional marketing techniques do not apply.
![People meeting][1]
I frequently get questions from open source project creators or new founders of commercial open source software (COSS) companies about the best way to market their product. Implicit in that inquiry lie more foundational questions: "What the hell is product marketing? How much time should I spend on it?"
This article aims to share some knowledge and specific action items to help open source creators understand product marketing as a concept and how to bootstrap it on their own until a project reaches the next level of traction.
### What is product marketing?
Product marketing for COSS is materially different from product marketing for proprietary software and from general marketing practices like ads, lead generation, sponsorships, booths at conferences and trade shows, etc. Because the source code is open for all to see and the project's evolutionary history is completely transparent, you need to articulate—from a technical level to a technical audience—how and why your project works.
Using the word "marketing" in this context is, in fact, misleading. It's really about product _education_. Your role is more like a coach, mentor, or teaching assistant in a computer science class or a code bootcamp than a "marketing person."
Proprietary software products rarely need this level of technical education because no one can see the source code anyway. Therefore, these companies focus on educating their audience about the product's business value, not its technical advantages.
To build a successful open source project (and any commercial product that may be derived from it), you must educate your audience on _both_ its technical details and business value.
While this may sound like extra work, it's an advantage inherent to COSS because so much buying power for technology products is shifting to developers. They care deeply about technical details and want to see and understand the source code. Being able to learn, appreciate, and have confidence in a project's technical design, architecture, and future roadmap are key to its adoption.
Also, developers often treat open source technology as a way to scratch their technical itch and stay sharp in a fast-moving technology landscape. It's an audience that yearns for education, above all.
Being able to speak to an audience that has these goals and desires is what product marketing and education in the COSS context is all about.
### How to bootstrap product marketing
So you (or maybe one or two other engineers) are laboring away to create your open source project, likely in the evening after your day job or on the weekends. How do you bootstrap some effective product marketing on your own?
I recommend a three-step process to yield the best return for your time:
1. Peruse online forums
2. Write content
3. Do in-person meetups
#### Online forums
Rummaging through forums—from general ones like HackerNews and Reddit to ones like Discourse or Slack channels geared to projects that are closely related to what you are building—is a great way to figure out what questions developers have in your space. Starting with this step is less about inserting your project into the discussion and more about gathering ideas on what you should focus on when putting together educational materials about your project.
Effectively, what you are doing is akin to "listening to your customer."
Let's be honest; you already spend a lot of time on these forums anyway. The only change is one of mindset, not behavior: Have more focus, jot ideas down actively, practice absorbing critiques (you may see threads critical of your project), and develop some intuition about what developers are thinking about.
This step assumes you don't already have an active community where developers are asking questions directly. The long-term goal is to build your own community, and good product marketing directly helps with this.
#### Write
Now that you have gathered some ideas, it's time to produce some content. Compared to formats like videos and podcasts, _writing_ is the highest-leveraged medium. It has the best long-tail benefits, is most suited for ongoing reference material, and can be most easily repackaged into other mediums. Another factor: open source has a global audience, many of whom might speak English as a second (third, or fourth) language, and written content is easily consumable at a person's own pace.
Focus your writing on three categories that answer three fundamental questions:
* What problem does your project solve? In other words: _Why should it exist?_
* How is the project architected, and why is it done that way? _Is this a technically well-designed solution that has potential, thus worth investing time in?_
* How do I get a taste of it? _How quickly can I get some value out of it?_ This is crucial to reducing your time-to-value metric to the shortest amount possible. For more on this topic, please read my article [_A framework for building products from open source projects_][2].
A smart way to begin is by writing three blog posts, each addressing one of the three points. The posts should be canonical to your specific project, so that repackaging them into different formats (slide decks, Quora answers, Twitter threads, podcast interviews, etc.) for different channels is straightforward.
After you publish the posts, work the materials into your GitHub, GitLab, Bitbucket, or other repository along with the project's documentation. This is important because your public repo will likely be the face of your project for a long time, even if you also have a dedicated website. A repo with strong educational content will go a long way in building your social proof in the form of stars, forks, and downloads and may even yield some contributions.
One note on writing: Be patient! Your words likely won't go viral overnight (unless you are a celebrity developer). But if the material is educational, useful, and accessible (no need for fancy language), it will draw attention to your project in time. You do your part, and let Google's SEO algorithm do its part.
#### In-person meetups
With a few posts out in the wild, the next step is to find an in-person meetup where you can give a presentation about your project using your writing as foundational material to build a compelling talk.
You may wonder: "Why? Isn't doing something in-person the biggest time suck? I'd rather code!"
True. You are not wrong. I recommend this step _specifically_ at this moment, not earlier or later, because you'll get feedback on your output more quickly than what the internet can give. Comments and feedback on your posts will trickle in, but giving a talk at a meetup, taking questions, and chatting with attendees afterward over pizza is valuable and immediate.
The goal is not to shamelessly pitch your project (reminder: you are an educator, not a marketer), but to listen for the kinds of questions you get when you put your project (and yourself) out there. Another benefit is that it gives you practice delivering presentations, which will become important as your project grows and you need to present in higher-stakes situations, including large conferences, demos with prospective users, etc.
I know this may not be practical if you don't live in a tech hub where meetups are aplenty. You may want to look for groups that are open to doing virtual meetups via video or work this into your existing travel plans. (But don't fly across the world to talk at one meetup.)
In-person meetups can feel scary. Public speaking is not for everyone, and it's a legitimate source of fear. My main tips: Just think of yourself as free entertainment, lower your expectations, don't overthink it, and offer yourself up to meetup organizers proactively because they will love you! Having been both a presenter and a meetup organizer, I know developer-focused meetups are very hungry for good technical education.
### Final words
There's a lot more nuance, strategy, and sheer work to effective product marketing, but I hope this post gives you enough guidance and specific action items to bootstrap it. Ultimately, you should still spend the bulk of your time building your technology. And if you have some revenue or funding, it's worth hiring someone who has deep expertise in product marketing, even as a part-time adviser.
Frankly, product marketing talent is hard to find. You need someone with both the technical chops and curiosity to learn about your project on a deep level and the communication skills to compellingly tell the world about it.
* * *
_This article originally appeared on [COSS Media][3] and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/product-marketing-open-source-project
Author: [Kevin Xu][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/kevin-xu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_team_community_group.png?itok=Nc_lTsUK (People meeting)
[2]: https://opensource.com/article/19/11/products-open-source-projects
[3]: https://coss.media/open-source-creator-product-marketing/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Background Story of AppImage [Interview])
[#]: via: (https://itsfoss.com/appimage-interview/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
The Background Story of AppImage [Interview]
======
As a Linux user, you might have come across [AppImages][1]. This is a portable packaging format that allows you to run an application on any Linux distribution.
[Using AppImage][2] is really simple. You just need to give it execute permission and double-click it to run, like the .exe files in Windows. This solves a major problem in Linux, as different kinds of distributions have different packaging formats. You cannot [install .deb files][3] (of Debian/Ubuntu) on [Fedora][4] and vice versa.
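For readers who prefer scripting those two steps, the sketch below is a minimal Python equivalent of what the file manager does. The file name is hypothetical; substitute whichever AppImage you actually downloaded.

```python
import os
import stat
import subprocess

# Hypothetical file name; use the AppImage you actually downloaded.
app = "./Inkscape.AppImage"

# Step 1: grant execute permission (the equivalent of `chmod +x`).
mode = os.stat(app).st_mode
os.chmod(app, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Step 2: run it, just as double-clicking in the file manager would.
subprocess.run([app])
```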
We talked to Simon, the developer of AppImage, about how and why he created this project. Read some of the interesting background story and insights Simon shares about AppImage.
### Interacting with Simon Peter, the creator of AppImage
![][5]
_**It's FOSS: Few people know about the person behind AppImage. How about sharing a little background information about yourself?**_
**Simon:** Hi, I'm Simon Peter, based near Frankfurt in Germany. My background is in Economics and Business Administration, but I've always been a tinkerer and hacker in my free time, and I have been working in tech ever since I graduated.
AppImage, though, is strictly a hobby which I enjoy working on in my spare time. I do a lot of my AppImage work while I'm on a train going from here to there. Somehow I seem to be on the move all the time. Professionally, I work in the product management of a large telecommunications company.
_**It's FOSS: Why did you create AppImage?**_
**Simon:** The first computer I could get my hands on was a [Macintosh][6] in the late 80s. For me, this is the benchmark when it comes to simplicity and usability. When I started to experiment with Linux on the desktop, I always wished it was as elegant and simple to operate and gave me as much flexibility as the early Macs.
When I tried Linux for the first time in the late 90s, I had to go through a cumbersome process: formatting and partitioning hard disks, installing stuff. It took a lot of time and was really tedious. A couple of years later, I tried out a Linux Live CD-ROM. It was a complete game changer. You popped in the CD, booted the computer, and everything just worked, right out of the box. No installation, no configuration. The system was always in a factory-new state whenever you rebooted the machine. Exactly how I liked it.
There was only one downside: You could not install additional applications on a read-only CD. Packages always insisted on writing to /usr, where the Live CD was not writeable. Thus, I asked myself: Why can't I just put applications wherever I want, like on a USB drive or a network share, as I was used to from the Mac? How cool would it be if every application were just one single file that I could put wherever I want? And thus the idea for AppImage was born (back then under the name of “klik”).
Turns out that over time Live systems have become more capable, but I still like the simplicity and freedom that comes with the “one app = one file” idea. For example, I want to be in control of where stuff resides on my hard disks. I want to decide what to update or not to update and when. For most tasks I need a stable, rarely-changing operating system with the latest applications. To this day all I ever run are Live systems, because the operating system “just works” out of the box without any installation or configuration on my side, and every time I reboot the machine I have a “factory new”, known-good state.
_**It's FOSS: What challenges did you face in the past and what challenges are you facing right now?**_
**Simon:** People told me that the idea was nuts, and that I had no clue how “things are done on Linux”. Just about when I was beginning to give in, I came across a video of [Linus Torvalds][7], of all people, who I noticed was complaining about many of the same things that I had always felt were too complicated when it came to distributing applications for Linux. While I was watching his rant, I also noticed: hey, AppImage actually solves many of those issues. Some time later, Linus came across AppImage, and he apparently liked the idea. That made me think, maybe it's not as stupid an idea as people had made me believe up to that point.
Today, people tend to mention AppImage as “one of the new package formats” together with [Snap][8] and [Flatpak][9]. I think that's comparing apples to oranges. Not only is AppImage not “new” (it's been around for well over a decade now), but it also has very different objectives and design principles than the other systems. AppImage is all about single-file application bundles that can be “managed” by nothing more than a web browser and a file manager. It's meant for “mere mortals”, end users, not system administrators. It needs no package manager, it needs no root rights, it needs nothing to be installed on the system. It gives complete freedom to application developers and users.
_**It's FOSS: AppImage is a “universal packaging system” and there you compete with Snap (backed by Ubuntu) and Flatpak (backed by Fedora). How do you plan to fight against these big corporates?**_
**Simon:** See? That's what I mean. AppImage plays on an entirely different playing field.
AppImage wants to be what exe files or PortableApps are for Windows, and what apps inside dmg files are on the Mac, but better.
Besides, Snap (backed by [Canonical][10]) does not work out-of-the-box on Fedora, and Flatpak (backed by [Red Hat][11]) does not work out-of-the-box on Ubuntu. AppImages can run on either system, and many more, without the need to install anything.
_**It's FOSS: How do you see the adoption of AppImage? Are you happy with its growth?**_
**Simon:** As of early 2020, there are now around 1,000 official AppImages made by the respective application authors that are passing my compatibility tests and can run on the oldest still-supported Ubuntu LTS release, and hundreds more are being worked on as we speak. “Household name” applications like Inkscape, Kdenlive, KDevelop, LibreOffice, PrusaSlicer, Scribus, Slic3r, Ultimaker Cura (too many to name them all) are being distributed in AppImage format. This makes me very happy and I am always excited when I read about a new version being released on Twitter, and then am able to download and run the AppImage instantly, without having to wait for my Linux distribution to carry that new version, and without having to throw away the old (known-good) version just because I want to try out the new (bleeding edge) one.
The adoption of AppImage is especially strong for nightly and continuous builds. This is because the “one app = one file” concept of AppImage lends itself especially well to try-out software, where you keep multiple versions around for testing purposes, and never have to install anything into the running system. Worst thing that can happen with AppImage is that an application does not launch. In that case, file a bug, delete the file, done. Worst thing that can happen with distribution packages: complete system breakage…
_**It's FOSS: One major issue with AppImage is that not all the developers provide an easy way of updating the AppImage versions. Any suggestions for handling it?**_
**Simon:** AppImage has this concept of “binary delta updates”. Think of it as “diff for applications”. A new version of an application comes out, you download only the parts that have changed, and apply them to the old version. As a result, you get both the old and the new version and can keep them in parallel until you have determined that you don't need the old version any longer, and throw it away.
In general, I don't want to enforce anything with AppImage. Application authors are at liberty to control the whole experience. Up to now, application authors have had to do some setup work to make AppImages with this update capability. That being said, I am convinced that if we make it easy enough for developers to get working binary delta updates “for free”, then many will offer them. To this end, I am currently working on a new set of tools written in Go that will set up updates almost automatically, and I hope this will significantly increase the percentage of AppImages that come with this capability.
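The real tooling builds on zsync-style rolling checksums, but the core idea behind a binary delta update can be sketched in a few lines: hash the file block by block, reuse every block that is unchanged locally, and download only the blocks that differ. The fixed block size and the `fetch_block` helper below are illustrative assumptions, not AppImage's actual implementation.

```python
import hashlib

BLOCK = 4096  # illustrative block size; real tools use rolling checksums

def block_hashes(data):
    """SHA-256 of each fixed-size block of a file."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta_update(old, new_hashes, fetch_block):
    """Rebuild the new file, downloading only the blocks that changed."""
    local = {h: old[i * BLOCK:(i + 1) * BLOCK]
             for i, h in enumerate(block_hashes(old))}
    parts = []
    for i, h in enumerate(new_hashes):
        if h in local:                 # unchanged block: reuse local bytes
            parts.append(local[h])
        else:                          # changed block: fetch just this piece
            parts.append(fetch_block(i))
    return b"".join(parts)
```

Because the old file stays untouched while the new one is assembled, the old and new versions can coexist, which is exactly the behavior Simon describes.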
_**It's FOSS: [Nitrux][12] is one of the rare distributions that relies heavily on AppImage. Are there any other such distributions? What can be done to make AppImage more popular?**_
**Simon:** Linux distributions have traditionally thought of themselves as more than just the base operating system itself; they also wanted to control application distribution. Now, as Apple and Microsoft are trying to get more control over application distribution on their desktop platforms, the trend is slowly reversing in Linux land, where people are slowly beginning to understand that distributions could be much more polished if they focused on the base operating system and left the packaging of applications to the application authors.
To make AppImage more popular, I think users and application authors should continue to spread the word that upstream-provided AppImages are in many cases working better than distribution packages. With AppImage, you get a software stack where the application author had a chance to cherry-pick which versions of libraries work together, test and tune both functionality and performance. Who is surprised that the result tends to work better than a “random” combination of whatever versions happened to be in a Linux distribution at a certain, random point in time when a distribution release was put together?
[Desktop environments][13] could greatly increase usability, not only for AppImages, but also for any other kind of “side-loaded” applications that are not being installed. Just see how a desktop environment handles double-clicking on an executable file that is missing the executable bit. Some are doing a great job in this regard, like [Deepin Linux][14]. Stuff tends to “just work” there, as it should.
Finally, I am currently working on a new set of tools written in Go which I hope will greatly simplify, and make yet more enjoyable, the production and consumption of AppImages. My goal here is to make things less complex for users, remove the need for configuration, and make things “just work”, like on the early Macintoshes. Are there any Go developers out there interested in joining the effort?
_**It's FOSS: I can see there is a website that lists available AppImage applications. Do you have plans to integrate it with other software managers on Linux or create a software manager for AppImage?**_
**Simon:** [appimage.github.io][15] lists AppImages that have passed my compatibility tests on the oldest still-supported Ubuntu LTS release. Projects creating app stores or software managers are free to use this data. Myself, I am not much interested in those things as I always download AppImages right from the respective projects' download pages. My typical AppImage discovery goes like this:
1. Read on Twitter that PrusaSlicer has this cool new feature
2. Go to the PrusaSlicer GitHub project and read the release notes there
3. While there, download the AppImage and have it running a few seconds later
So for me personally, I have no need for app centers and app stores, but if people like them, they are free to put AppImages in there. I just never felt the need…
_**It's FOSS: What plans do you have for AppImage in the future (new features that you plan to add)?**_
**Simon:** Simplify things even more, remove configuration options, make things “just work”. Reduce the number of GitHub projects needed to get the core AppImage experience for producing and consuming AppImages, including aspects like binary delta updates, sandboxing, etc. Improve usability.
_**It's FOSS: Does the AppImage project make money? What kind of support (if any) do you seek from end users?**_
**Simon:** No, AppImage makes no money whatsoever.
I'll just ask the readers to spread the word. Tell your favorite applications' authors that you'd like to see an AppImage, and why.
* * *
Team It's FOSS congratulates Simon on his hard work. Please feel free to convey any messages and queries to him in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/appimage-interview/
Author: [Abhishek Prakash][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://appimage.org/
[2]: https://itsfoss.com/use-appimage-linux/
[3]: https://itsfoss.com/install-deb-files-ubuntu/
[4]: https://getfedora.org/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/appimage_simon_interview.jpg?ssl=1
[6]: https://en.wikipedia.org/wiki/Macintosh
[7]: https://itsfoss.com/linus-torvalds-facts/
[8]: https://itsfoss.com/install-snap-linux/
[9]: https://flatpak.org/
[10]: https://canonical.com/
[11]: https://www.redhat.com/en
[12]: https://itsfoss.com/nitrux-linux-overview/
[13]: https://itsfoss.com/best-linux-desktop-environments/
[14]: https://www.deepin.org/en/
[15]: https://appimage.github.io/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Minicomputers and The Soul of a New Machine)
[#]: via: (https://opensource.com/article/20/2/minicomputers-and-soul-new-machine)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
Minicomputers and The Soul of a New Machine
======
The new season of Command Line Heroes begins with a story of increases
in memory, company politics, and a forgotten technology at the heart of
our computing history.
![Command Line Heroes season 4 episode 1 covers the rise of minicomputers][1]
The [Command Line Heroes podcast][2] is back, and this season it covers the machines that run all the programming languages [I covered last season][3]. As the podcast staff puts it:
"This season, we'll look at what happens when idealistic teams come together to build visionary machines. Machines made with leaps of faith and a lot of hard, often unrecognized, work in basements and stifling cubicles. Machines that brought teams together and changed us as a society in ways we could only dream of."
This first episode looks at the non-fiction book (and engineering classic) [_The Soul of a New Machine_][4] to examine a critical moment in computing history. It covers the transition from large, hulking mainframes to the intermediate step of the minicomputer, which would eventually lead us to the PC revolution that we're still living in the wake of.
### The rise of minicomputers
One of the most important machines on the path to modern computing is one most of us have since forgotten: the minicomputer.
It was a crucial link in the evolution from mainframe to PC (aka microcomputer). It was also extremely important in the development of software that would fuel the PC revolution, chiefly the operating system. The PDP-7 and PDP-11—on which [UNIX was developed][5]—were examples of minicomputers. So was the machine at the heart of _The Soul of a New Machine_.
This episode takes us back to this important time in computing and explores this forgotten machine—both in terms of its hardware and software.
From 1963 to 1977, minicomputers were 12- to 16-bit machines from computing giant DEC ([PDP][6]) and rival upstart [Data General][7] ([Nova][8], [Eclipse][9]). But in October 1977, DEC unveiled the VAX 11/780, a 32-bit CPU built from transistor-transistor logic with a 5 MHz clock and 2 megabytes of memory. The VAX launched DEC [into second place][10] among the largest computer companies in the world.
The jump from a 12-bit to a 32-bit CPU is a jump from 4,096 bytes to 4,294,967,296 bytes of directly addressable memory. That massively increased the potential for software to do complex tasks while drastically shrinking the size of the computer. And with a 32-bit CPU, the VAX was nearly as powerful as an IBM/360 mainframe—but much smaller and much, much less expensive.
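The arithmetic behind those figures is just powers of two: an n-bit address can name 2^n distinct bytes, which a few lines of Python confirm.

```python
# Directly addressable bytes for an n-bit address: 2**n.
for bits in (12, 16, 32):
    print(f"{bits}-bit: {2**bits:,} bytes")

# 12-bit: 4,096 bytes
# 16-bit: 65,536 bytes
# 32-bit: 4,294,967,296 bytes
```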
[The episode][11] goes into the drama that unfolds as teams within Data General race to have the most marketable minicomputer while working through company politics and strong personalities.
### Revisiting _The Soul of a New Machine_
_The Soul of a New Machine_ was written in 1981 by Tracy Kidder, and it chronicles a small group of engineers at the now-defunct tech company Data General as they attempt to compete with a rival internal group and create a 32-bit minicomputer in a skunkworks project known as "Eagle." For those okay with spoilers, the computer would eventually be known as the [Eclipse MV/8000][12].
Earlier this year, [Jessie Frazelle][13], of Docker, Go, and Kubernetes fame, and [Bryan Cantrill][14], known for [DTrace][15], Joyent, and many other technologies, publicly wrote about reading the non-fiction classic. As the story goes, Cantrill mentioned the book to Frazelle, who read it and then wrote an enthusiastic [blog post][16] about it. As Frazelle put it:
"Personally, I look back on the golden age of computers as the time when people were building the first personal computers in their garage. There is a certain whimsy of that time fueled with a mix of hard work and passion for building something crazy with a very small team. In today's age, at large companies, most engineers take jobs where they work on one teeny aspect of a machine or website or app. Sometimes they are not even aware of the larger goal or vision but just their own little world.
In the book, a small team built an entire machine… The team wasn't driven by power or greed, but by accomplishment and self-fulfillment. They put a part of themselves in the machine, therefore, producing a machine with a soul…The team was made up of programmers with the utmost expertise and experience and also with new programmers."
Inspired by Frazelle's reaction, Cantrill re-read it and wrote [a blog article][17] about it and writes this beautiful note:
"…_The Soul of a New Machine_ serves to remind us that the soul of what we build is, above all, shared — that we do not endeavor alone but rather with a group of like-minded individuals."
Frazelle's and Cantrill's readings of the book, and their blog posts, [sparked a wave of people][18] exploring and talking about this text. While the book remains on my reading list, this dialogue-by-book-review is at the heart of CLH season 4 as it explores the entire machine.
### Why did the minicomputer go the way of the Neanderthal?
As we all know, minicomputers are not a popular purchase in today's technology market. Minicomputers ended up being great technology for timesharing. The irony is that they unwittingly sealed their own fate. The Internet, which started off as ARPANET, was basically a new kind of timesharing. They were so good at timesharing that at one point the DEC PDP-11 accounted for over 30% of the nodes on ARPANET. Minicomputers were powering their own demise.*
Minicomputers paved the way for smaller computers and for more and more people to have access to these powerful, society-changing machines. But I'm getting ahead of myself. Keep listening to the [new season of Command Line Heroes][2] to continue the story of machines in computing history.
* * *
What's your minicomputer story? I'd love to read about it in the comments.
*(There were, of course, other factors leading to the end of this era. Minicomputers were fighting at the low end of the market with the rise of microcomputers, while Unix systems continued to push into the midrange market. The rise of the Internet was perhaps its final blow.)
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/minicomputers-and-soul-new-machine
Author: [Matthew Broberg][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command-line-heroes-minicomputers-s4-e1.png?itok=FRaff5i6 (Command Line Heroes season 4 episode 1 covers the rise of minicomputers)
[2]: https://www.redhat.com/en/command-line-heroes
[3]: https://opensource.com/article/19/6/command-line-heroes-python
[4]: https://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine
[5]: https://opensource.com/19/9/command-line-heroes-bash
[6]: https://en.wikipedia.org/wiki/PDP
[7]: https://en.wikipedia.org/wiki/Data_General
[8]: https://en.wikipedia.org/wiki/Data_General_Nova
[9]: https://en.wikipedia.org/wiki/Data_General_Eclipse
[10]: http://www.old-computers.com/history/detail.asp?n=20&t=3
[11]: https://www.redhat.com/en/command-line-heroes/season-4/minicomputers
[12]: https://en.wikipedia.org/wiki/Data_General_Eclipse_MV/8000
[13]: https://twitter.com/jessfraz?lang=en
[14]: https://en.wikipedia.org/wiki/Bryan_Cantrill
[15]: https://en.wikipedia.org/wiki/DTrace
[16]: https://blog.jessfraz.com/post/new-golden-age-of-building-with-soul/
[17]: http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-soul-of-a-new-machine/
[18]: https://twitter.com/search?q=jessfraz%20soul%20new%20machine&src=typed_query&f=live

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a community of practice in 5 steps)
[#]: via: (https://opensource.com/article/20/2/building-community-practice-5-steps)
[#]: author: (Tracy Buckner https://opensource.com/users/tracyb)
Building a community of practice in 5 steps
======
A community of practice can kickstart innovation in your organization.
Here's how to build one—and ensure it thrives.
![Blocks for building][1]
In the [first part of this series][2], we defined community as a fundamental principle in open organizations, where people often define their roles, responsibilities, and affiliations through shared interests and passions, [not title, role, or position on an organizational chart][3]. Then, in the [second part of the series][4], we explored the many benefits communities of practice bring to open organizations—including fostering learning, encouraging collaboration, and offering an opportunity for creative problem-solving and innovation.
Now you know you'd like to _start_ a community of practice, but you may still be unsure _where_ to start. This article will help define your roadmap and build a plan for a successful community of practice—in five simple steps (summarized in Figure 1).
![][5]
### Step 1: Obtain executive sponsorship
While having a community manager focused on the day-to-day execution of community matters is important, an executive sponsor is also integral to the success of the community of practice. Typically, an executive sponsor will shoulder higher-level responsibilities, such as focusing on strategy and creating conditions for success (rather than implementation).
An executive sponsor can help ensure the community's goals are aligned with the overall strategy of the organization. This person can also communicate those goals and gather support for the community from other senior executives (potentially instrumental in securing financial support and resources for the community!).
Finding the right sponsor is important for the success of the program. An executive leader committed to fostering open culture, transparency, and collaboration will be very successful. Alternatively, you may wish to tap an executive focused on finding new ways to grow and reskill high-potential employees.
### Step 2: Determine mission and goals
Once you've established a vision for the community, you'll need to develop its mission statement. This is critical to your success because the mission begins explaining _how you'll achieve that vision_. Primarily, your community's mission should be to share knowledge, promote learning in a particular area, and align that work with organizational strategy. However, the mission statement may also include references to the audience that the community will serve.
Here's one example mission statement:
> _To identify and address needs within the cloud infrastructure space in support of the organizations mission of defining the next generation of open hybrid cloud._
After articulating a mission like this, you'll need to set specific goals for achieving it. The goals can be long- or short-term, but in either case, you'll need to provide a clear roadmap explaining to community members what the community is trying to achieve.
### Step 3: Build a core team
Building a core team is essential to the success of a community. In a typical community of practice—or "CoP," for short—you'll notice four main roles:
* CoP program manager
* CoP manager
* Core team members
* Members
The **CoP program manager** is the face of the community. This person is primarily responsible for supporting the managers and core teams by resolving questions, issues, and concerns. The program manager also guides new communities and evangelizes the communities of practice program inside the organization.
The **CoP manager** determines community strategy based on business and community needs. This person makes the latest news, content, and events available to community members and ensures that the CoP remains focused on its goals. This person also schedules regular meetings for members and shares other events that may be of interest to them.
The **CoP core team** is responsible for managing community collateral and best practices to meet the community's goals. The core team supports CoP manager(s) and assists with preparing and leading community meetings.
**Members** of a community attend meetings, share relevant content and best practices, and support the core team and manager(s) in reaching community goals.
### Step 4: Promote knowledge management
Communities of practice produce information—and members must be able to easily access and share that information. So it's important to develop a knowledge-management system for storing that information in a way that keeps it relevant and timely.
Over time, your community of practice will likely generate a lot of content. Some of that content may be duplicated, outdated, or simply no longer relevant to the community. So it's important to periodically conduct a ROT analysis to validate that the content is not **R**edundant, **O**utdated, or **T**rivial. Consider conducting a ROT analysis every six months or so in order to keep the content fresh and relevant.
A number of different content management tools can assist with maintaining and displaying the content for community members. Some organizations use an intranet, while others prefer a more robust content management system such as [AODocs][6] or [Drupal][7].
### Step 5: Engage in regular communication
The secret to success in maintaining a community of practice is regular communication and collaboration. Communities that speak with each other frequently and share knowledge, ideas, and best practices are most likely to remain intact. CoP managers should schedule regular meetings, meet-ups, and content creation sessions to ensure that members are engaged in the community. It is recommended to have at least a monthly meeting to maintain communication with the community members.
Chat/messaging apps are also a great tool for facilitating regular communication in communities of practice. These apps offer teams across the globe the ability to communicate in real-time, removing some collaboration boundaries. Members can pose questions and receive answers immediately, without the delay of sending and receiving emails. And should the questions arise again, most messaging apps also provide an easy search mechanism that can help members discover answers.
### Building your community
Remember: A community of practice is a cost-effective way to foster learning, encourage collaboration, and promote innovation in an organization. In [_The Open Organization_][8], Jim Whitehurst argues that "the beauty of an open organization is that it is not about pedaling harder, but about tapping into new sources of power both inside and outside to keep pace with all the fast-moving changes in your environment." Building communities of practice is the perfect way to do just that: to stop pedaling harder and tap into new sources of power within your organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/building-community-practice-5-steps
Author: [Tracy Buckner][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/tracyb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire (Blocks for building)
[2]: https://opensource.com/open-organization/19/11/what-is-community-practice
[3]: https://opensource.com/open-organization/resources/open-org-definition
[4]: https://opensource.com/open-organization/20/1/why-build-community-of-practice
[5]: https://opensource.com/sites/default/files/resize/images/open-org/comm_practice_5_steps-700x440.png
[6]: https://www.aodocs.com/
[7]: https://www.drupal.org/
[8]: https://opensource.com/open-organization/resources/what-open-organization

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A $399 device that translates brain signals into digital commands)
[#]: via: (https://www.networkworld.com/article/3526446/nextmind-wearable-device-translates-brain-signals-into-digital-commands.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
A $399 device that translates brain signals into digital commands
======
Startup NextMind is readying a $399 development kit for its brain-computer interface technology that enables users to interact, hands-free, with computers and VR/AR headsets.
Scientists have long envisioned brain-sensing technology that can translate thoughts into digital commands, eliminating the need for computer-input devices like a keyboard and mouse. One company is preparing to ship its latest contribution to the effort: a $399 development package for a noninvasive, AI-based, brain-computer interface.
The kit will let "users control anything in their digital world by using just their thoughts," claims [NextMind][1], a commercial spinoff of a cognitive neuroscience lab, in a [press release][2].
The company says that its puck-like device inserts into a cap or headband and rests on the back of the head. The dry electrode-based receiver then grabs data from the electrical signals generated through neuron activity. It uses machine learning algorithms to convert that signal output into computer controls. The interaction could be with a computer, artificial-reality or virtual-reality headset, or [IoT][4] module.
"Imagine taking your phone to send a text message without ever touching the screen, without using Siri, just by using the speed and power of your thoughts," said NextMind founder Sid Kouider in a [video presentation][5] at Helsinki startup conference Slush in late 2019.
Advances in neuroscience are enabling real-time consciousness-decoding, without surgery or a doctor visit, according to Kouider.
One obstacle that has thwarted previous efforts is the human skull, which can act as a barrier to sensors. It's been difficult for scientists to differentiate indicators from noise, and some past efforts have only been able to discern basic things, such as whether or not a person is in a state of sleep or relaxation. New materials, better sensors, and more sophisticated algorithms and modeling have overcome some of those limitations. NextMind's noninvasive technology "translates the data in real time," Kouider says.
Essentially, what happens is that a person's eyes project an image of what they see onto the visual cortex in the back of the head, a bit like a projector. The NextMind device decodes the neural activity created as the object is viewed and sends that information, via an SDK, back as an input to a computer. So, by fixing one's gaze on an object, one selects that object. For example, a user could select a screen icon by glancing at it.
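NextMind has not published its decoder, but a common laboratory technique for this kind of gaze-based selection is the steady-state visual evoked potential (SSVEP): each on-screen target flickers at its own frequency, and the flicker of whichever target the user fixates dominates the signal recorded over the visual cortex. The sketch below is purely illustrative, with assumed sampling rate, target frequencies, and noise level.

```python
import numpy as np

FS = 250                                   # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / FS)                # two seconds of signal
targets = {"icon_a": 8.0, "icon_b": 11.0, "icon_c": 14.0}  # flicker rates, Hz

def decode(eeg):
    """Pick the target whose flicker frequency best matches the signal."""
    scores = {name: abs(np.corrcoef(eeg, np.sin(2 * np.pi * f * t))[0, 1])
              for name, f in targets.items()}
    return max(scores, key=scores.get)

# Simulate a user staring at icon_b: an 11 Hz response buried in noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 11.0 * t) + rng.normal(0, 1.0, t.size)
print(decode(eeg))  # -> icon_b
```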
"The demos were by no means perfect, but there was no doubt in my mind that the technology worked," [wrote VentureBeat writer Emil Protalinski][7], who tested a pre-release device in January.
Kouider has stated it's the "intent" aspect of the technology that's most interesting; if a person focuses on one thing more than something else, the technology can decode the neural signals to capture that user's intent.
"It really gives you a kind of sixth sense, where you can feel your brain in action, thanks to the feedback loop between your brain and a display," Kouider says in the Slush presentation.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3526446/nextmind-wearable-device-translates-brain-signals-into-digital-commands.html
Author: [Patrick Nelson][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.next-mind.com/
[2]: https://www.businesswire.com/news/home/20200105005107/en/CES-2020-It%E2%80%99s-Mind-Matter
[4]: http://www.networkworld.com/cms/article/3207535
[5]: https://youtu.be/RHuaNDSxH0o
[7]: https://venturebeat.com/2020/01/05/nextmind-is-building-a-real-time-brain-computer-interface-unveils-dev-kit-for-399/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora at the Czech National Library of Technology)
[#]: via: (https://fedoramagazine.org/fedora-at-the-national-library-of-technology/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Fedora at the Czech National Library of Technology
======
![][1]
Where do you turn when you have a fleet of public workstations to manage? If you're the Czech [National Library of Technology][2] (NTK), you turn to Fedora. Located in Prague, the NTK is the Czech Republic's largest science and technology library. As part of its public service mission, the NTK provides 150 workstations for public use.
In 2018, the NTK moved these workstations from Microsoft Windows to Fedora. In the [press release][3] announcing this change, Director Martin Svoboda said switching to Fedora will “reduce operating system support costs by about two-thirds.” The choice to use Fedora was easy, according to NTK Linux Engineer Miroslav Brabenec. “Our entire Linux infrastructure runs on RHEL or CentOS. So for desktop systems, Fedora was the obvious choice,” he told Fedora Magazine.
### User reception
Changing an operating system is always a little bit risky—it requires user training and outreach. Brabenec said that non-IT staff asked for training on the new system. Once they learned that the same (or compatible) software was available, they were fine.
The Library's customers were on board right away. The Windows environment was based on thin client terminals, which were slow for intensive tasks like video playback and handling large office suite files. The only end-user education that the NTK needed to create was a [basic usage guide][4] and a desktop wallpaper that pointed to important UI elements.
![User guidance desktop wallpaper from the National Technology Library.][5]
Although Fedora provides development tools used by the Faculty of Information Technology at the Czech Technical University—and many of the NTK's workstation users are CTU students—most of the application usage is what you might expect of a general-purpose workstation. Firefox dominates the application usage, followed by the Evince PDF viewer and the LibreOffice suite.
### Updates
NTK first deployed the workstations with Fedora 28. They decided to skip Fedora 29 and upgraded to Fedora 30 in early June 2019. The process was simple, according to Brabenec. “We prepared configuration, put it into Ansible. Via AWX I restarted all systems to netboot, image with kickstart, after first boot called provisioning callback on AWX, everything automatically set up via Ansible.”
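The NTK's playbooks are not public, but the provisioning-callback pattern Brabenec describes is standard AWX usage. As a rough illustration, kicking off a reprovisioning job through AWX's REST API looks something like the sketch below; the host, token, template ID, and inventory group are all hypothetical.

```python
import requests

AWX = "https://awx.example.org"     # hypothetical AWX host
TOKEN = "..."                       # an AWX application token
TEMPLATE_ID = 42                    # job template that reimages workstations

# Launch the job template, limited to the public-workstation group.
resp = requests.post(
    f"{AWX}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"limit": "public-workstations"},
)
resp.raise_for_status()
print("Launched job", resp.json().get("job"))
```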
Initially, they had difficulties applying updates. Now they have a process for installing security updates daily. Each system is rebooted approximately every two weeks to make sure all of the updates get applied.
Although he isn't aware of any concrete plans for the future, Brabenec expects the NTK to continue using Fedora for public workstations. “Everyone is happy with it and I think that no one has a good reason to change it.”
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-at-the-national-library-of-technology/
Author: [Ben Cotton][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/02/czech-techlib-816x345.png
[2]: https://www.techlib.cz/en/
[3]: https://www.techlib.cz/default/files/download/id/86431/tiskova-zprava-z-31-7-2018.pdf
[4]: https://www.techlib.cz/en/82993-public-computers
[5]: https://fedoramagazine.org/wp-content/uploads/2020/02/ntk-wallpaper-1024x576.jpeg

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Google Cloud moves to aid mainframe migration)
[#]: via: (https://www.networkworld.com/article/3528451/google-cloud-moves-to-aid-mainframe-migration.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Google Cloud moves to aid mainframe migration
======
Google bought Cornerstone Technology, whose technology facilitates moving mainframe applications to the cloud.
Google Cloud this week bought mainframe cloud-migration firm Cornerstone Technology, with an eye toward helping Big Iron customers move workloads to private and public clouds.
Google said the Cornerstone technology found in its [G4 platform][1] will form the foundation of its future mainframe-to-Google Cloud offerings and help mainframe customers modernize applications and infrastructure.
“Through the use of automated processes, Cornerstone's tools can break down your Cobol, PL/1, or Assembler programs into services and then make them cloud native, such as within a managed, containerized environment,” wrote Howard Weale, Google's director of Transformation Practice, in a [blog][3] about the buy.
“As the industry increasingly builds applications as a set of services, many customers want to break their mainframe monolith programs into either Java monoliths or Java microservices,” Weale stated. 
Google Cloud's Cornerstone service will:
* Develop a migration roadmap where Google will assess a customer's mainframe environment and create a roadmap to a modern services architecture.
* Convert any language to any other language and any database to any other database to prepare applications for modern environments.
* Automate the migration of workloads to the Google Cloud.
“Easy mainframe migration will go a long way as Google attracts large enterprises to its cloud,” Matt Eastwood, senior vice president of Enterprise Infrastructure, Cloud, Developers and Alliances at IDC, wrote in a statement.
The Cornerstone move is also part of Google's effort to stay competitive in the face of mainframe-migration offerings from [Amazon Web Services][4], [IBM/Red Hat][5] and [Microsoft][6].
While moving legacy applications off the mainframe might indeed be beneficial to a business, Gartner last year warned that such decisions should be made very deliberately.
“The value gained by moving applications from the traditional enterprise platform onto the next bright, shiny thing rarely provides an improvement in the business process or the company's bottom line. A great deal of analysis must be performed and each cost accounted for,” Gartner stated in a report entitled [_Considering Leaving Legacy IBM Platforms? Beware, as Cost Savings May Disappoint, While Risking Quality_][7]. “Legacy platforms may seem old, outdated and due for replacement. Yet IBM and other vendors are continually integrating open-source tools to appeal to more developers while updating the hardware. Application leaders should reassess the capabilities and quality of these platforms before leaving them.”
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3528451/google-cloud-moves-to-aid-mainframe-migration.html
Author: [Michael Cooney][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.cornerstone.nl/solutions/modernization
[3]: https://cloud.google.com/blog/topics/inside-google-cloud/helping-customers-migrate-their-mainframe-workloads-to-google-cloud
[4]: https://aws.amazon.com/blogs/enterprise-strategy/yes-you-should-modernize-your-mainframe-with-the-cloud/
[5]: https://www.networkworld.com/article/3438542/ibm-z15-mainframe-amps-up-cloud-security-features.html
[6]: https://azure.microsoft.com/en-us/migration/mainframe/
[7]: https://www.gartner.com/doc/reprints?id=1-6L80XQJ&ct=190429&st=sb

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Japanese firm announces potential 80TB hard drives)
[#]: via: (https://www.networkworld.com/article/3528211/japanese-firm-announces-potential-80tb-hard-drives.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Japanese firm announces potential 80TB hard drives
======
Using some very fancy physics for stacking electrons, Showa Denko K.K. plans to quadruple the top end of proposed capacity.
Hard drive makers are staving off obsolescence in the face of solid-state drives (SSDs) by offering capacities that are simply not feasible in an SSD. Seagate and Western Digital are both pushing to release 20TB hard disks in the next few years. A 20TB SSD might be doable, but it would also cost more than a new car.
But Showa Denko K.K. of Japan has gone a step further with the announcement of its next generation of heat-assisted magnetic recording (HAMR) media for hard drives. The platters use all-new magnetic thin films to maximize their data density, with the goal of eventually enabling 70TB to 80TB hard drives in a 3.5-inch form factor.
Showa Denko is the world's largest independent maker of platters for hard drives, selling them to basically anyone left making hard drives not named Seagate or Western Digital. Those two make their own platters and are working on their own next-generation drives for release in the coming years.
Though their approaches are similar in concept, Seagate and Western Digital have chosen different solutions to the same problem. HAMR, championed by Seagate and Showa, works by temporarily heating the disk material during the write process so data can be written to a much smaller space, thus increasing capacity.
Western Digital supports a different technology called microwave-assisted magnetic recording (MAMR). It operates on a concept similar to HAMR but uses microwaves instead of heat to alter the drive platter. Seagate hopes to get to 48TB by 2023, while Western Digital is planning on releasing 18TB and 20TB drives this year.
Heat is never good for a piece of electrical equipment, and Showa Denko's platters for HAMR HDDs are made of a special composite alloy to tolerate temperature and reduce wear, not to mention increase density. A standard hard disk has a density of about 1.1TB per square inch. Showa's drive platters have a density of 5-6TB per square inch.
The question is when they will be for sale, and who will use them. Fellow Japanese electronics giant Toshiba is expected to ship drives with Showa platters later this year. Seagate will be the first American company to adopt HAMR, with 20TB drives scheduled to ship in late 2020.
Know what's scary? That still may not be enough. IDC predicts that our global datasphere, the total of all of the digital data we create, consume, or capture, will grow from approximately 40 zettabytes of data in 2019 to 175 zettabytes by 2025.
So even with the growth in hard-drive density, the growth in the global data pool, everything from Oracle databases to Instagram photos, may still mean deploying thousands upon thousands of hard drives across data centers.
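Only a fraction of the datasphere is ever written to disk, but the scale is easy to underestimate. A back-of-the-envelope calculation shows what storing all 175 zettabytes on 20TB drives would take:

```python
# Back-of-the-envelope: how many 20 TB drives would hold 175 ZB?
ZB = 10**21
TB = 10**12

print(f"{(175 * ZB) / (20 * TB):,.0f} drives")  # 8,750,000,000 drives
```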
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3528211/japanese-firm-announces-potential-80tb-hard-drives.html
Author: [Andy Patrizio][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Discussing Past, Present and Future of FreeBSD Project)
[#]: via: (https://itsfoss.com/freebsd-interview-deb-goodkin/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Discussing Past, Present and Future of FreeBSD Project
======
[FreeBSD][1] is one of the most popular BSD distributions. It has been used on desktops, servers, and embedded devices for more than two decades.
We talked to Deb Goodkin, executive director of the [FreeBSD Foundation][2], and discussed the past, present, and future of the FreeBSD project.
![][3]
**It's FOSS: FreeBSD has been on the scene for more than 25 years. How do you see the journey of FreeBSD?**
Over the years, we've seen a lot of innovation happening on and with FreeBSD. When the Foundation came into play 20 years ago, we were able to step in and help accelerate changes in the operating system. Over the years, we've increased our marketing support, to provide more advocacy and educational material, and to increase the awareness and use of FreeBSD.
In addition, we've increased our staff of software developers to allow us to quickly step in to fix bugs, review patches, implement workarounds to hardware issues, and implement new features and functionality. We have also increased the number of development projects we are funding to improve various areas of FreeBSD.
The history of stability and reliability, along with all the improvements and growth of FreeBSD, is making it a compelling choice for companies, universities, and individuals.
**It's FOSS: We know that Netflix uses FreeBSD extensively. What other companies or groups rely on FreeBSD? How do they contribute to BSD/FreeBSD (if they do at all)?**
Sony's PlayStation 4 uses a modified version of FreeBSD as its operating system; Apple uses it in macOS and iOS; NetApp in its ONTAP product; Juniper Networks in [JunOS][4]; Trivago in its backend infrastructure; the University of Cambridge in security research, including the Capability Hardware Enhanced RISC Instructions (CHERI) project; the University of Notre Dame in its Engineering Department; Groupon in its datacenter; the LA Times in its data center; as well as other notable companies like Panasonic and Nintendo.
I listed a variety of organizations to highlight the different FreeBSD use cases. Companies like [Netflix support FreeBSD][5] by supporting the Project financially as well as by upstreaming their code. Some of the companies, like Sony, take advantage of the BSD license and don't give back at all.
![Deb Goodkin And Friend Promoting FreeBSD At Oscon][6]
**It's FOSS: Linux is ruling the server and cloud computing markets. It seems that BSD is lagging in that field?**
I wouldn't characterize it as lagging, per se. Linux distributions do have a much higher market share than FreeBSD, but those two markets are where our strength lies. FreeBSD does extremely well there because it provides a consistent and reliable foundation and tends to just work. Because FreeBSD is known for long-term API stability, users can integrate once and upgrade on their own terms as both FreeBSD and their product evolve.
**It's FOSS: Do you see the emergence of Linux as a threat to BSD?**
Sure, [there are so many Linux distributions][7] already, and most of them are supported by for-profit companies. In fact, companies like Intel have many Linux developers on staff, so Linux is easily supported on their hardware.
However, thanks to continuing education efforts, and as our market share continues to grow, more developers will be available to support companies' various FreeBSD use cases.
**It's FOSS: Let's talk about the desktop. Recently, the devs of Project Trident announced that they were moving away from FreeBSD as a base. They said that they made this decision because FreeBSD is slow to review updates and support for new hardware. For example, the most recent version of Telegram on FreeBSD is 9 releases behind the version available on Linux. How would you respond to their comments?**
There are quite a few FreeBSD distros for the desktop, with various focuses. The latest is [FuryBSD][8], which was coincidentally started by iXsystems employees but is independent of iXsystems, just as Project Trident is. In addition to FuryBSD, you may want to check out [NomadBSD][9] and [MidnightBSD][10].
Regarding support for new hardware, we've stepped up our efforts to get FreeBSD working on more of the popular newer laptops. For example, the Foundation recently purchased a couple of the latest-generation Lenovo X1 Carbon laptops and sponsored work to make sure that peripherals are supported out of the box.
**It's FOSS: Why should a desktop user consider choosing FreeBSD?**
There are many reasons people should consider using FreeBSD on their desktop! To highlight a few: it has rock-solid stability and high performance; it supports [ZFS][11] to protect your data; the community is friendly, helpful, and approachable; the documentation is excellent, so it's easy to find answers; over 30,000 open source software packages are easy to install, letting you set up your environment without a lot of extras, with many choices of popular GUIs; and it follows the POLA philosophy ([Principle of Least Astonishment][12]), which means: don't break things that work, and upgrades are generally painless (even across major releases).
**It's FOSS: Are there any plans to make it easier to install FreeBSD as a desktop system? The current focus seems to be on servers.**
The Foundation is supporting efforts to make sure FreeBSD works on the latest hardware and peripherals that appear in desktop systems, and will continue to support making FreeBSD easy to deploy, monitor, and configure to provide a great toolbox for building a desktop on top of it. That allows others to take as much or as little of FreeBSD as they need to build a desktop version with the specific user experience they desire.
Like I mentioned above, there are other FreeBSD distributions that have taken these FreeBSD components and created their own desktop versions.
**It's FOSS: What are your plans/roadmap for FreeBSD in the coming years?**
The FreeBSD Foundation's purpose is to support the FreeBSD Project. While we're an entirely separate entity, we work closely with the Core Team and the community to help move the Project forward. The Foundation identifies key areas we should support in the coming years, based on input from users and what we are seeing in the industry.
In 2019, we embarked on an even broader-spectrum advocacy project to recruit new members throughout the world, while raising awareness about the benefits of learning FreeBSD. We are funding development projects including WiFi improvements, OpenJDK support, ZFS RAID-Z expansion, security, toolchain, and performance improvements, and other features to keep FreeBSD innovative.
The FreeBSD Foundation will continue to host workshops and expand the training opportunities and materials we provide. Finally, the [BSD Certification program][13] recently launched through the Linux Professional Institute, giving it greater availability.
**It's FOSS: How can we bring more people into the BSD fold?**
We need more PR for FreeBSD and more tech journalists like yourself writing about FreeBSD. We also need more training and classes that include FreeBSD in universities, training and workshops at technical conferences, more FreeBSD contributors giving talks at those conferences, more technical journalists and users writing about FreeBSD, and, finally, case studies from companies and organizations successfully using FreeBSD. It all takes more resources! We're working on all of the above.
**It's FOSS: Any message you would like to convey to our readers?**
Readers should consider getting involved with the largest and oldest democratically run open source project!
Whether you want to learn systems programming or how an operating system works, the small size of the operating system makes it a great platform to learn from. The size of the Project makes it easier for anyone to make a notable contribution, and there is a strong mentorship culture to support new contributors.
Being a democratically run project allows your voice to be heard and lets you work in the areas you are interested in. I hope your readers will go to [freebsd.org][1] and try it out for themselves.
--------------------------------------------------------------------------------
via: https://itsfoss.com/freebsd-interview-deb-goodkin/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.freebsd.org/
[2]: https://www.freebsdfoundation.org/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/deb-goodkin-interview.png?ssl=1
[4]: https://www.juniper.net/us/en/products-services/nos/junos/
[5]: https://itsfoss.com/netflix-freebsd-cdn/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/FreeBSDFoundation_Deb_Goodkin_and_friend_promoting_FreeBSD_at_OSCON.jpg?ssl=1
[7]: https://itsfoss.com/best-linux-distributions/
[8]: https://itsfoss.com/furybsd/
[9]: https://itsfoss.com/nomadbsd/
[10]: https://itsfoss.com/midnightbsd-1-0-release/
[11]: https://itsfoss.com/what-is-zfs/
[12]: https://en.wikipedia.org/wiki/Principle_of_least_astonishment
[13]: https://www.lpi.org/our-certifications/bsd-overview

View File

@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Spilling over: How working openly with anxiety affects my team)
[#]: via: (https://opensource.com/open-organization/20/2/working-anxiety-team-performance)
[#]: author: (Sam Knuth https://opensource.com/users/samfw)
Spilling over: How working openly with anxiety affects my team
======
The team might interpret my behavior as evidence of exacting standards or high expectations. But I know my anxiety does impact their performance.
![Spider web on green background][1]
_Editor's note: This article is part of a series on working with mental health conditions. It details the author's personal experiences and is not meant to convey professional medical advice or guidance._
I was speaking with one of my direct reports recently about a discussion we'd had with the broader team earlier in the week. In that discussion I had expressed some frustration that we weren't as far along on a particular project as I thought we needed to be.
"I knew you were disappointed," my staff member said, recalling the meeting, "like you wanted us to be doing something that we weren't doing, or that what we were doing wasn't good enough."
They paused for a moment and then said, "Sam, I get this feeling from you all the time."
That comment struck me pretty profoundly. To my team member, perhaps the scenario above is a reflection of my exacting standards, my high expectations, or my desire to see continual improvement with the team. Those are all reasonable explanations for my behavior.
But there's another ingredient my team may not be aware of: my anxiety.
### It's not just personal
[Previously I discussed][2] how my anxiety, beginning with a worry that I'm not "doing enough," can fuel my proactive tendencies, leading to higher performance at work. What I hadn't considered is that my team can interpret my _personal_ feeling of not doing enough as an indicator that _they_ are not doing enough.
Living with anxiety and other mental health conditions feels personal. It's not something I've talked about at work. It's not something I generally discuss, and it's something I've always felt I was coping with as a private part of my life.
But that discussion with my staff member made me realize that I can't contain my personality so neatly. In truth, my anxiety spills over to my team in ways I hadn't considered. I don't know if anxiety can "rub off" on someone, but when I try to think about it objectively, I imagine someone with anxiety would feel it heightened if they worked for me (perhaps my anxiety would feed theirs), and one without anxiety might feel I have unreasonable or unmeetable expectations.
As a leader—even in an open organization, where [hierarchy is not the most important factor][3] in determining influence—I'm aware that I am in a position of a certain amount of power. People observe my behavior more closely than I realize; how I treat people has a big impact on them, the broader organization, and ultimately the success of the team.
I try hard to treat people with respect, [to assume positive intent][4], to give people the room to do their work in the way they see fit. But, nonetheless, do my team members feel the kind of judgment from me that I continually impose on myself?
### Counting our achievements
What feels "good" to me (what calms my anxiety) is to focus _not_ on what we _have achieved_, but on what we _have yet to do_; not to _celebrate success_, but to _find areas for improvement_. So, when we hit a big milestone, my gut reaction is to say, "Great, now that we've come this far, what else can we do to have a bigger impact?" Stopping and celebrating the team's accomplishments before moving on to the next challenge feels foreign to me. It also makes me anxious that we are pausing in our progress.
This is [what I've called an anxiety-driven performance loop][2]. The sense of accomplishment (and the external acknowledgement) after an achievement fuels a desire to immediately start looking for the next challenge. To some extent, this performance loop keeps me productive—even though it has other consequences, too.
What I want to _avoid_ is transferring my anxiety to my team members. I don't want them to feel that I am continually saying, "What have you done for me lately?" even though that is how I feel the world is looking at me. That's an aspect of what I've called [an anxiety inaction loop][5].
At a fundamental level, I believe work is never done, that there is always another challenge to explore, other ways to have a larger impact. Leaders need to inspire and motivate us to embrace that reality as an exciting opportunity rather than an endless drudge or a source of continual worry.
As a leader who suffers from anxiety, this is more challenging to do in practice than it is to understand intellectually.
While this is an area of continual work for me, I've received some good advice on how to shield my colleagues from my own anxiety-driven loops, like:
* If celebrating success and acknowledging achievement doesn't come naturally for you, build it into the plan from the start. Ensure you have the celebration of accomplishment accounted for from the beginning of a project. This can help reduce the "what have you done for me lately?" impulse that comes from moving quickly to the next challenge without pausing to acknowledge achievements.
* Work with another team member on acknowledgment and celebration efforts. Others might have different ideas on how to do this effectively, and may also enjoy the process. Giving this responsibility to someone else can help ensure it isn't lost.
* Practice compassion, gratitude, and empathy. This may not come naturally and may take some effort. Putting yourself in someone else's shoes, thinking about their perspective, thanking people for what they have done, and understanding their challenges can go a long way in shifting your own perspective.
* If you find yourself judging others, ask yourself, "Is this useful in terms of what I want or need from this situation?" That is, is carrying judgment going to help you accomplish your goal? Most likely, the answer is no. And, in fact, it may have the opposite effect!
The above tips have been helpful for me. But the goal of this series hasn't been to provide solutions, but rather to share my experiences and to use writing to explore my own tendencies and the impact they have on myself and others. I believe that acknowledging and sharing our personal challenges can reduce the [stigma][6] associated with mental illness, create the space needed to start exploring solutions, and create environments that are more positive and invigorating to work in.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/2/working-anxiety-team-performance
作者:[Sam Knuth][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/samfw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-web-internet.png?itok=UQ0zMNJ3 (Spider web on green background)
[2]: https://opensource.com/open-organization/20/1/leading-openly-anxiety
[3]: https://opensource.com/open-organization/resources/open-org-definition
[4]: https://opensource.com/article/17/2/what-happens-when-we-just-assume-positive-intent
[5]: https://opensource.com/open-organization/20/2/working-anxiety-inaction-loop
[6]: https://www.bloomberg.com/news/articles/2019-11-13/mental-health-is-still-a-don-t-ask-don-t-tell-subject-at-work

View File

@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How we decide when to release Fedora)
[#]: via: (https://fedoramagazine.org/how-we-decide-when-to-release-fedora/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
How we decide when to release Fedora
======
![][1]
Open source projects can use a variety of different models for deciding when to put out a release. Some projects release on a set schedule. Others decide on what the next release should contain and release whenever that is ready. Some just wake up one day and decide it's time to release. And other projects go for a rolling release model, avoiding the question entirely.
For Fedora, we go with a [schedule-based approach][2]. Releasing twice a year means we can give our contributors time to implement large changes while still keeping on the leading edge. Targeting releases for the end of April and the end of October gives everyone predictability: contributors, users, upstreams, and downstreams.
But it's not enough to release whatever's ready on the scheduled date. We want to make sure that we're releasing _quality_ software. Over the years, the Fedora community has developed a set of processes to help ensure we can meet both our time and quality targets.
### Changes process
Meeting our goals starts months before the release. Contributors propose changes through our [Changes process][3], which ensures that the community has a chance to provide input and be aware of impacts. For changes with a broad impact (called "system-wide changes"), we require a contingency plan that describes how to back out the change if it's broken or won't be ready in time. In addition, the change process includes providing steps for testing. This helps make sure we can properly verify the results of a change.
Change proposals are due 2-3 months before the beta release date. This gives the community time to evaluate the impact of the change and make any necessary adjustments. For example, a new compiler release might require other package maintainers to fix bugs exposed by the new compiler or to make changes that take advantage of new capabilities.
A few weeks before the beta and final releases, we enter a [code freeze][4]. This ensures a stable target for testing. Bugs identified as blockers and non-blocking bugs that are granted a freeze exception are updated in the repo, but everything else must wait. The freeze lasts until the release.
### Blocker and freeze exception process
In a project as large as Fedora, it's impossible to test every possible combination of packages and configurations. So we have a set of test cases that we run to make sure the key features are covered.
As much as we'd like to ship with zero bugs, if we waited until we reached that state, there'd never be another Fedora release again. Instead, we've defined release criteria that determine which bugs can [block the release][5]. We have [basic release criteria][6] that apply to all release milestones, and then separate, cumulative criteria for [beta][7] and [final][8] releases. With beta releases, we're generally a little more forgiving of rough edges. For a final release, it needs to pass all of beta's criteria, plus some more that help make it a better user experience.
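That cumulative structure is easy to picture as nested sets: the final criteria include everything beta requires, which in turn includes the basics. Here is a toy sketch in Python; the criterion names are invented for illustration, and the real criteria live on the wiki pages linked above.
```python
# Toy model of cumulative release criteria; the strings are hypothetical.
basic = {"system boots", "installer completes", "no data loss"}
beta = basic | {"default applications launch", "upgrades work"}
final = beta | {"live image polished", "artwork correct"}

def unmet(criteria, passed):
    """Anything left over is a potential release blocker."""
    return criteria - passed

passed = basic | {"default applications launch"}
print(unmet(beta, passed))    # {'upgrades work'}
print(unmet(final, passed))   # adds the two final-only criteria as well
```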
The week before a scheduled release, we hold a “[go/no-go meeting][9]”. During this meeting, the QA team, release engineering team, and the Fedora Engineering Steering Committee (FESCo) decide whether or not we will ship the release. As part of the decision process, we conduct a final review of blocker bugs. If any accepted blockers remain, we push the release back to a later date.
Some bugs aren't severe enough to block the release, but we would still like to get them fixed before the release. This is particularly true of bugs that affect the live image experience. In that case, we grant an [exception for updates that fix those bugs][10].
### How you can help
In all my years as a Fedora contributor, I've never heard the QA team say "we don't need any more help." Contributing to the pre-release testing processes can be a great way to make your first Fedora contribution.
The Blocker Review meetings happen most Mondays in #fedora-blocker-review on IRC. All members of the Fedora community are welcome to participate in the discussion and voting. One particularly useful contribution is to look at the [proposed blockers][11] and see if you can reproduce them. Knowing if a bug is widespread or not is important to the blocker decision.
In addition, the QA team conducts test days and test weeks focused on various parts of the distribution: the kernel, GNOME, etc. Test days are announced on [Fedora Magazine][12].
There are plenty of other ways to contribute to the QA process. The Fedora wiki has a [list of tasks and how to contact the QA team][13]. The Fedora 32 Beta release is a few weeks away, so now's a great time to get started!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-we-decide-when-to-release-fedora/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/02/releasing-fedora-1-816x345.png
[2]: https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Development_Schedule
[3]: https://docs.fedoraproject.org/en-US/program_management/changes_policy/
[4]: https://fedoraproject.org/wiki/Milestone_freezes
[5]: https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process
[6]: https://fedoraproject.org/wiki/Basic_Release_Criteria
[7]: https://fedoraproject.org/wiki/Fedora_32_Beta_Release_Criteria
[8]: https://fedoraproject.org/wiki/Fedora_32_Final_Release_Criteria
[9]: https://fedoraproject.org/wiki/Go_No_Go_Meeting
[10]: https://fedoraproject.org/wiki/QA:SOP_freeze_exception_bug_process
[11]: https://qa.fedoraproject.org/blockerbugs/
[12]: https://fedoramagazine.org/tag/test-day/
[13]: https://fedoraproject.org/wiki/QA/Join

View File

@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mirantis: Balancing Open Source with Guardrails)
[#]: via: (https://www.linux.com/articles/mirantis-balancing-open-source-with-guardrails/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Mirantis: Balancing Open Source with Guardrails
======
[![][1]][2]
Mirantis, an open infrastructure company that rose to popularity with its OpenStack offering, is now moving into the Kubernetes space very aggressively. [Last year, the company acquired the Docker Enterprise business from Docker.][3] [This week, it announced that it is hiring the Kubernetes experts from the Finnish company Kontena][4] and established a Mirantis office in Finland, expanding the company's footprint in Europe. Mirantis already has a significant presence in Europe due to large customers such as Bosch and [Volkswagen][5].
The Kontena team primarily focused on two technologies. One was a Kubernetes distro called [Pharos][6], which differentiated itself from other distributions by specializing in addressing life cycle management challenges. They had developed some unique capabilities for deployment and for updating Kubernetes itself.
The second product by Kontena is [Lens][6]. "It's like a Kubernetes dashboard on steroids. In addition to offering the standard dashboard functions, it went multiple steps further by providing a terminal for command-line interfacing to nodes and containers, and additional real-time insights, role-based access controls and a number of other capabilities that are currently absent from the Kubernetes dashboard," said [Dave Van Everen][7], SVP of Marketing at Mirantis.
Everything that Kontena does is open source. These open source projects are already used by hundreds of organizations around the world. “They have a proven track record of contributing valuable technology pieces to the Kubernetes ecosystem, and we saw an opportunity to bring the team on board and capitalized on that opportunity as quickly as we could,” said Van Everen.
Mirantis will integrate many of the technology concepts and benefits from Pharos into its Docker Enterprise offering. With Kontena engineers on board, Mirantis expects to incorporate the best of what Kontena offered into its commercially supported Docker Enterprise and [Kubernetes][8] technology.
With this acquisition, Mirantis has hinted at a very aggressive 2020. The company is weeks away from launching the first Docker Enterprise release since the acquisition. The release brings many new capabilities on top of Docker Enterprise 3.0. The company is working on merging the [Mirantis KaaS][9] capabilities with Docker Enterprise. "We will add new capabilities, including multi-cluster management and continuous automated updates to the Kubernetes that's already within Docker Enterprise," said Van Everen.
**What is Mirantis today?**
Mirantis started out as a pure-play OpenStack company, but as the market dynamics changed, the company adjusted its own positioning and bet on CD platforms like Spinnaker and container orchestration technologies like Kubernetes. So, what are they focusing on today?
Van Everen said that Mirantis is definitely embracing Kubernetes as the open standard used by enterprises for modern applications. Kubernetes itself has a massive ecosystem of technologies that a customer needs to leverage. "When we speak about Kubernetes, we speak about full-stack Kubernetes, which includes that ecosystem consisting of a couple dozen components in a typical cluster deployment. Our job as a trusted partner in helping our customers accelerate their path to modern applications is to streamline and automate all of the infrastructure and DevOps tooling supporting their app development lifecycle," said Van Everen.
In a nutshell, Mirantis is making it easier for customers to use Kubernetes.
Over the years, [Mirantis][10] has gained expertise in IaaS with the work they did on OpenStack. “All of that plays a role in helping companies move faster and become more agile as theyre modernizing their applications. We apply many of those same strengths to the Kubernetes ecosystem,” he said.
Mirantis is also building expertise in continuous delivery platforms like [Argo CD][11] and is offering customers a spectrum of professional services around application modernization, from writing code that is based in microservices architecture, to integrating CI/CD pipelines and modernizing the tooling for CI/CD to better support cloud-native patterns. By supporting Kubernetes technology with app modernization services, Mirantis is helping customers wherever they are in their digital transformation and cloud-native journey.
"All of those things that our services team provides are complementary to the technology. That's a unique value that only Mirantis can provide to the market, where we can couple open source technologies with strong services to ensure that companies really get the most out of that open source technology and fulfill their ultimate goal, which is to accelerate their pace of innovation," Van Everen said.
Container networking is a critical piece of the cloud-native world, and Mirantis already has expertise in the area, thanks to its work on OpenStack. The company recently joined the Linux Foundation's [LF Networking project][12], which is home to [Tungsten Fabric][13] (formerly known as OpenContrail), a technology that Mirantis uses for its [OpenStack][14] offerings.
He explains, “While we use Calico for the container networking, Tungsten Fabric would be an important part of the underlying networking supporting Kubernetes deployments. Staying true to our heritage, we want to be involved in the open community and have both a voice and a stake in the direction the communities are moving in.”
[As for the ongoing debate or controversy around the two competing service mesh technologies, Istio and Linkerd,][15] the company has made its bet on Istio. A few months ago, Mirantis announced a training program for Istio, which was bundled with Mirantis KaaS offerings.
"We include Istio as a service mesh by default in child clusters under Mirantis KaaS management. It'll be used as an ingress with Docker Enterprise initially. Moving forward, we're still looking at how to best deploy it in a service mesh configuration by default and provide a configurable but still functional default deployment for Istio as a service mesh," said Van Everen.
It might seem like Mirantis is latching on to the latest hot technologies like OpenStack, [Spinnaker][16], Docker Enterprise, Kubernetes, and Istio to see what sticks. In reality, there is a method to it: the company is going where its customers are going, with the technologies that customers are using. It's a fine balancing act.
"That's the type of technology challenge that Mirantis embraces. We are open source experts and continue to provide the greatest flexibility and choice in our industry, but we do it in such a way that there are guardrails in place so that companies don't end up having something that's overly complex and unmanageable, or configured incorrectly," he concluded.
Note: Cross posted to [TFIR][17]
--------------------------------------------------------------------------------
via: https://www.linux.com/articles/mirantis-balancing-open-source-with-guardrails/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/wp-content/uploads/2019/09/world-network-1068x713.jpg (world-network)
[2]: https://www.linux.com/wp-content/uploads/2019/09/world-network.jpg
[3]: https://www.youtube.com/watch?v=cBOrVKuomcU&feature=emb_title
[4]: https://containerjournal.com/topics/container-ecosystems/mirantis-acquires-kubernetes-assets-from-kontena/
[5]: https://www.mirantis.com/company/press-center/company-news/volkswagen-group-selects-mirantis-openstack-software-next-generation-cloud/
[6]: https://github.com/kontena
[7]: https://twitter.com/davidvaneveren?lang=en
[8]: https://kubernetes.io/
[9]: https://www.tfir.io/mirantis-launches-kaas-across-bare-metal-public-and-private-clouds/
[10]: https://www.mirantis.com/
[11]: https://argoproj.github.io/argo-cd/
[12]: https://www.linuxfoundation.org/projects/networking/
[13]: https://tungsten.io/
[14]: https://www.openstack.org/
[15]: https://twitter.com/hashtag/istio?lang=en
[16]: https://www.tfir.io/?s=spinnaker
[17]: https://www.tfir.io/mirantis-balancing-open-source-with-guardrails/

View File

@ -1,91 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Melissa Di Donato Is Going To Reinvent SUSE)
[#]: via: (https://www.linux.com/articles/how-melissa-di-donato-is-going-to-reinvent-suse/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
How Melissa Di Donato Is Going To Reinvent SUSE
======
[![][1]][2]
SUSE is one of the oldest open source companies and the first to market Linux for the enterprise. Even though it has undergone several acquisitions and a merger, it remains a strong player in the business. It has maintained its integrity and core values around open source. It continues to rely on its tried-and-tested Linux business and European markets, and generally shies away from making big moves or taking big risks.
Until now.
[SUSE appointed Melissa Di Donato as its first female CEO][3]. She is making some serious changes to the company, from building a diverse and inclusive culture to betting on emerging technologies and taking risks.
Soon after taking the helm last year, Di Donato spent the first few months traveling around the globe to meet SUSE teams and customers and get a better sense of the perception of the market about the company.
Just like [Red Hat CEO Jim Whitehurst,][4] Di Donato didn't come to the company from an open source background. She had spent the last 25 years of her career as a SUSE customer, so she did have an outsider's perspective of the company.
“I am not interested in what SUSE was when I joined. I am more interested in what we want to become,” she said.
**Innovating for customers**
After her 100-day global tour, Di Donato had a much clearer picture of the company. She found that more than 80% of SUSE customers were still traditionalists, i.e., companies such as Walgreens and Daimler who have been around for a long time.
Over the years, these customers brought technologies into their environments to simplify things, but they ended up creating more complexities. It's a tall order to weave through the legacy technical debt they incurred and embrace emerging technologies such as Cloud Foundry, Kubernetes, and so on.
These customers want to modernize their legacy environments and workloads, but they can't do that with the complex environments they have built. They can't iterate faster; they can't respond to new opportunities and new competitors faster.
They want to leverage cloud-native technologies like Kubernetes and containers, but it is overwhelming to evaluate technologies that are emerging at such a rapid pace. Which ones are just shiny new things, and which ones do they really need to accelerate their business goals?
“We have to help our customers simplify their infrastructure and environment so that they can start modernizing it and start leveraging new technologies,” Di Donato said.
While SUSE will continue to focus on core Linux OS, it will also invest in the next generation of Linux. It has been working on technologies like Kubic and MicroOS that change the way Linux is installed, managed, and operated.
She explains, "We are going to reinvent the way operating systems are used. We are going to make sure that we provide solutions that help our customers optimize their environment, automate components to help the applications run in a much more efficient and modern way. That's what SUSE is going to be — an innovator. We're not there quite yet, but that's our focus."
**Evolving the company**
Historically, SUSE has been a fairly conservative company compared to other companies like Red Hat, which has been embracing emerging technologies at a much faster rate than any other open source software vendor.
"We have not been in a place where we've been considered the risk taker. We're the steady, stable provider of the most comprehensive unbreakable solutions in the market," Di Donato admitted. "But we need to take that strong foundation and begin to become a bit of a risk taker, and begin to become very innovative."
She is also gunning for explosive growth. "We're going to double in size by 2023. We have to go from just under half a billion in revenue to a billion."
To achieve that, SUSE will be looking at both organic and inorganic growth, including acquisition of companies, talent and technologies. “We are going to be the default choice for innovation. We are going to be the default choice for highly innovative technologies that really change the landscape,” Di Donato said.
**Refining the brand**
Aside from making significant changes within the company, Di Donato is working on refining the SUSE brand. She hired the seasoned Ivo Totev to lead Product and Marketing and showcase the company's differentiation.
"We're trying to get into the psychology of reinventing the brand," Di Donato said. Her goal is to allocate 30-40% of SUSE's total revenue outside of the core Linux OS towards emerging markets and to develop the technologies that they've already built.
SUSE _is_ home to many innovative technologies that are being used by other open source communities, even its competitors. It just didn't market them the way Red Hat would market its technologies and projects. Even though SUSE started before Red Hat, the latter has much more visibility around the globe.
"It's a matter of getting the word out. We build things, but we don't talk about it or do anything about it. We actually have to put a package around it and start selling it so people can see who we are and what value we bring to them."
In Di Donato's eyes, though, good marketing isn't everything. She argued that customers are going to demand flexibility, and they are going to demand innovation that is not tied to one company's stack. "Red Hat has a very locked-in stack that doesn't allow them to be agnostic at all."
It's quite true that unlike Red Hat, SUSE is known as an "open open-source company", one that believes in working with partners to create an ecosystem around open source, instead of creating a tightly integrated stack that locks everyone out.
She believes that eventually, customers would want the freedom and flexibility of picking and choosing the components they want in their stack.
**Conclusion**
Expect some big moves from SUSE in the near future. Less than a year into her tenure, new CEO Di Donato has developed a very clear vision. "We're going to build this company based on an innovative and agile mindset. We're not going to give up the stability and the quality of our core. What we are going to do is surround the core with really innovative, thought-leading technologies that are going to set us apart from our competition… You are going to feel and experience a very different sense of excitement because we're going to be talking much, much louder than we've ever talked about it before."
--------------------------------------------------------------------------------
via: https://www.linux.com/articles/how-melissa-di-donato-is-going-to-reinvent-suse/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/wp-content/uploads/2020/02/Melissa-Di-Donato-1068x763.jpg (Melissa Di Donato)
[2]: https://www.linux.com/wp-content/uploads/2020/02/Melissa-Di-Donato-scaled.jpg
[3]: https://www.tfir.io/suse-gets-its-first-female-ceo-melissa-di-donato/
[4]: https://www.cio.com/article/3090140/jim-whitehurst-if-its-important-to-the-linux-community-its-important-to-red-hat.html

View File

@ -1,59 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel takes aim at Huawei 5G market presence)
[#]: via: (https://www.networkworld.com/article/3529354/intel-takes-aim-at-huawei-5g-market-presence.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Intel takes aim at Huawei 5G market presence
======
Intel has ambitious plans to grab the 5G base station marketplace, which is still in its infancy.
Christopher Hebert/IDG
Intel on Monday introduced a raft of new processors, and while updates to the Xeon Scalable lineup led the parade, the real news is Intel's efforts to go after the embattled Huawei Technologies in the 5G market.
Intel unveiled its first ever 5G integrated chip platform, the Atom P5900, for use in base stations. Navin Shenoy, executive vice president and general manager of the data platforms group at Intel, said the product is designed for 5G's high bandwidth and low latency and combines compute, 100Gb performance and acceleration into a single SoC.
"It delivers a performance punch in packet security throughput, and improved packet balancing throughput versus using software alone," Shenoy said in the video accompanying the announcement. Intel claims the dynamic load balancer native to the Atom P5900 chip is 3.7 times more efficient at packet balancing throughput than software alone.
Shenoy said Ericsson, Nokia, and ZTE have announced that they will use the Atom P5900 in their base stations. Intel hopes to be the market leader for silicon base station chips by 2021, aiming for 40% of the market and six million 5G base stations by 2024.
That's pretty aggressive, but the 5G fields are very green and there is plenty of room for growth. Despite all the mobile providers' commercials boasting of 5G availability, the fact is, true 5G phones are only just coming to market now, and the number of base stations is minimal. 5G has a long ramp ahead.
The Atom P5900 puts Intel in competition with China's Huawei, which U.S. federal authorities have repeatedly labeled a security risk. Huawei has been barred from several nations, including the U.S., England, Japan, and Australia. However, Huawei also said it has secured more than 90 commercial 5G contracts globally.
Until now, Ericsson and Nokia have asked developers such as Broadcom to help develop base station chips, while Samsung designs and manufactures its own 5G base station chips.
### Second Generation Xeon Scalable processors
Intel's latest launch also includes 18 2nd Gen Xeon Scalable processors; one is branded Bronze, four are Silver, and 13 are Gold. The upgraded lineup targets the entry-level to medium-range market, leaving the Platinum line to duke it out with the high-end AMD Epyc processors.
The new processors range from 8 to 28 cores and include a variety of clock speeds that go down as the core count goes up. They have TDP ratings that range from 85 watts for the eight-core, 1.9GHz Bronze 3206R to 205 watts for the 28-core, 2.7GHz Gold 6258R. (Intel and other chip makers measure a processor's power draw with a specification called thermal design power, or TDP.)
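A quick bit of arithmetic on the two SKUs quoted above shows the TDP budget per core actually falls as the core count rises:
```python
# Watts per core for the two endpoints of the refreshed lineup,
# using only the figures quoted in the paragraph above.
skus = {
    "Bronze 3206R (8C, 1.9GHz)": (8, 85),
    "Gold 6258R (28C, 2.7GHz)": (28, 205),
}
for name, (cores, tdp_w) in skus.items():
    print(f"{name}: {tdp_w / cores:.1f} W per core")
# Bronze 3206R (8C, 1.9GHz): 10.6 W per core
# Gold 6258R (28C, 2.7GHz): 7.3 W per core
```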
While most of the chips are meant for standard data center use, some processors, including the Xeon Gold 6200U, Silver 4200R, Silver 4210T and Bronze 3200R, are specifically meant for single-socket, entry-level servers, as well as edge, networking and IoT uses.
Intel also introduced the Intel Ethernet 700 Network Adapter, which comes with hardware-enhanced precision timing designed specifically for 5G and other scenarios with very low latency and timing requirements.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3529354/intel-takes-aim-at-huawei-5g-market-presence.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -1,70 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (No More WhatsApp! The EU Commission Switches To Signal For Internal Communication)
[#]: via: (https://itsfoss.com/eu-commission-switches-to-signal/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
No More WhatsApp! The EU Commission Switches To Signal For Internal Communication
======
_**In a move to improve its cybersecurity, the EU has recommended that its staff use the open source secure messaging app Signal instead of popular apps like WhatsApp.**_
[Signal is an open source secure messaging application][1] with end-to-end encryption. It is praised by the likes of [Edward Snowden][2] and other privacy activists, journalists, and researchers. We've recently covered it in our [open source app of the week][3] series.
[Signal][4] is in the news for good reasons. The European Commission has instructed its staff to use Signal for public instant messaging.
This is part of the EU's new cybersecurity strategy. There have been cases of data leaks and hacking against EU diplomats, and thus a policy is being put in place to encourage better security practices.
### Governments recommending open source technology is a good sign
![][5]
No matter what the reason is, government bodies recommending open-source services for better security is definitely a good thing for the open-source community in general.
[Politico][6] originally reported this by mentioning that the EU instructed its staff to use Signal as the recommended public instant messaging app:
> The instruction appeared on internal messaging boards in early February, notifying employees that “Signal has been selected as the recommended application for public instant messaging.”
The report also mentioned the potential advantage of Signal (which is why the EU is considering using it):
> "It's like Facebook's WhatsApp and Apple's iMessage but it's based on an encryption protocol that's very innovative," said Bart Preneel, cryptography expert at the University of Leuven. "Because it's open-source, you can check what's happening under the hood," he added.
Even though they just want to secure their communication or want to prevent high-profile leaks, switching to an open-source solution instead of [WhatsApp][7] sounds good to me.
### Signal gets a deserving promotion
Even though Signal is a centralized solution that requires a phone number as of now, it is still a no-nonsense open-source messaging app that you can trust.
Privacy enthusiasts already know of plenty of services (or alternatives) to keep up with the latest security and privacy threats. However, with the EU Commission recommending it to its staff, Signal will get an indirect promotion among common mobile and desktop users.
### Wrapping Up
It is still ironic that some government bodies hate encrypted solutions while opting to use them for their own requirements.
Nevertheless, it is good progress that open-source services and tech in general are being recommended as secure alternatives.
What do you think about the EU's decision to switch to the Signal app for its internal communication? Feel free to let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/eu-commission-switches-to-signal/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/signal-messaging-app/
[2]: https://en.wikipedia.org/wiki/Edward_Snowden
[3]: https://itsfoss.com/tag/app-of-the-week/
[4]: https://www.signal.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/Signal-eu.jpg?ssl=1
[6]: https://www.politico.eu/pro/eu-commission-to-staff-switch-to-signal-messaging-app/
[7]: https://www.whatsapp.com/

View File

@ -1,120 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Solus Linux Creator Ikey Doherty Enters the Game Dev Business With a New Open Source Game Engine)
[#]: via: (https://itsfoss.com/ikey-doherty-serpent-interview/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Solus Linux Creator Ikey Doherty Enters the Game Dev Business With a New Open Source Game Engine
======
[Ikey Doherty][1], the creator and former lead dev of [Solus][2], is back with a new project. His new company, [Lispy Snake, Ltd][3], uses open source technology to create games, with a focus on Linux support.
I asked Ikey some questions about his new project. Here are his answers.
![][4]
_**It's FOSS: What made you decide to get into game development?**_
**Ikey**: Honestly, I would have to say a respect for older games. The creativity that came from so much limitation is frankly amazing. Think of how limited the NES or C64 were (or indeed my [Amstrad CPC][5]), yet how much joy people experienced from those platforms. It's a buzz I can't avoid. Even though we're a long way now from that world, I still look to model that technical excellence and creativity as best I can. I'm a sucker for good stories.
_**It's FOSS: There are already several open-source game engines. Why did you decide to make your own? What is Serpent's killer feature?**_
**Ikey**: There are a good number of open and closed source ones, each with a great set of features. However, I'm a pretty old-school developer and there is nothing I hate more than an IDE or a drag-and-drop codeless environment. I simply wanted to create indie games with the least fuss possible, using a framework where I didn't have to compromise. Once you get to "must work nicely on Linux and be open source" you're kinda short on choice.
I collected a set of projects that I'd use as the foundation for Lispy Snake's first games, but needed something of a framework to tie them all together, as a reusable codebase across all games and updates.
I wouldn't say killer features are present yet, just a set of sensible decisions. Serpent is written in D, so it's highly performant with a lower barrier of entry than, say, C or C++. It's allowing me to flesh out a framework that suits my development ideals and pays attention to industry requirements, such as a performant multithreading Entity Component System or the sprite batching system.
When you rope together all the features and decisions, you get a portable codebase that, thanks to its choice of libraries like SDL and bgfx, will eventually run on all major platforms with minimal effort on our part. That basically means we're getting OpenGL, DirectX, Vulkan and Metal "for free".
Being able to target the latest APIs and create indie games easily, with industry-standard features emerging constantly, from a framework that doesn't impose itself on your workflow… that's a pretty good combination.
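For readers unfamiliar with the Entity Component System pattern Ikey mentions: entities are bare IDs, components are plain data attached to them, and "systems" iterate over whichever entities carry the components they need. Here is a deliberately tiny, single-threaded sketch in Python; Serpent itself is written in D and multithreaded, and none of these names come from its API.
```python
# Minimal ECS sketch (illustrative only; not Serpent's API).
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

positions = {}   # entity id -> Position
velocities = {}  # entity id -> Velocity

def movement_system(dt):
    # A "system" touches only entities that have the components it needs.
    for eid, vel in velocities.items():
        pos = positions.get(eid)
        if pos is not None:
            pos.x += vel.dx * dt
            pos.y += vel.dy * dt

positions[1] = Position(0.0, 0.0)
velocities[1] = Velocity(10.0, 5.0)
movement_system(dt=0.1)
print(positions[1])   # Position(x=1.0, y=0.5)
```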
![][6]
_**It's FOSS: Why did you name your company LispySnake? Did you have a pet snake with a speech impediment when you were a kid?**_
**Ikey**: Honestly? [Naughty Dog][7] was taken. Gotta love some Bandicoot. Plus, originally we were taking on some Python contracting work and I found the name amusing. It's pretty much a nonsensical name, like many of my previous projects (like Dave. Or Dave2.)
_**It's FOSS: After being an operating system developer for many years, how does it feel to be working on something smaller? Would you say that your time as an OS developer gives you an edge as a game dev?**_
**Ikey**: OS dev constantly needs a very high-level view, with the ability to context-switch from macro to micro and back again. There are many, many moving parts in a large ecosystem.
Serpent is much more task-orientated, though similarities in the workflow exist in terms of defining macro systems and interleaving micro features to build a cohesive whole. My background in OS dev is obviously a huge help here.
Where it especially shines is dealing with the guts. I think a lot of indie devs (forgive me for being sweeping) are generally happy to just build from an existing kit and either embrace it or work around the issues. There are some true gems out there like Factorio that go above and beyond, and I have to tip my hat to them.
In terms of building a new kit we get to think, properly, about cache coherency, parallel performance, memory fragmentation, context switching and such.
Consumers of Serpent (when it is released in a more stable form) will know that the framework has been designed to leverage Linux features, not just to spit out builds for it.
![][8]
_**It's FOSS: Recently you ported your [Serpent][9] game engine from C to the [D language][10]. Why did you make this move? What features does D have over C?**_
**Ikey**: Yeah, honestly, that was an interesting move. We were originally working on a project called lispysnake2d, which was to be a trivial wrapper around SDL to give us a micro-game library. This simply used SDL_Renderer APIs to blit 2D sprites and initially seemed sufficient. Unfortunately, as development progressed, it was clear we needed a 3D pipeline for 2D, so we could utilize shaders and special effects. At that point SDL_Renderer is no good to you anymore and you need to go with Vulkan or OpenGL. We began abstracting the pipelines and saw the madness ensue.
After taking a step back, I analyzed all the shortcomings in the approach and tired of the portability issues that would definitely arise. I'm not talking in terms of libraries; I'm talking about dealing with various filepaths, encodings, Win32 APIs, DirectX vs OpenGL vs Vulkan… etc. Then whack in boilerplate time, C string shortcomings, and the amount of reinventing required to avoid linking to bloated "cross-platform" standard-library-style libraries. It was a bad picture.
Having done a lot of [Go][11] development, I started researching alternatives to C that were concurrency-aware, string-sane, and packed with a powerful cross-platform standard library. This is the part where everyone will automatically tell you to use Rust.
Unfortunately, I'm too stupid to use [Rust][12] because the syntax literally offends my eyes. I don't get it, and I never will. Rust is a fantastic language and, as academic endeavours go, highly successful. Unfortunately, I'm too practically minded and seek comfort in C-style languages, having lived in that world too long. So, D was the best candidate to tick all the boxes, whilst having C & C++ interoperability.
It took us a while to restore feature parity, but now we have a concurrency-friendly framework which is tested with both OpenGL and Vulkan, supports sprite batching, and has nice APIs. Plus, much of the reinvention is gone as we're leveraging all the features of SDL, bgfx, and the DLang standard library. Win-win.
![The first game from LispySnake][13]
_**It's FOSS: How are you planning to distribute your games?**_
**Ikey**: Demo-wise, we'll initially only focus on Linux, and it's looking like we'll use Flatpak for that. As time goes on, when we've introduced support and testing for macOS + Windows, we'll likely look to the Steam Store. Despite the closed source nature, Valve have been far more friendly and supportive of Linux over the years, whilst the likes of Epic Games have a long history of being highly anti-Linux. So that's a no-go.
_**It's FOSS: How can people support and contribute to the development of the Serpent game engine?**_
**Ikey**: We have a few different methods, for what it's worth. The easiest is to [buy a Lifetime License][14], which is $20. This grants you lifetime access to all of our 2D games and helps fund development of our game titles and Serpent.
Alternatively, you can [sponsor me directly on GitHub][15] to work on Serpent and upstream where needed. Bit of FOSS love.
[Support with Lifetime License][16]
[Sponsor the development on GitHub][15]
* * *
I would like to thank Ikey for taking the time to answer my questions about his latest project.
Have any of you created a game with open source tools? If so, what tools and how was the experience? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][17].
--------------------------------------------------------------------------------
via: https://itsfoss.com/ikey-doherty-serpent-interview/
Author: [John Paul][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/ikey_doherty
[2]: https://getsol.us/home/
[3]: https://lispysnake.com/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/ikey_doherty_serpent_interview.png?ssl=1
[5]: https://en.wikipedia.org/wiki/Amstrad_CPC
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/lipsy_snake_screenshot.png?ssl=1
[7]: https://www.naughtydog.com/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/lipsy_snake.png?ssl=1
[9]: https://github.com/lispysnake/serpent
[10]: https://dlang.org/
[11]: https://golang.org/
[12]: https://www.rust-lang.org/
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/last_peacekeeper_game.png?ssl=1
[14]: https://lispysnake.com/the-game-raiser/
[15]: https://github.com/sponsors/ikeycode
[16]: https://lispysnake.com/the-game-raiser
[17]: https://reddit.com/r/linuxusersgroup

View File

@ -1,113 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (8 reasons to consider hyperconverged infrastructure for your data center)
[#]: via: (https://www.networkworld.com/article/3530072/eight-reasons-to-consider-hyperconverged-infrastructure-for-your-data-center.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
8 reasons to consider hyperconverged infrastructure for your data center
======
Demand for on-premises data center equipment is shrinking as organizations move workloads to the cloud. But on-prem is far from dead, and one segment that's thriving is hyperconverged infrastructure ([HCI][1]).
HCI is a form of scale-out, software-integrated infrastructure that applies a modular approach to compute, network and storage capacity. Rather than silos with specialized hardware, HCI leverages distributed, horizontal blocks of commodity hardware and delivers a single-pane dashboard for reporting and management. Form factors vary: Enterprises can choose to deploy hardware-agnostic [hyperconvergence software][2] from vendors such as Nutanix and VMware, or an integrated HCI appliance from vendors such as HP Enterprise, Dell, Cisco, and Lenovo.
**Learn more about enterprise infrastructure trends**
* [Making the right hyperconvergence choice: HCI hardware or software?][3]
* [10 of the world's fastest supercomputers][4]
* [NVMe over Fabrics creates data-center storage disruption][5]
* [For enterprise storage, persistent memory is here to stay][6]
The market is growing fast. By 2023, Gartner projects 70% of enterprises will be running some form of hyperconverged infrastructure, up from less than 30% in 2019. And as HCI grows in popularity, cloud providers such as Amazon, Google and Microsoft are providing connections to on-prem HCI products for hybrid deployment and management.
So why is it so popular? Here are some of the top reasons.
### 1) Simplified design
A traditional data center design comprises separate storage silos with individual tiers of servers and specialized networking spanning the compute and storage silos. This worked in the pre-cloud era, but it's too rigid for the cloud era. “It's untenable for IT teams to take weeks or months to provision new infrastructure so the dev team can produce new apps and get to market quickly,” says Greg Smith, vice president of product marketing at Nutanix.
“HCI radically simplifies data center architectures and operations, reducing the time and expense of managing data and delivering apps,” he says.
### 2) Cloud integration
HCI software, such as from Nutanix or VMware, is deployed the same way in both a customer's data center and cloud instances; it runs on bare metal instances in the cloud exactly the same as it does in a data center. HCI “is the best foundation for companies that want to build a hybrid cloud. They can deploy apps in their data center and meld it with a public cloud,” Smith says.
“Because it's the same on both ends, I can have one team manage an end-to-end hybrid cloud with confidence that whatever apps run in my private cloud will also run in that public cloud environment,” he adds.
### 3) Ability to start small, grow large
“HCI allows you to consolidate compute, network, and storage into one box, and grow this solution quickly and easily without a lot of downtime,” says Tom Lockhart, IT systems manager with Hastings Prince Edward Public Health in Belleville, Ontario, Canada.
In a legacy approach, multiple pieces of hardware (a server, Fibre Channel switch, host-based adapters, and a hypervisor) have to be installed and configured separately. With hyperconvergence, everything is software-defined. HCI uses the storage in the server, and the software almost entirely auto-configures and detects the hardware, setting up the connections between compute, storage, and networking.
“Once we get in on a workload, [customers] typically have a pretty good experience. A few months later, they try another workload, then another, and they start to extend it out of their data center to remote sites,” says Chad Dunn, vice president of product management for HCI at Dell.
“They can start small and grow incrementally larger but also have a consistent operating model experience, whether they have 1,000 nodes or three nodes per site across 1,000 sites, whether they have 40 terabytes of data or 40 petabytes. They have consistent software updates where they don't have to retrain their people because it's the same toolset,” Dunn added.
### 4) Reduced footprint
By starting small, customers find they can reduce their hardware stack to just what they need, rather than overprovision excessive capacity. Moving away from the siloed approach also allows users to eliminate certain hardware.
Josh Goodall, automation engineer with steel fabricator USS-POSCO Industries, says his firm deployed HCI primarily for its ability to do stretched clusters, where the hardware cluster is in two physical locations but linked together. This is primarily for use as a backup, so if one site goes down, the other can take over the workload. In the process, though, USS-POSCO got rid of a lot of expensive hardware and software. “We eliminated several CPU [software] licenses, we eliminated the SAN from the other site, we didn't need SRM [site recovery management] software, and we didn't need Commvault licensing. We saved between $25,000 and $30,000 on annual license renewals,” Goodall says.
### 5) No special skills needed
To run a traditional three-tiered environment, companies need specialists in compute, storage, and networking. With HCI, a company can manage its environment with general technology consultants and staff rather than the more expensive specialists.
“HCI has empowered the storage generalist,” Smith says. “You don't have to hire a storage expert, a network expert. Everyone has to have infrastructure, but they made the actual maintenance of infrastructure a lot easier than under a typical scenario, where a deep level of expertise is needed to manage under those three skill sets.”
Lockhart of Hastings Prince Edward Public Health says adding new compute/storage/networking is also much faster when compared to traditional infrastructure. “An upgrade to our server cluster was 20 minutes with no down time, versus hours of downtime with an interruption in service using the traditional method,” he says.
“Instead of concentrating on infrastructure, you can expand the amount of time and resources you spend on workloads, which adds value to your business. When you dont have to worry about infrastructure, you can spend more time on things that add value to your clients,” Lockhart adds.
### 6) Faster disaster recovery
Key elements of hyperconvergence products are their backup, recovery, data protection, and data deduplication capabilities, plus analytics to examine it all. Disaster recovery components are managed from a single dashboard, and HCI monitors not only the on-premises storage but also cloud storage resources. With deduplication, compression rates run as high as 55:1, and backups can be done in minutes.
USS-POSCO Industries is an HP Enterprise shop and uses HPE's SimpliVity HCI software, which includes dedupe, backup, and recovery. Goodall says he gets about 12-15:1 compression on mixed workloads, and that has eliminated the need for third-party backup software.
More importantly, recovery timeframes have dropped. “The best recent example is a Windows update that messed up a manufacturing line, and the error wasn't realized for a few weeks. In about 30 minutes, I rolled through four weeks of backups, updated the system, rebooted and tested a 350GB system. Restoring just one backup would have been a multi-hour process,” Goodall says.
### 7) Hyperconvergence analytics
HCI products come with a considerable amount of analytics software to monitor workloads and find resource constraints. The monitoring software is consolidated into a single dashboard view of system performance, including negatively impacted performance.
Hastings recently had a problem with a Windows 7 migration, but the HCI model made it easy to get performance info. “It showed that workloads, depending on time of day, were running out of memory, and there was excessive CPU queuing and paging,” Lockhart says. “We had the entire [issue] written up in an hour. It was easy to determine where the problems lay. It can take a lot longer without that single-pane-of-glass view.”
### 8) Less time managing network, storage resources
Goodall says he used to spend up to 50% of his time dealing with storage issues and backup matrices. Now he spends maybe 20% of his time dealing with it and most of his time tackling and addressing legacy systems. And his apps perform better under HCI. “We've had no issues with our SQL databases; if anything, we've seen huge performance gains due to the move to full SSDs [instead of hard disks] and the data dedupe, reducing reads and writes in the environment.”
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3530072/eight-reasons-to-consider-hyperconverged-infrastructure-for-your-data-center.html
Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence.html?nsdr=true
[2]: https://www.networkworld.com/article/3318683/making-the-right-hyperconvergence-choice-hci-hardware-or-software.html
[3]: https://www.networkworld.com/article/3318683/making-the-right-hyperconvergence-choice-hci-hardware-or-software
[4]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html#slide1
[5]: https://www.networkworld.com/article/3394296/nvme-over-fabrics-creates-data-center-storage-disruption.html
[6]: https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@ -1,173 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to assess your organization's technological maturity)
[#]: via: (https://opensource.com/open-organization/20/3/communication-technology-worksheet)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland)
How to assess your organization's technological maturity
======
Implementing new communications technologies can make your organization
more open. Use this worksheet to determine whether your organization—and
its people—are prepared.
![Someone wearing a hardhat and carrying code ][1]
New communication technologies can promote and improve [open organizational principles and practices][2]—both within a company and between customers and strategic partners, leading to greater sales and business opportunities.
Previously, I've discussed how companies adopting new communication technologies tend to [fall into four basic categories][3] of investment and utilization. In this article, I'll demonstrate how someone might assess an organization's level of preparedness for technological innovations and the cultural changes it requires.
### Becoming a superstar
For the purpose of this exercise, imagine you're a salesperson working for a company that provides communication or information technology solutions to companies that need advanced information systems and would benefit by becoming "[communication superstars][3]." You'll likely want to impress upon your customers the benefits of becoming this type of user—benefits such as:
  * Enhanced customer interaction: When _our_ customers' salespeople visit _their_ customers, they'll need to make the best impression they can to build some level of trust. Therefore—before offering any product—a salesperson must get a customer talking about his situation in order to discover his particular needs, concerns, and opportunities for growth. When the customer asks the salesperson questions about what he can do to address these issues, imagine our customer's salespeople being able to answer him in seconds with detailed information, instead of making the customer wait for hours, days, or even weeks for the answers. Such technologies can increase one's capacity to make proposals, lead to faster and wiser purchasing decisions, and maximize salesperson-customer interactions—all extremely important benefits.
* Better operations: In manufacturing especially, production bottlenecks can be a drain on in-process inventory costs, and alleviating those bottlenecks is critical. Knowing _exactly_ the situation (in-process inventory levels and processing speed, for example) of _every_ stage of a production line in real time can greatly [improve productivity][4].
* Development of new business strategies: With new communication technology expertise, a company could open up new markets and opportunities that would have historically been out of its reach.
### Let's do some research
Armed with knowledge of those benefits, again imagine you're a salesperson at an enterprise communication or information technology company. You meet a potential customer at an exhibition or business summit, and she describes the following situation to you:
> "I'm the Operations Manager of a small, local transportation company that makes deliveries within and between several of the surrounding cities. We have a fleet of 30 trucks, all of various sizes and makes. I know the company's information system must to be improved; much of our communication is done through email attachments, texting, and mobile phone calls. We have no central information operating system."
A large, public, national trucking company has set up in her area. She's studied this competitor and read several of its news releases and annual reports. She's learned the company has a centralized communications system, and that all its trucks have tracking technologies that monitor the location of every truck in operation. Trucks also feature sensors that monitor many vehicle operations, including average and specific engine RPM per route by vehicle and driver, and miles travelled in particular conditions (to determine fuel economy and maintenance schedules). An electronic parts delivery system connects this company's service operations with a network of dealers to reduce the time service technicians must wait for parts.
This is what the small local trucking company must compete against. So its operations manager asks you, the IT company salesperson, what you can do to help.
You decide that your first step is to conduct a survey to learn more about both the company's current communication technology system and the personnel's attitude toward this system in order to see what _could_ and _should_ be done to improve the situation. While there, you want to learn this trucking company's IT status _before_ making any recommendations.
I've created a worksheet you might use to facilitate this conversation.
### Taking the temperature
The first part of the worksheet can help you develop a baseline assessment of an organization's readiness for technological change.
#### Part 1: Baseline maturity relative to competitors
Generally, if the customer scores between 10 and 42, then that customer needs more assistance adopting new communication technology, but this varies by industry. If the score is between 43 and 70, then, when compared to competitors, the customer is likely already mature in its use of communication technologies.
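If you want to tabulate responses programmatically, here is a minimal Python sketch of the banding logic in Part 1. The ten-question structure and the 1-to-7 rating scale are assumptions inferred from the 10-to-70 score range, and the function name is hypothetical; Parts 2 through 13 follow the same pattern with different thresholds.

```python
# Minimal sketch of the Part 1 banding logic. Assumes ten questions,
# each rated 1-7, so totals fall in the 10-70 range described above.
def baseline_maturity_band(ratings: list[int]) -> str:
    if len(ratings) != 10 or not all(1 <= r <= 7 for r in ratings):
        raise ValueError("expected ten ratings, each between 1 and 7")
    score = sum(ratings)
    if score <= 42:
        return f"score {score}: needs more assistance adopting new communication technology"
    return f"score {score}: likely already mature relative to competitors"

# Example: a middling organization scoring 40 lands in the lower band.
print(baseline_maturity_band([4, 3, 5, 4, 4, 3, 5, 4, 4, 4]))
```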
#### Part 2: Leaders' relationship to communication technologies
Next, you need to assess the company leadership's relationship to technologies, associated processes, and cultural changes. So let's make those calculations in Part 2 of the worksheet.
Here again, depending on the competitive environment, if the score is between 10 and 42, then the company is generally _not_ completely utilizing the communication technology it has. If the score is between 43 and 70, then generally the company puts the communication technology it has to good, creative use.
Organizational leaders must lead the conversation about using more advanced communication systems in the organization. Therefore management must establish training programs to teach everyone in the organization the skills required to _use_ that technology (a step often forgotten or poorly implemented).
#### Part 3: Awareness of challenges and opportunities
We now need to help the organization think more strategically about its use of communication technologies to implement open processes and open culture. We'll do this with Part 3 of the worksheet.
If an organization scores higher than 15, the company understands the communication technology landscape fairly well. If the score is between 9 and 15, the organization needs to isolate its weakest areas and remedy them. A score of less than 9 indicates that the organization should consider conducting new awareness exercises and/or communication technology discovery programs.
#### Part 4: Technological mastery relative to competitors
Next, we need to better understand the trucking company's current strategic assets and its level of technological mastery relative to competitors. We'll do that with Part 4 of the worksheet.
An organization that scores above 16 in this section likely knows where it stands and what its innovation trajectory is in comparison to competitors. A score of 7 to 16 means the organization needs to build alignment around a viable renewal path. A score of less than 7 might mean the organization should conduct a communication technology maturity assessment and update its best practices.
#### Part 5: Ability to articulate technological vision
Now let's explore how well the organization's senior leaders can articulate a vision for the role communication technology will play in the company's future. That's Part 5 of the worksheet.
If the organization scores over 24, then its members likely believe its executives are aligned on a technological vision (and are considering competitors). A score of 14 to 24 should prompt us to isolate the root causes of the concerns and work with the team to remedy them. Anything less than 14 should spur a structured senior executive alignment initiative.
Questions like these can clarify the extent to which employees must be involved in communication technology investment decision-making and utilization. Front-line members typically know what's necessary, what's available, and what the organization should introduce.
### From vision to action
I've seen first-hand that in situations like these, purchasing technologies is only half the problem. Getting people to buy into the system and use it to full capacity are far bigger challenges.
In this section, we'll assess the organization's ability to translate technological vision into action.
#### Part 6: Ability to translate vision to action
First, let's see how the company is currently converting its vision into an action plan relative to competitors. We'll return to the worksheet, specifically Part 6.
A company scoring more than 17 points likely has a robust plan and evaluation system in place, and is focused on engaging people in executing technological adoption efforts relative to competitors. Organizations scoring 7 to 17 should review the action plan and milestone checklist weekly for content and alignment. Those scoring less than 7 should conduct a full review of their milestone checklist and action plan processes.
#### Part 7: Supervision strategies
Few plans succeed without proper supervision, so you'll want to assess the organization's plans to oversee change management efforts. We'll use the trusty worksheet—this time, Part 7.
Did the company score something greater than 15? Then its supervision model is in good shape. Maybe 8 to 15? It should check its governance principles and/or program leadership. Less than 8? Time to rework (or design for the first time) its supervision principles.
#### Part 8: Funding strategy for implementation
Of course, organizational initiatives like these require funding. So you'll want to assess the organization's financial commitment to technological change. Once again, let's use our worksheet (Part 8 this time).
Scoring more than 16 points means the company's funding for new communication technologies is strong. Scoring 8 to 16 means the company should work to ensure that its portfolio, funding, and business strategy are better aligned. Anything less than 8 means the company needs to rework its investment and funding strategy for new technologies.
#### Part 9: Clarity and promotion of vision
Organizational leaders should constantly be clarifying and advocating plans to adopt new technologies. How are they doing? Let's review Part 9 of our worksheet.
If the company scores over 17, then it's likely doing a good job of marketing its ambitions. If it scores somewhere between 7 and 17, it should isolate dimensions of its messaging that need refinement and work with the team to remedy them. If it scores less than 7, it should consider developing a specific program to convey the company's ambition more broadly.
#### Part 10: Ability to build and sustain engagement
Changes to technological systems and processes don't happen automatically. People need to invest in them, and leaders need to sustain their engagement in the organizational changes. Not everyone will buy in (as I've [written previously][5]). We can assess how well the organization is doing this with Part 10 of the worksheet.
A score over 23 indicates that the company is doing a good job building momentum while introducing communication technologies. A score of 12 to 23 means the organization might need to isolate some part of the process that's not proceeding satisfactorily and remedy that component. Less than 12? The company needs to design and conduct a full engagement program.
### Organizational considerations
This final section assesses specific _organizational_ capacities—that is, the organization's culture, its structure, and its processes. Becoming more open by adopting new communication technologies is only possible if the organization itself is flexible and willing to change.
#### Part 11: Organizational culture
Is the organizational environment amenable to the kinds of changes necessary for effectively adopting new communication activities? We'll assess that in Part 11 of our worksheet.
An organization scoring more than 16 points is already shifting its organizational behaviors and culture ahead of competitors. One scoring between 7 and 16 points should investigate root causes of concerns about cultural changes and work with the team to remedy problems. An organization scoring less than 7 should begin working to shift its culture around communication practices and expectations.
#### Part 12: Organizational structure
Does the organization's current structure allow it to sustain communication technology innovations? Use Part 12 of the worksheet to gather an initial impression.
Scoring over 16 means the company possesses the basic structural capabilities necessary for sustained, steady, technical changes. Scoring between 8 and 16 means the company has only begun implementing projects aimed at developing necessary structural capabilities, but more effort is needed. And a score of less than 8 indicates that the company needs to consider specific programs for improving basic structural capabilities.
#### Part 13: Reward and incentive structures
Are the organization's reward and incentive structures aligned with the organization's goals for introducing and adopting new communication technologies? Let's look at the worksheet one last time.
A score over 14 indicates that the company's current reward structures are aligned with its communication technology objectives. A score between 6 and 14 tells us that the organization should build stronger consensus around a viable reward strategy aligned to communication technology renewal. And a score of less than 6 should prompt leadership to implement specific reward structures that accomplish its communication technology adoption goals.
### Post-survey debrief
After collecting those data, you're now in a position to ask how your information technology company can help your potential customer in four areas:
1. Data gathering and company strategy analytics
2. Social media, internet utilization, and interaction internally
3. Telecommunication utilization within company (to avoid excess and unnecessary traveling for meetings, etc.)
4. Automation technology utilization within the company
You're also able to inquire within your company (among solution architects, for example) about who you should potentially partner with, if need be, in order to achieve this transportation company's goals in these four areas.
In these kinds of strategic partnerships, open organization principles—especially transparency, inclusivity, collaboration, and community—come alive. One person cannot do this kind of work alone.
I've seen first-hand that in situations like these, purchasing technologies is only _half the problem._ Getting people to _buy into the system_ and _use it to full capacity_ are far bigger challenges. These challenges are cultural, not technological. Being a "communication superstar" means being great in both those areas—both the communication technology itself, as well as the culture and process expertise necessary for actual utilization.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/3/communication-technology-worksheet
Author: [Ron McFarland][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
[2]: https://opensource.com/open-organization/resources/open-org-definition
[3]: https://opensource.com/open-organization/20/1/communication-technology-superstars
[4]: https://www.slideshare.net/RonMcFarland1/improving-processes-65115172?qid=b0a0fde3-62c6-4538-88c8-1bfd70485cee&v=&b=&from_search=5
[5]: https://opensource.com/open-organization/17/1/escape-the-cave

View File

@ -1,78 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Key Takeaways from Ciscos Annual Internet Report)
[#]: via: (https://www.networkworld.com/article/3529989/key-takeaways-from-cisco-s-annual-internet-report.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
Key Takeaways from Ciscos Annual Internet Report
======
Businesses need to be ready for the massive wave of devices and bandwidth that is coming in the next five years
By 2023, two-thirds of the world's population will have Internet access—that's 5.3 billion total Internet users, compared to 3.9 billion in 2018. The number of devices and connections will also skyrocket. There will be 3.6 networked devices per capita by 2023, whereas in 2018, there were 2.4 networked devices per capita.
These findings come from Cisco's _[Annual Internet Report (2018–2023)][1]_ (AIR), previously known as the Visual Networking Index (VNI), which assesses the digital transformation across different business segments and their adoption of networking technologies, including fixed broadband, Wi-Fi, and mobile (3G, 4G, 5G).
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
The report described an increased demand for new or enhanced applications that boost workforce productivity or improve customer experiences. In today's mobile world, users expect their devices (and networks) to deliver on all fronts: quality, ease of use, and seamless connectivity.
[Cisco][3]
The report can be useful as companies plan out their network strategies. One of the aspects of the VNI that Cisco carried over to AIR is an [online tool][4] that lets people slice and dice the information by country, device, or other factors. They also included an [“Internet readiness” tool][5] that explores how prepared different regions are for the coming wave of devices and the need for bandwidth.
**More network automation is needed**
To meet growing demand for enhanced apps, enterprises need automated network monitoring and optimization, and that can be achieved with software-defined wide area networking (SD-WAN). Software-driven networks create more flexible infrastructures that can adapt to changing traffic requirements, which becomes necessary as more enterprises move to hybrid clouds, the report says.
Policy-based automation and Intent-Based Networking (IBN) are just as important when it comes to building agile, portable, and scalable networks. IBN, as the name implies, captures business intent through analytics and machine learning. One trend Cisco observed in its report is how business WAN traffic flow patterns are becoming more software-based and hybrid in nature, creating a need for IBN solutions, the report says.
**SD-WAN is core to network success**
SD-WAN is important to the network edge, which brings computing, storage, and networking resources closer to users and devices. Cisco found many use cases driving the need for greater edge-computing capabilities. One of them is finding ways to control data from the billions of Internet of Things (IoT) endpoints being added to the network edge.
Out of the 29.3 billion networked devices in use by 2023, about half will support various IoT applications, per Cisco's report. As for machine-to-machine (M2M) communication, there will be 14.7 billion connections by 2023. Consumers will hold the biggest share (74%) of total devices and connections, with businesses claiming approximately 26%. However, the consumer share will grow at a slower rate than the business share.
How will enterprises manage to secure all networked devices and data? Cisco recommends creating a security policy that strikes a balance between data protection and ease of use. In other words, networks will have to be intelligent enough to grant access to the right users without putting them through a difficult authentication process.
**Network managers still struggle to lower operational costs**
Network managers continue to struggle with rising operational costs, as the explosion of devices and data outpaces IT resources. Cisco found nearly 95% of network changes are still performed manually, resulting in operational costs that outweigh network costs. That's where IT automation can help, enabled by SDN, intelligent network-edge enhancements, and unified domain controls.
In addition to exploring business-specific networking needs, Cisco outlined some trends in consumer and small-to-medium business (SMB) markets. Here are the key takeaways:
* **Next-generation applications**—built with artificial intelligence (AI) and machine learning—will create complex requirements and new business models. Mobile applications, specifically, will drive future consumer, SMB, and enterprise needs, with 299.1 billion mobile apps downloaded worldwide by 2023.
* **Mixed devices and connections** are enabling myriad M2M apps. Connected-home, video-surveillance, connected appliances, and tracking apps will make up 48% of M2M connections by 2023. Connected-car apps will be the fastest-growing category, with connected cities coming in second.
* **Accelerating broadband speeds** will affect traffic growth and use of high-bandwidth content and applications. Average broadband speeds will more than double globally from 45.9 Mbps (in 2018) to 110.4 Mbps (in 2023). Fiber-to-the-home (FTTH), high-speed DSL, and cable broadband adoption will contribute to the growth.
* **Wi-Fi will gain momentum** as devices and IoT connections increase. By 2023, the number of public Wi-Fi hotspots will grow to 628 million, up from 169 million in 2018. Wi-Fi 6 promises to boost speeds by up to 30%, compared to the current generation. More importantly, next-gen Wi-Fi will significantly improve real-time communications and high-definition video, impacting both consumer and business sectors.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3529989/key-takeaways-from-cisco-s-annual-internet-report.html
Author: [Zeus Kerravala][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: http://www.cisco.com/go/ciscoair
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html
[4]: https://www.cisco.com/c/en/us/solutions/executive-perspectives/annual-internet-report/air-highlights.html
[5]: https://www.cisco.com/c/en/us/solutions/service-provider/cloud-readiness-tool/index.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Most-used libraries, open source adoption, and more industry trends)
[#]: via: (https://opensource.com/article/20/3/libraries-5G-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Most-used libraries, open source adoption, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Most-used libraries revealed plus 10 things developers should be doing to keep their code secure][2]
> “The report begins to give us an inventory of the most important shared software and potential vulnerabilities and is the first step to understand more about these projects so that we can create tools and standards that results in trust and transparency in software," explained Jim Zemlin, executive director at the Linux Foundation, in a statement.
**The impact**: Importantly, there is also a great list of packages to check for backdoors here.
## [Survey: Open source adoption, quality gains][3]
> Overall, the survey finds there has been [a marked shift away from proprietary software][4]. Only 42% said that more than half of the software they use today is proprietary, down from 55% a year ago. Two years from now, only 32% said they expect proprietary software to account for more than half their portfolio. On average, respondents said 36% of their organization's software is open source, which is expected to increase to 44% in two years. A total of 77% said they would increase usage of open source software over the next 12 months.
**The impact**: There is a clear virtuous cycle of companies getting more comfortable with open source and more open source software being created. If there isn't already, there will be a rule 34 about open source software.
## [5G must go cloud-native from edge to core][5]
> A containerised core will be the heart of cloud-native 5G networks. Managing and scaling networking apps in containers using a modular microservices approach will help service providers to dynamically orchestrate and grow service capacity across a distributed architecture.
**The impact**: When you're building something complicated and reliable, you really can't look past starting with open source software. Unless you want to be in a foot race against "a Kawasaki" (that's a motorbike, right?).
## [High-performance object storage, Kubernetes, + why you can't containerize a storage appliance][6]
> True multi-tenancy isnt possible unless the storage system is extremely lightweight and able to be packaged with the application stack. If the storage system takes too many resources or contains too many APIs, it wont be possible to pack many tenants on the same infrastructure.
**The impact**: The title of this post is a challenge to someone much more skilled and knowledgeable than I.
_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/libraries-5G-more-industry-trends
Author: [Tim Hildred][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.theregister.co.uk/2020/02/20/linux_foundation_report/
[3]: https://devops.com/surevey-sees-open-source-adoption-quality-gains/
[4]: https://devops.com/devops-deeper-dive-devops-accelerates-open-source-innovation-pace/
[5]: https://www.5gradar.com/features/5g-must-go-cloud-native-from-edge-to-core
[6]: https://blog.min.io/high-performance-object-storage-with-kubernetes/

View File

@ -1,79 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Seawater, humidity inspire new ways to generate power)
[#]: via: (https://www.networkworld.com/article/3529893/seawater-humidity-inspire-new-ways-to-generate-power.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Seawater, humidity inspire new ways to generate power
======
Researchers around the globe are working on new ways to generate the huge amounts of power that will be needed for the shift to a data-driven society.
The possibility of a future power-availability crunch, spurred in part by a global increase in data usage, is driving researchers to get creative with a slew of new and modified ways to generate and store energy.
Ongoing projects include the use of seawater for batteries; grabbing ambient humidity; massive water storage systems for hydropower; and solar panels that work at night. Here are some details:
### Batteries based on seawater
Seawater will provide "super-batteries," says the University of Southern Denmark. Researchers there have been studying how to use sodium, which is abundant in seawater, as an alternative to lithium in batteries.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
"Sodium is a very readily available resource," the school says in a [press release][2], and it can be easily extracted from seawater. Lithium, on the other hand, is a limited resource that's mined only in a few places in the world, says research leader Dorthe Bomholdt Ravnsbæk of the department of physics, chemistry and pharmacy at the university. Batteries based on seawater would also alleviate the need for cobalt, which is used in lithium cells. The team in Denmark (working with Massachusetts Institute of Technology) believes it has come up with a new electrode material, based on manganese, that will make the seawater battery ultimately viable.
### Using ambient moisture to generate power
Humidity captured with bio-electronics could end up being a viable power source for sensors, say some scientists.
"Harvesting energy from the environment offers the promise of clean power for self-sustained systems," notes University of Massachusetts researchers [in an article published in Nature][3]. However, known technologies often have restrictive environmental requirements  solar panels that must be mounted outside, for example that limit their energy-producing potential.
Moisture harvesting with thin-film, protein nanowires doesn't have restrictive environmental requirements. Sustained voltages of about half a volt can be obtained from moisture present in normal, ambient air. "Connecting several devices linearly scales up the voltage and current to power electronics," the Amherst group claims. "Our results demonstrate the feasibility of a continuous energy-harvesting strategy that is less restricted by location or environmental conditions than other sustainable approaches."
### Seasonally pumped hydropower storage 
On a larger scale, inland water storage could solve renewable power issues, say scientists at the International Institute for Applied Systems Analysis.
One big problem with collecting power from the environment, as opposed to using fossil fuels, is where to store the on-the-fly electricity being generated. The Austrian organization believes that hydropower systems should be used to contain renewable energy. It's cheap, for starters. In addition, seasonal pumped hydropower storage (SPHS) is better than wind or solar, the group claims, because it not only generates the power in real time as it's needed but also isn't affected by variations—a windy day isn't required, for example.
SPHS operates by pumping water into dammed, river-adjacent reservoirs when water flow is high but power demand is low. Water is then allowed to flow out of the reservoir, through turbines—similar to hydroelectric—when energy demand increases. Electricity is thus created. The group, in a [press release][5] related to a study just released, says the technique is highly economical, even including required land purchases, excavation and tunneling.
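To make that dispatch rule concrete, here is a toy Python sketch of the pump-versus-generate logic described above. The thresholds, variable names, and units are illustrative assumptions, not figures from the IIASA study.

```python
# Toy model of the SPHS dispatch rule: pump when river flow is high and
# demand is low; generate when demand rises. Thresholds are assumptions.
HIGH_INFLOW_M3S = 500.0  # assumed "water flow is high" threshold
HIGH_DEMAND_MW = 800.0   # assumed "power demand is high" threshold

def dispatch(inflow_m3s: float, demand_mw: float,
             stored_m3: float, capacity_m3: float) -> str:
    if demand_mw >= HIGH_DEMAND_MW and stored_m3 > 0:
        return "generate"  # release reservoir water through turbines
    if inflow_m3s >= HIGH_INFLOW_M3S and stored_m3 < capacity_m3:
        return "pump"      # store surplus river water in the reservoir
    return "idle"

# Wet season with low demand stores water; a later demand peak releases it.
print(dispatch(650.0, 300.0, stored_m3=1e6, capacity_m3=5e6))  # pump
print(dispatch(200.0, 900.0, stored_m3=2e6, capacity_m3=5e6))  # generate
```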
### Nighttime, anti-solar cells
Contrary to popular belief, photovoltaic solar panels don't actually need full sun to function. Cloud cover allows some to work just fine, just not as well. Nighttime photovoltaic, however, is something more radical:
The earth should be used as a heat source, and the night sky a heat sink, say Jeremy Munday and Tristan Deppe of the department of electrical and computer engineering at University of California, Davis. They shared their idea for nighttime photovoltaic cells in an [abstract of a paper][6] published by American Chemical Society's ACS Photonics.
What they are suggesting is using thermoradiative photovoltaics, where deep space radiative cooling ([which I've written about before][7]) is combined with photovoltaics. Current is created as infrared light (heat, in other words) is radiated into extremely cold, deep space.
"Similar to the way a normal solar cell works, but in reverse," Munday says of their anti-solar panel concept, quoted in a [UC Davis news article][8]. 
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3529893/seawater-humidity-inspire-new-ways-to-generate-power.html
Author: [Patrick Nelson][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.sdu.dk/en/nyheder/Forskningsnyheder/skal_fremtidens_superbatterier_laves_af_havvand
[3]: https://www.nature.com/articles/s41586-020-2010-9
[5]: https://iiasa.ac.at/web/home/about/news/200219-seasonal-pumped-storage.html
[6]: https://pubs.acs.org/toc/apchd5/7/1
[7]: https://www.networkworld.com/article/3222850/space-radiated-cooling-cuts-power-use-21.html
[8]: https://www.ucdavis.edu/news/anti-solar-cells-photovoltaic-cell-works-night
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Chinese auto giant Geely plans a private satellite network to support autonomous vehicles)
[#]: via: (https://www.networkworld.com/article/3530336/chinese-auto-giant-geely-plans-a-private-satellite-network-to-support-autonomous-vehicles.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Chinese auto giant Geely plans a private satellite network to support autonomous vehicles
======
Geely is developing a satellite network to provide high-bandwidth wireless needed by on-board applications in self-driving vehicles.
What does a large automaker that's morphing into a mobile-technology company and heavily investing in autonomous vehicles need to add to its ecosystem? Probably connectivity, and that's likely why Chinese car giant Geely says it will be building its own satellite data network.
A need for “highly accurate, autonomous driving solutions” is part of what's driving the strategy, the company says in a [press release][1]. Geely, the largest car maker in China, whose assets include Volvo and a stake in Lotus, has begun building a test facility in Taizhou City where it will develop satellite models, the company says.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
“The creation of a truly smart, three-dimensional mobility ecosystem,” as the company describes its Geespace project, will include precise navigation, cloud computing, and high-speed Internet functions. Geely is investing $326 million in the project, [according to Reuters][3], which cites a statement from the company.
Over-the-air updating of vehicle software is a principal reason data networks will become prevalent in automobile technology. Historically, car companies haven't worried much about the speedy updating of end-users' systems, in part because they've liked getting customers back into the dealership to upsell service options and pitch new cars. A leisurely software patch while the customer hangs around drinking warm coffee and watching daytime soaps suits that purpose. However, autonomous cars are a different story: The safety of self-driving cars can't tolerate software vulnerabilities.
Control over vehicle positioning also comes into play. Knowing where the car is and where obstacles are is more important than in traditional vehicles. Lane-change and accident avoidance, for example, are autonomous-vehicle features that require high levels of accuracy.
“The Geespace low-orbit satellite network will offer much higher centimeter-accurate precision,” Geely says, comparing its proposed constellation with the U.S. government-owned Global Positioning System.
Data processing, artificial intelligence and infotainment onboard the vehicles all need fat networks, too. Former Intel CEO Brian Krzanich [said at a talk I attended a few years ago][4] that he thought cars would soon create 4,000 GB of data per hour of driving (roughly 1.1 GB per second) because of the number of sensors, such as cameras, that they'll be equipped with.
The Geely private satellite network is the first of its kind for an industrial use and joins [a trend in private wireless networking][6]. Private, terrestrial 5G networks and private LTE networks allow companies to control their own data and uptime, rather than relying on service providers. Mercedes-Benz is reportedly working on a private 5G network for privacy and security.
“As vehicles become more connected and integrated into the Internet of Things ecosystem, the demand for data has grown exponentially,” Geely says.
Geely will begin launching the Geespace satellites by the end of 2020.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3530336/chinese-auto-giant-geely-plans-a-private-satellite-network-to-support-autonomous-vehicles.html
Author: [Patrick Nelson][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: http://zgh.com/media-center/news/2020-03-03-1/?lang=en
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.reuters.com/article/geely-china-satellite-autonomous/chinas-geely-invests-326-mln-to-build-satellites-for-autonomous-cars-idUSL4N2AV45H
[4]: https://www.networkworld.com/article/3147892/one-autonomous-car-will-use-4000-gb-of-dataday.html
[6]: https://www.networkworld.com/article/3319176/private-5g-networks-are-coming.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Report: Most IoT transactions are not secure)
[#]: via: (https://www.networkworld.com/article/3530476/report-most-iot-transactions-are-not-secure.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Report: Most IoT transactions are not secure
======
Data gathered by security provider Zscaler shows that not only are most internet-of-things transactions unsecured, they are also unauthorized as IoT creeps in as shadow-IT devices.
The majority of [Internet of Things (IoT)][1] transactions don't use even basic security, and there is a great deal of unauthorized IoT taking place inside the perimeter of enterprise firewalls thanks to shadow IT, a new study finds.
Security vendor Zscaler analyzed nearly 500 million IoT transactions from more than 2,000 organizations over a two-week period. [The survey][2] found 553 different IoT devices from more than 200 different manufacturers, many of which had their security turned off.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][3]
The study was done on Zscaler's own Internet Access security service. It found the rate of IoT growth to be explosive: When it first started monitoring IoT traffic in May 2019, IoT traffic generated by its enterprise customer base was 56 million IoT transactions per month. By February 2020, that number had soared to 33 million transactions _per day_, or one billion IoT transactions per month, a 1,500% increase.
Zscaler is a bit generous in what it defines as enterprise IoT devices, counting everything from data-collection terminals, digital signage media players, industrial control devices, and medical devices to decidedly non-business devices like digital home assistants, TV set-top boxes, IP cameras, smart home devices, smart TVs, smart watches, and even automotive multimedia systems.
“What this tells us is that employees inside the office might be checking their nanny cam over the corporate network. Or using their Apple Watch to look at email. Or working from home, connected to the enterprise network, and periodically checking the home security system or accessing media devices,” the company said in its report.
Which is typical, to be honest, and let those who are without sin cast the first stone in that regard. What's troubling is that roughly 83% of IoT-based transactions are happening over plaintext channels, while only 17% are using [SSL][4]. The use of plaintext is risky, opening traffic to packet sniffing, eavesdropping, man-in-the-middle attacks and other exploits.
And there are a lot of exploits. Zscaler said it detects about 14,000 IoT-based malware exploits per month, a seven-fold increase over the previous year.
“Folks can keep their smart watches, smart closets, and whatever else they think is making them smart. Banning devices is not going to be the answer here. The answer is changing up the narrative on how we think about IoT devices from a security and risk standpoint, and what expectations we put on manufacturers to increase the security posture of these devices,” wrote Deepen Desai, Zscaler's vice president of security research, in a [blog post][5].
Desai said the solution is “taking a [zero-trust][6] mentality.” It may be a buzzword, but “it's about security people not trusting any person or device to touch the network—that is, until you know who the user is, what the device is, and whether that user and device are allowed to access the applications they're trying to reach.”
Naturally Zscaler sells such a solution, but he makes a valid point. This is an ages-old problem I have seen time and again; a hot new technology comes along, everyone rushes to embrace it, then they think about securing it later. IoT is no different.
Whatever your device, at least go into the settings and turn on SSL.
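As a quick sanity check of that advice, here is a small Python sketch that probes a device for TLS support. The device address is a hypothetical placeholder, and the probe deliberately skips certificate validation (many IoT devices ship self-signed certificates); it only answers whether the device can speak TLS at all.

```python
import socket
import ssl

def supports_tls(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if the device completes a TLS handshake on the given port."""
    context = ssl.create_default_context()
    # Probe for TLS support only; skip certificate validation because
    # self-signed IoT certificates would otherwise fail the handshake.
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}:{port} negotiated {tls.version()}")
                return True
    except (ssl.SSLError, OSError):
        return False

# Hypothetical address of an IP camera or similar device on your own network.
print(supports_tls("192.168.1.50"))
```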
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3530476/report-most-iot-transactions-are-not-secure.html
Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn).
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[2]: https://info.zscaler.com/resources-industry-iot-in-the-enterprise
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/2303073/lan-wan-what-is-transport-layer-security-protocol.html
[5]: https://www.zscaler.com/blogs/corporate/shining-light-shadow-iot-protect-your-organization
[6]: https://www.networkworld.com/article/3487720/the-vpn-is-dying-long-live-zero-trust.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world


@ -1,111 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I learned about burnout the hard way)
[#]: via: (https://opensource.com/article/20/3/burnout)
[#]: author: (Jason Hibbets https://opensource.com/users/jhibbets)
How I learned about burnout the hard way
======
Burnout can happen to anyone. Here are the 3 things I wish I knew before
I burned out.
![Light bulb][1]
In early 2017, I was mentally in a bad spot. It was the perfect storm of stress, the kind that no one asks for, but you deal with the hand you're dealt. Work was piling up to a point where I couldn't process all the things that were expected of me. I was training for spring half-marathons, which should have been stress relief, but I was putting too much pressure on myself to perform at a high level. And then on top of the everyday family obligations, a surgery in our household turned us into a one-car family and seriously added to the mounting pressure on me to provide and take care of the family.
Then I broke.
It wasn't one thing. It was the culmination of things. And it hit me from the blind side, unexpectedly. I never thought I would be a victim of burnout. I was aware of it and thoughtful about the community I was managing. But "not me," I thought to myself, "I've got this under control." I remember thinking that something was wrong; something was off. But I couldn't quite put my finger on the source.
I distinctly remember the day when I cried at work, crumbling under the pressure I was putting on myself. I consider myself a high performer in the office environment. I push myself to exceed the goals that my team co-creates because I want that success. I want the feeling that comes with it. But this experience was different. This wasn't a healthy win for my team or me. I felt like I let everyone down, including myself.
I was attending South by Southwest in Austin, Texas, where I was [presenting my first Ignite Talk][2] on applying open source principles to government—a talk that was well received by the audience. I remember practicing, and practicing, and practicing more the day before and the morning of my talk. I got that high that comes after delivering a great talk. I had a book signing at the City of Raleigh's Economic Development booth during the event, which was another emotional boost. Life was good. Upon reflection, that's when I started noticing signs of my burnout.
I didn't have much of an appetite. I was tired all the time. I was sleeping in, and not because of jet lag. I was exercising but wasn't getting the endorphins I was used to. And I wasn't motivated to do the work that I normally love to do. I was very blah and meh about getting work done or hanging out with people I love. These are all signs of depression and burnout.
After the trip, I scheduled my annual physical and talked to my doctor about my situation; my doctor recommended I see a psychologist. I sat on the couch and talked things out. I was diagnosed with severe anxiety, which was enough for me to know that I didn't want to know what true depression felt like.
I learned my lesson the hard way. I'd like to share my experience so that you can recognize the signs and avoid going down this path. And before we move on, I must say that it's perfectly fine to ask for help. Ask a trusted co-worker or friend for help or guidance. We're human, and we need to help each other through the ups and the downs.
### Three things to know about burnout
Work burnout is a form of depression where you are not motivated to do the things that are expected of you at your job. It's not the occasional slacking off or spring fever because the weather is nice. It's a buildup of emotional stress where you don't want to do what is asked of you at work. There are numerous factors that can lead to burnout.
#### Know the signs of burnout
Lesson number one about burnout is to know the signs. I mentioned some of the things I was experiencing, but there are many others. One thing I remember that was extremely abnormal for me (because I'm so social) is that I started to separate myself from my usual team activities and people.
* Hey Jason, want to grab lunch with us? Nope, I'm too busy.
* Hey Jason, Matt's in town, want to join us for happy hour? No. I've got work to do.
This is totally unlike me. I would normally have said yes to both those opportunities. According to the [Mayo Clinic][3], here are a few things to ask yourself if you think you are experiencing burnout:
* Do you drag yourself to work?
* Do you have trouble getting started with work?
* Are you cynical or critical at work?
* Have you become irritable or impatient with co-workers or customers?
* Do you lack the energy to be productive?
* Do you find it hard to concentrate?
* Do you lack satisfaction from your achievements?
* Do you feel disillusioned about your work?
* Are you using food, drugs, or alcohol to feel better or to simply not feel?
* Have your sleep habits changed?
* Are you troubled by unexplained headaches, stomach or bowel problems, or other physical complaints?
You can check your own burnout risk at [BurnoutIndex.org][4], an anonymous online questionnaire created in response to the [high level of burnout][5] in the tech industry.
#### Prevent burnout
The second lesson is to identify ways [to prevent burnout][6]. First, take time away from your job and plan time to unplug and unwind. This means planning vacations, staycations, or other time away from work. It's sometimes hard to unplug like this with the pressures and obligations we put on ourselves.
There are three different levels of paid time off (PTO):
1. **Best way to unplug:** I'm totally cut off, not logging in, not checking email.
2. **Decent way to unplug:** I'm kind of checking in, but not as responsive as normal.
3. **Meh way to unplug:** I'm available if you need me, I'll monitor email, but I'm away from normal office life.
Your situation will dictate which of these levels of time off will work for you. In my experience, you need at least two total check-outs a year. I typically have a blend of all three throughout the year, but since 2017, I have taken at least three week-long vacations each year to completely escape. It's working so far!
#### Manage stress
The third and final lesson is to manage stress effectively. My first go-to for stress management is exercise. I'm addicted to it. I work out pretty much every single day. And I like to mix it up: Cardio, weight lifting, swimming, running, cycling, surfing, and high-intensity interval training (HIIT) are staples in my exercise routine. I used to focus solely on running four to six half marathons a year, but I recently switched to triathlons. The multidisciplinary aspect of the activity has brought more joy and different challenges to my life.
Another way to reduce stress is to manage your time better. Time is our most precious resource. You've got to choose how you want to spend your time. Family, work, self, social? It's up to you. Find ways to work more efficiently, more effectively, and make sure that you put yourself first. It may sound selfish, but as I've learned from the airplane preflight safety videos, "you need to put your mask on first before helping others."
### Conclusion
Burnout can lead to fatigue, excessive stress, sadness, anger, irritability, insomnia, alcohol or substance misuse, heart disease, and other medical conditions—all things that are not good for humans or for your team at work. I hope you can use these tips to put yourself first, reduce stress, and prevent burnout.
* * *
_Jason Hibbets will present "[10 things I wish I knew before experiencing burnout][7]" at [SCaLE 18x][8], March 5-8, 2020, in Pasadena, Calif. This article is a preview for the talk and a way to share a bit of his experience._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/burnout
Author: [Jason Hibbets][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/jhibbets
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB (Light bulb)
[2]: https://schedule.sxsw.com/2017/events/PP96070
[3]: https://www.mayoclinic.org/healthy-lifestyle/adult-health/in-depth/burnout/art-20046642
[4]: https://burnoutindex.org/
[5]: https://opensource.com/article/19/11/burnout-open-source-communities
[6]: https://www.redhat.com/sysadmin/tips-avoiding-burnout
[7]: https://www.socallinuxexpo.org/scale/18x/presentations/10-things-i-wish-i-knew-experiencing-burnout
[8]: https://www.socallinuxexpo.org/scale/18x/


@ -1,193 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is Linux and Why There are 100s of Linux Distributions?)
[#]: via: (https://itsfoss.com/what-is-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
What is Linux and Why There are 100s of Linux Distributions?
======
When you are just starting with Linux, it's easy to get overwhelmed.
You probably know only Windows, and now you want to use Linux because you read that [Linux is better than Windows][1]: it is more secure, and you don't have to buy a license to use it.
But then, when you go about downloading and installing Linux, you learn that Linux is not a single entity. There are [Ubuntu][2], [Fedora][3], [Linux Mint][4], elementary, and hundreds of such Linux variants. The trouble is that some of them look just like one another.
If that's the case, why are there so many Linux operating systems? And then you also learn that Linux is just a kernel, not an operating system.
![Too Many Linux!][5]
It gets messy, and you may feel like pulling out your hair. As a person with a receding hairline, I would like you to keep your hair intact, so let me explain things in a way you can easily understand.
I am going to use an analogy to explain why Linux is just a kernel, why there are hundreds of Linux variants, and why, despite looking similar, they are different.
The explanation here may not be considered good enough for an answer in an exam or an interview, but it should give you a better understanding of the topic.
Apologies in advance!
My analogy may not be entirely correct from a mechanical point of view, either. I am not knowledgeable about engines, cars, and other mechanical matters.
But in my experience, I have noticed that this analogy helps people clearly understand the concept of Linux and operating systems.
Also, I have deliberately used the term Linux OS instead of Linux distribution so that newcomers don't start wondering what a distribution is.
### Linux is just a kernel
_**Linux is not an operating system; it's just a kernel.**_
That statement is entirely true. But how do you make sense of it? If you look into books, you'll find the Linux kernel structure described like this:
![Linux Kernel Structure][6]
That is absolutely correct; however, let's take a different approach. Think of operating systems as vehicles: any kind of vehicle, be it a motorbike, a car, or a truck.
What is at the core of a vehicle? An engine.
Think of the kernel as the engine. It's an essential part of the vehicle, and you cannot use a vehicle without one.
![The Operating System Analogy][7]
But you cannot drive an engine, can you? You need a lot of other things to interact with the engine and drive the vehicle. You need wheels, steering, gears, a clutch, brakes, and more to drive a vehicle on top of that engine.
Similarly, you cannot use a kernel on its own. You need lots of tools to interact with the kernel and use the operating system. These could be a shell, commands, a graphical interface (also called a desktop environment), etc.
This makes sense, right? Now that you understand this analogy, let's take it further so that you understand the rest of it.
#### Windows and other operating systems have a kernel too
The kernel is not something exclusive to Linux. You may not have realized it, but Windows, macOS, and other operating systems have a kernel underneath as well.
Microsoft Windows operating systems are based on the [Windows NT kernel][8]. Apple's macOS is based on the [XNU kernel][9].
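You can watch the kernel peek out from under each of these systems. As a small illustration using only Python's standard library, the same script reports the Linux kernel version on a Linux distribution, the XNU-derived Darwin kernel on macOS, and the NT kernel version on Windows:

```python
import platform

# The "system" field names the kernel family; "release" is its version.
info = platform.uname()
print(f"Kernel family : {info.system}")   # e.g. Linux, Darwin, Windows
print(f"Kernel release: {info.release}")  # e.g. 5.4.0-26-generic
print(f"Machine       : {info.machine}")  # e.g. x86_64
```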
### Think of operating systems as vehicles
Think of Microsoft as an automobile company that makes a general-purpose car (the Windows operating system) that is hugely popular and dominates the car market. They use their own patented engine that no one else can use. But these Microsoft cars do not offer much scope for customization; you cannot modify the engine on your own.
Now consider the Apple automobile company. They offer shiny-looking luxury cars at a premium price. If you have a problem, they have a premium support system where they might just replace the car.
Now comes Linux. Remember, Linux is just an engine (the kernel). But this Linux engine is not patented, and thus anyone is free to modify it and build cars (desktop operating systems), bikes (the small embedded systems in your toys, TVs, etc.), trucks (servers), or jet planes ([supercomputers][10]) on top of it. In the real world, no such engine exists, but accept it for the sake of this analogy.
![][11]
* kernel = engine
* Linux kernel = a specific type of engine
* desktop operating systems = cars
* server operating systems = heavy trucks
* embedded systems = motorbikes
* desktop environment = the body of the vehicle along with the interiors (dashboard and all)
* themes and icons = the paint job, rims, and other customizations
* applications = accessories you add for a specific purpose (like a music system)
### Why are there so many Linux OSes/distributions? Why do some look similar?
Why are there so many cars? Because several vehicle manufacturers use the Linux engine, and each of them offers many vehicles of different types and for different purposes.
Since the Linux engine is free to use and modify, anyone can use it to build a vehicle on top of it.
This is why Ubuntu, Debian, Fedora, SUSE, [Manjaro][12] and many other **Linux-based operating systems (also called Linux distributions or Linux distros)** exist.
You might also have noticed that these Linux operating systems offer different variants, yet they look similar. I mean, look at Fedora's default GNOME version and Debian's GNOME version. They do look the same, don't they?
![Fedora GNOME vs Debian GNOME: Virtually No Visual Difference][13]
The component that gives a Linux OS its look and feel is called the [desktop environment][14]. In our analogy, you can think of it as the combination of the outer body and matching interiors. This is what provides the look and feel of your vehicle, does it not?
It's from the exterior that you can sort cars into categories like sedan, SUV, hatchback, station wagon, convertible, minivan, van, compact car, 4×4, etc.
But these types of cars are not exclusive to a single automobile company. Ford offers SUVs, compact cars, vans, etc., and so do other companies like General Motors and Toyota.
![Vehicles of same type look similar even if they are from different automobile companies][15]
Similarly, distributions (Linux OSes) like Fedora, Ubuntu, Debian, and Manjaro also offer different variants in the form of GNOME, KDE, Cinnamon, MATE, and other [desktop environments][16].
Ford's SUV may look similar to Toyota's or Renault's SUV. Fedora's GNOME version may look similar to Manjaro's or Debian's GNOME version.
#### Some types of cars consume more fuel; some desktop environments need more RAM
You probably understand the usefulness of different types of cars. Compact cars are good for driving in cities, vans are good for long trips with the family, and 4×4s are good for adventures in jungles and other rough terrain. An SUV may look good and feel comfortable to sit in, but it consumes more fuel than a compact car, which might not be as comfortable.
Similarly, desktop environments (GNOME, MATE, KDE, Xfce, etc.) also serve purposes beyond just providing the looks of your Linux operating system.
GNOME gives you a modern-looking desktop, but it consumes more RAM and thus requires that your computer have more than 4 GB of RAM. Xfce, on the other hand, may look old or vintage, but it can run on systems with 1 GB of RAM.
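As a toy illustration of that rule of thumb, here is a hedged sketch (Linux-only, since it reads the kernel's `/proc/meminfo` interface; the thresholds are the ones quoted above, not hard limits):

```python
def total_ram_gb(path: str = "/proc/meminfo") -> float:
    """Read total RAM from the kernel's meminfo interface (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kb = int(line.split()[1])  # the value is reported in kB
                return kb / (1024 * 1024)
    raise RuntimeError("MemTotal not found")

ram = total_ram_gb()
# Thresholds from the article: GNOME wants more than 4 GB; Xfce runs on 1 GB.
suggestion = "GNOME" if ram > 4 else "Xfce (or another lightweight desktop)"
print(f"{ram:.1f} GB RAM detected; a reasonable desktop choice: {suggestion}")
```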
#### The difference between getting a desktop environment from a distribution and installing it on your own
As you start using Linux, you'll also come across suggestions that you can easily install other desktop environments on your current system.
Remember that Linux is a free world. You are free to modify the engine and customize the looks on your own, if you have the knowledge or experience, or if you are an enthusiastic learner.
Think of it as customizing cars. You may modify a Hyundai i20 to look like a Suzuki Swift Dzire. But it might not be the same as using a Swift Dzire.
When you are inside an i20 modified to look like a Swift Dzire, you'll find that the experience from the inside is not the same. The dashboard is different, and the seats are different. You may also notice that the exterior doesn't fit quite the same on the i20's body.
The same goes for switching desktop environments. You will find that you don't have the same set of apps in Ubuntu that you would get in Mint Cinnamon. A few apps will look out of place. Not to mention that you may find a few things broken, such as a missing network manager indicator.
Of course, you can put in the time, effort, and skill to make a Hyundai i20 look as much like a Swift Dzire as possible, but you may feel that getting a Suzuki Swift Dzire in the first place is the better idea.
This is the reason why installing Ubuntu MATE is better than installing Ubuntu (the GNOME version) and then [installing the MATE desktop][17] on it.
### Linux operating systems also differ in the way they handle applications
Another major criterion on which Linux operating systems differ from each other is package management.
Package management is basically how you get new software and updates on your system. It's up to your Linux distribution/OS to provide security and maintenance updates. Your Linux OS also provides the means of installing new software on your system.
Some Linux OSes provide all new software versions immediately after their release, while others take time to test them for your own good. Some Linux OSes (like Ubuntu) provide an easy way of installing new software, while you may find it complicated in others (like [Gentoo][18]).
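You can even see this divergence programmatically. Here is a small sketch (the distro-to-command table is illustrative, not exhaustive) that reads the standard `/etc/os-release` file modern distributions ship and prints the matching install command:

```python
def distro_id(path: str = "/etc/os-release") -> str:
    """Return the ID field (e.g. 'ubuntu', 'fedora') from os-release."""
    with open(path) as f:
        for line in f:
            if line.startswith("ID="):
                return line.split("=", 1)[1].strip().strip('"')
    return "unknown"

# Illustrative mapping: each distribution family ships its own package manager.
INSTALL_COMMANDS = {
    "ubuntu": "apt install",
    "debian": "apt install",
    "fedora": "dnf install",
    "opensuse": "zypper install",
    "manjaro": "pacman -S",
    "gentoo": "emerge",
}

d = distro_id()
print(f"Detected distro: {d}")
print(f"Install command: {INSTALL_COMMANDS.get(d, 'unknown')} <package>")
```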
Keeping with our analogy, consider installing software as adding accessories to your vehicle.
Suppose you have to install a music system in your car. You may have two options here. Your car could be designed in such a way that you just insert the music player, hear a click, and know it's installed. The second option could be to get a screwdriver and fix the music player in place with screws.
Most people would prefer the hassle-free click-lock installation system. Some people might take matters (and screwdrivers) into their own hands.
If an automobile company provides scope for installing lots of accessories in click-lock fashion in their cars, they will be preferred, won't they?
This is why Linux distributions like Ubuntu have more users: they have a huge collection of software that can be easily installed in a matter of clicks.
### Conclusion
Before I conclude this article, I'd also like to talk about support, which plays a significant role in choosing a Linux OS. For your car, you would like to have an official service center or other garages that service the automobile brand you own, wouldn't you? If the automobile company is popular, naturally, it will have more and more garages providing service.
The same goes for Linux. For a popular Linux OS like Ubuntu, there are official forums to seek support and a good number of websites and forums providing troubleshooting tips to fix your problems.
Again, I know this is not a perfect analogy, but it helps you understand things a bit better.
If you are absolutely new to Linux, did this article make things clearer for you, or are you more confused than before?
If you already know Linux, how would you explain Linux to someone from a non-technical background?
Your suggestions and feedback are welcome.
--------------------------------------------------------------------------------
via: https://itsfoss.com/what-is-linux/
Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-better-than-windows/
[2]: https://ubuntu.com/
[3]: https://getfedora.org/
[4]: https://linuxmint.com/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/too-many-linux-choices.png?ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/Linux_Kernel_structure.png?ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/operating_system_analogy.png?ssl=1
[8]: https://en.wikipedia.org/wiki/Architecture_of_Windows_NT
[9]: https://en.wikipedia.org/wiki/XNU
[10]: https://itsfoss.com/linux-runs-top-supercomputers/
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/linux-kernel-as-engine.png?ssl=1
[12]: https://manjaro.org/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/fedora-gnome-vs-debian-gnome.jpg?ssl=1
[14]: https://itsfoss.com/glossary/desktop-environment/
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/linux_suv_analogy.jpg?ssl=1
[16]: https://itsfoss.com/best-linux-desktop-environments/
[17]: https://itsfoss.com/install-mate-desktop-ubuntu/
[18]: https://www.gentoo.org/


@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The De-Googled Android Fork is Making Good Progress)
[#]: via: (https://itsfoss.com/gael-duval-interview/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
The De-Googled Android Fork is Making Good Progress
======
A couple of years ago, we covered the [Eelo project][1]. If you remember, the Eelo project was started by [Gael Duval][2], who once created Mandrake Linux. The goal of the project was to remove all Google services from Android to give you an [alternative mobile operating system][3] that doesn't track you or invade your privacy.
A lot has happened to Eelo since then. It's not called Eelo anymore; now it's called /e/. So, what's happening with this project? We talked to Gael Duval himself, and here's what he shared with us.
![][4]
_**Why did you create this Eelo or /e/ project in the first place?**_
**Gael:** In 2017, I realized that using Android or an iPhone, Google, and many mobile apps was not compatible with my personal privacy.
A later study by a US university confirmed this: using an iPhone or an Android phone sends between 6 and 12 MB of personal data to Google servers daily! And this doesn't count mobile apps.
So I looked for reasonable alternatives to iPhone and Android phones but didn't find any. Either I found options for hobbyists, like Ubuntu Touch, that were not compatible with existing apps and not fully unGoogled either, or there were alternative ROMs with all the Google fat inside and no associated basic online services that could be used without tweaking the system.
Therefore, an idea came to mind: why not fork Android, remove all the Google features, even low-level ones such as the connectivity check, DNS…, replace the default apps with more virtuous apps, add basic online services, and integrate all this into a consistent form that could be used by Mum and Dad and anyone without tech or expert knowledge?
_**How is it any different from other custom Android ROMs?**_
**Gael:** It doesn't send a single bit of data to Google, and it is, and will be, more and more privacy-focused.
Low level: we remove any Android feature that sends data to Google servers. Even the connectivity check when you start the smartphone! To my knowledge, no other Android ROM does this at the moment. We change the default DNS settings and offer users an option to set the DNS of their choice. We change the NTP (automatic time configuration) settings to generic default NTP servers, because there is actually no reason to use Google's NTP servers. Then we remove Google services and replace them with a software layer called microG, which can still receive push notifications and provide geolocation data to apps (using Mozilla's geolocation service).
Then we replace the default apps with non-Google apps, including the maps application, mail, etc. Most are open source applications, and I can say that there is a 99% probability that all will be open source before the end of this year.
Then we add our own Android application installer, with close to 80,000 available applications at the moment.
We provide a different web browser, a fork of Chromium where all features that send data to Google are removed, and where the default search engine is not Google…
And we operate online services:
* search, using a meta-search system that we have improved for a better user experience
* an online drive with encrypted data, calendar, etc., using a modified version of NextCloud
* mail…
And we provide a unique identifier that can be used to access all those services, either on the web or from the /e/ OS system, by logging in once. Then you can sync all your data, calendar, email, etc. between your smartphone and your personal /e/ cloud (it can also be self-hosted).
The purpose of the project is to provide a normal, ready-to-use, and attractive "digital life" to users, without sending all their personal data to Google.
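As an aside, the NTP point above is easy to verify for yourself: NTP is an open protocol, and any public server speaks it. Here is a minimal SNTP query in Python (the pool server below is an arbitrary public choice, not necessarily /e/'s actual default):

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"  # any public NTP server will answer the same way
NTP_DELTA = 2208988800       # seconds between 1900-01-01 and the Unix epoch

def ntp_time(server: str = NTP_SERVER) -> float:
    # Minimal SNTP request: LI=0, VN=3, Mode=3 (client) in the first byte.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    # Transmit timestamp: big-endian seconds field at bytes 40-43.
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_DELTA

print(time.ctime(ntp_time()))
```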
_**If it is completely ungoogled, how do users install new apps? Do you have your own app store? If yes, how can we trust that these apps don't spy on user data?**_
**Gael:** Yes, we have our own application installer, with about 80,000 applications. And we analyse each application to reveal the number of trackers it contains, and we display this information to our users for each application. We are also adding Progressive Web Apps to this application installer soon.
/e/ OS is about freedom of choice. We want the core system to be better, and then offer as many options as possible to users, informing them as much as possible. In short: they can still install any application they need. The next step will be to offer a feature to actually block the trackers used in applications.
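Conceptually, the tracker reporting Gael describes boils down to matching an app's observed behavior against a list of known tracker signatures. A toy sketch (the tracker list and the app data are invented for illustration; this is not /e/'s actual analysis pipeline):

```python
# Invented example data: endpoints an app was observed contacting, checked
# against a small list of well-known tracker domains.
KNOWN_TRACKERS = {"graph.facebook.com", "app-measurement.com", "crashlytics.com"}

def tracker_report(app_name, contacted_domains):
    """Return a human-readable count of known trackers an app talks to."""
    found = sorted(set(contacted_domains) & KNOWN_TRACKERS)
    return f"{app_name}: {len(found)} tracker(s) detected {found}"

print(tracker_report("example-weather-app",
                     {"api.weather.example", "app-measurement.com"}))
```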
_**What is the target user base for /e/? Can the average Joe use it without much trouble?**_
![][5]
**Gael:** We started with tech-savvy users, and we're expanding the user base to people with less technical knowledge. At the moment, our typical user base is a mix of tech-savvy users who can flash a smartphone with /e/ OS and people who are very concerned about Google and their data privacy but have very limited technical knowledge. For those people, we have some smartphones pre-installed with /e/ OS for sale, on high-grade refurbished hardware.
We are also announcing an "/e/ easy installer" this week that will make the flashing process much easier: you plug the smartphone into a PC and launch a dedicated application that does most of the job.
Then, the next step will be to expand our target users to a more global market, once we find the right partners. But clearly, there is a demand for something other than the Apple-Google duopoly in the worldwide mobile market.
_**Initially the project was named eelo, and now it is called /e/ or the [e foundation][6]. Personally, I find the name /e/ odd and not easily recognizable. Why did you change the project name?**_
**Gael:** We were "attacked" by a company called "eelloo". They considered that "eelo" would interfere with their business. They are in the HR business-solutions space but registered their trademark in all the classes related to mobile OSes, smartphones, etc. This is silly and a shame, but we had no money to defend ourselves strongly at the time.
However, the /e/ name will be abandoned for something else quite soon.
_**It's been a couple of years since the initial launch. How do you see the adoption of /e/?**_
**Gael:** We launched the first beta 18 months ago, and we started selling smartphones with /e/ a little more than six months ago. Adoption is growing a lot at the moment; we have to add terabytes of online storage regularly!
Also, with the /e/ installer arriving and official partnerships with some mobile hardware manufacturers in the pipeline, this is going to accelerate a lot this year.
However, this is not surprising; privacy concerns are rising for both individuals and corporations, and I think the rejection of Google is also trending.
_**What are your future plans to grow /e/?**_
**Gael:** The growth is very natural. There is a strong community of users who realize how unique our approach is. These guys are contributing, supporting us and talking a lot about the project.
With the easy installer coming along and strategic partnerships with hardware makers, this is going to accelerate a lot.
Also, and this is more personal, I think that there is a natural connection between /e/ OS and the Linux world. OK, /e/ OS is based on Android, but it's still a Linux kernel, it's the same spirit, and it's open source… So I'd really like to have more natural integration between my /e/ smartphone and my Linux desktop. There should be some nice features added in this spirit in the next versions of /e/ OS.
_**What can /e/ users and our readers do to help e foundation?**_
**Gael:** Join us, talk about what we are doing, send your feedback, organize some meetups… Help improve the /e/ Wikipedia page, which is very poor and doesn't represent at all what we are actually doing.
We also have a [permanent crowdfunding campaign where users can support the project financially][7] and pay for the servers, etc. And, in addition to giving back in terms of an open source product, we send cool stuff in return :)
--------------------------------------------------------------------------------
via: https://itsfoss.com/gael-duval-interview/
Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/eelo-mobile-os/
[2]: https://en.wikipedia.org/wiki/Ga%C3%ABl_Duval
[3]: https://itsfoss.com/open-source-alternatives-android/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/e-os-interview.jpg?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/e-foundation-smartphones.jpg?resize=800%2C590&ssl=1
[6]: https://e.foundation/
[7]: https://e.foundation/donate/


@ -1,60 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data-center power consumption holds steady)
[#]: via: (https://www.networkworld.com/article/3531316/data-center-power-consumption-holds-steady.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Data-center power consumption holds steady
======
While computing capacity has exploded in recent years, power consumption is growing more slowly thanks to greater energy efficiency.
A predicted explosion in power consumption by data centers has not manifested thanks to advances in power efficiency and, ironically enough, the move to the cloud, according to a new report.
The [study][1], published in the journal _Science_ last week, notes that while there has been an increase in global data-center energy consumption over the past decade, this growth is negligible compared with the rise of workloads and deployed hardware during that time.
Data centers accounted for about 205 terawatt-hours of electricity usage in 2018, which is roughly 1% of all electricity consumption worldwide, according to the report. (That's well below the often-cited stat that data centers consume 2% of the world's electricity). The 205 terawatt-hours represent a 6% increase in total power consumption since 2010, but global data center compute instances rose by 550% over that same time period.
To drive that point home: Considerably more compute is being deployed, yet the amount of power consumed is holding steady.
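The implied efficiency gain falls straight out of the report's numbers. A back-of-the-envelope calculation (mine, not a figure from the paper):

```python
energy_growth = 1.06    # total data-center energy, 2010 -> 2018 (+6%)
instance_growth = 6.50  # compute instances, 2010 -> 2018 (+550%)

# Energy consumed per compute instance in 2018, relative to 2010.
per_instance = energy_growth / instance_growth
print(f"Energy per compute instance: {per_instance:.0%} of the 2010 level")
# Roughly 16%, i.e. about a six-fold efficiency improvement per instance.
```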
The paper cites a number of reasons for this. For starters, hardware power efficiency is vastly improved. The move to server virtualization has meant a six-fold increase in compute instances with only a 25% increase in server energy use. And a shift to faster and more energy-efficient port technologies has brought about a 10-fold increase in data center IP traffic with only a modest increase in the energy use of network devices.
Even more interesting, the report claims the rise of and migration to hyperscalers has helped curtail power consumption. 
Hyperscale data centers and cloud data centers are generally more energy efficient than company-owned data centers because there is greater incentive for energy efficiency. The less power Amazon, Microsoft, Google, etc., have to buy, the more their bottom line grows. And hyperscalers are big on cheap, renewable energy, such as hydro and wind.
So if a company trades its own old, inefficient data center for AWS or Google Cloud, it reduces the overall power draw of data centers as a whole.
"Total power consumption held steady as computing output has risen because of improved efficiency of both IT and infrastructure equipment, and a shift from corporate data centers to more efficient cloud data centers (especially hyperscale)," said Jonathan Koomey, a Stanford professor and one of the authors of the research, in an email to me. He has spent years researching data-center power and is an authority on the subject.
"As always, the IT equipment progresses most quickly. In this article, we show that the peak output efficiency of computing doubled every 2.6 years after 2000. This doesn't include the reduced idle power factored into the changes for servers we document," he added.
Koomey notes that there is additional room for efficiency improvements to cover the next doubling of computing output over the next few years but was reluctant to make projections out too far. "We avoid projecting the future of IT because it changes so fast, and we are skeptical of those who think they can project IT electricity use 10-15 years hence," he said.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3531316/data-center-power-consumption-holds-steady.html
Author: [Andy Patrizio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://science.sciencemag.org/content/367/6481/984
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world


@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Setting yourself up for success while working remotely)
[#]: via: (https://opensource.com/article/20/3/remote-work)
[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych)
Setting yourself up for success while working remotely
======
Whether you are new to working remotely or are a seasoned veteran, here
are tips to improve the experience.
![Woman sitting in front of her computer][1]
Remote work is not easy. While there are perks to being remote, it is a mind shift and takes some getting used to. Talk to anybody who works remotely, and they will likely tell you that some of the biggest challenges of remote work are _**feeling disconnected**_ and _**a loss of regime**_. Here are my tips, gathered from 10 years as a remote worker, on how to set yourself and your team up to work remotely successfully.
### Environment and regime
1. **"Commute" to and from work**. I'm not saying go to the extremes in this [_Audible commercial_][2], but I suggest leaving your home to go for a walk or a bike ride before you begin working. Do the same at the end of the day. I take my dog, Barley (pictured here), for a walk at the start and end of most days. ![Here's my dog, Barley, who gets regular walks during my morning "commute"][3]
2. **Get dressed.** Don't be tempted to work in your PJs, because this blurs the line between work and home. People sometimes say that being able to work in your pajamas is a perk of remote work, but studies show this isn't great advice. You don't need to put on a suit and tie, but do change out of your pajamas; otherwise, before you know it, you will have gone three days without changing your clothes. For more reasons why, check out this article from [_Fast Company_][4].
3. **Eat lunch away from your desk**. This is good advice even if you aren't working remotely.
4. **Stick to a schedule**. It's easy to start working as soon as you wake up and continue late into the evening. Set a start time and an end time for your day and stick to it. When I stop work for the day, I try to close my office door. Configure your working hours in every app you use so others know when you are available. Don't use your work computer outside of working hours if you can.
5. **Set up a dedicated work environment**, if possible. Try not to work from the kitchen table. This blurs the lines between home and work. My office (pictured below) also has space for comfortable seating and desk seating so I can switch between the two.
6. **Check in with your team** or friends in the morning. Don't mistake this for a daily stand-up; this is more like saying _hi_ when you're getting coffee.
7. **Sign off at the end of the day**. This means both letting your team members know you are leaving and actually walking away from where you are working. Close the laptop. Turn off Slack notifications, etc.
8. **Keep people posted** if you are leaving early or unavailable. It helps build trust.
9. **Invest in a headset** if you will be doing a lot of calls. If more than one person in your household works remotely, they will thank you for this. It is no fun listening to somebody else's conference call.
10. **Turn on your video** when on a video call to help you feel connected and stay engaged. When your video is disabled, it is easy to wander off and get distracted by Slack (or its [open source alternatives][5]), Twitter, or any other number of distractions.
11. **Set up a weekly, casual remote chat**. At my company, we meet on Friday mornings via Zoom (or the open source alternative, [Jitsi][6]). This chat is open to remote and non-remote staff. It is an open call to talk about whatever is on our minds. Topics have ranged from music preferences to parenting challenges to what people are doing over the weekend.
12. **Set up chat roulette** if one-on-one interaction is more your thing. There are applications on most chat platforms that randomly pair two employees to chat and get to know one another.
13. **Ask for help**. Chat with your colleagues if you're stuck, need encouragement, or need to vent. You are not alone. You are a member of a team. You can still grab a coffee or go for a walk with a teammate remotely.
![Heres my home office set up][7]
Everybody is different. These tips work for me; I'd love to hear your advice below!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/remote-work
Author: [Dawn Parzych][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/dawnparzych
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA (Woman sitting in front of her computer)
[2]: https://www.youtube.com/watch?v=oVCJhZhrJ04
[3]: https://opensource.com/sites/default/files/resize/pictures/barleywalking_0-300x379.png (Here's my dog, Barley, who gets regular walks during my morning "commute")
[4]: https://www.fastcompany.com/3064295/what-happened-when-i-dressed-up-to-work-from-home-for-a-week
[5]: https://opensource.com/alternatives/slack
[6]: https://meet.jit.si/
[7]: https://opensource.com/sites/default/files/pictures/6151f75e-bbeb-4e64-a5ca-0bbe4e981054.jpeg (Heres my home office set up)


@ -1,92 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Next wave of digital transformation requires better security, automation)
[#]: via: (https://www.networkworld.com/article/3531448/next-wave-of-digital-transformation-requires-better-security-automation.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
Next wave of digital transformation requires better security, automation
======
F5 report highlights the challenges of digital initiatives, including a struggle to secure multi-cloud environments and a lack of IT skills required to extend automation efforts.
Digital transformation is a top-of-mind priority for CIOs who want innovative ways to deploy applications and run IT operations. In today's digital economy, companies that don't depend on applications to support their business are rare. On the contrary, most companies have some kind of digital transformation initiative in place, which is driving the adoption of cloud-native architectures and application services.
A new report from application delivery vendor F5 Networks finds businesses are entering the second phase of digital transformation by automating more parts of their networks. Based on a survey of nearly 2,600 senior leaders globally—from various industries, company sizes, and roles—the [_2020 State of Application Services Report_][1] uncovered five key trends shaping the application landscape.
### 1. Every company is undergoing a digital transformation
Having conducted its annual survey for six years in a row, F5 consistently found IT optimization and business process optimization to be the top reported benefits for companies with digital transformation initiatives. At this point, most companies have mastered automation of individual tasks by digitizing IT and business processes—which the report classifies as phase one of digital transformation.
Moving on to phase two, companies are shifting their focus to reducing complexity and supporting apps with a consistent set of services. External-facing apps make up a portion (45%) of an average company's portfolio and help generate revenue. Yet, the internal-facing ones—like productivity and operational apps—are vital to digitizing business processes.
Modern, microservices/cloud-native apps now make up approximately 15% of a company's portfolio, compared to 11% for mainframe-hosted apps. This mix of new and older generation apps indicates that businesses are dealing with a diverse app portfolio. As more businesses adopt an application-centric mindset, they can start managing their app portfolio like a business asset.
### 2. Organizations struggle to secure multi-cloud environments
Every company has different needs, which is why most choose the best cloud for their applications on a case-by-case basis, the report finds. Businesses are adopting cloud platforms at a high rate, with 27% planning to have more than half of their applications in the cloud by the end of 2020.
For 87% of companies, multi-cloud is the preferred choice due to its flexibility. Multi-cloud typically includes a mix of infrastructure-as-a-service (IaaS) environments, so a company can choose to deploy multiple software-as-a-service (SaaS) or platform-as-a-service (PaaS) cloud services.
However, multi-cloud environments pose challenges for businesses when it comes to maintaining security, policy, and compliance, according to the report's respondents. Companies are dealing with applications that reach hundreds to millions of end users—each one with its own security risk. Meanwhile, many don't have the expertise to protect the apps.
A whopping 71% of companies surveyed by F5 reported a skills gap in security. Only 45% of companies are confident that they're able to secure apps in the public cloud, while 62% think they can protect apps in an on-premises data center. The most confident companies have consistency across multiple architectures and multiple infrastructures, ensuring security and performance of all apps in their portfolio.
### 3. Automation is key to boosting efficiency
Manual processes may have been the norm for legacy networks, but modern networks require automation. That's why most companies (73%) have embraced it.
In this year's report, F5 observed more consistent use of automation in the deployment pipeline than in previous years. Automation of application infrastructure, network, application services, and security is nearly equal across the board at approximately 40% for survey respondents. (See also: [Enterprises being won over by speed, effectiveness of network automation][4])
Interestingly, more companies are choosing open source and continuous integration/continuous delivery (CI/CD) tools for automation over proprietary vendor solutions. The report found there is a need for open ecosystems with the increasing use of CI/CD tools, as businesses search for ways to address problems that slow down automation. Companies said their biggest struggles are with skill gaps in enterprise IT, integrating toolsets across vendors and devices, and the cost of new tools.
### 4. Security app services are most widely deployed
Modern networks require application services—a pool of services necessary to deploy, run, and secure apps across on-premises or multi-cloud environments. Today, 69% of companies are using 10 or more application services, such as ingress control and service discovery. Ingress control is a relatively new application service that has become essential to companies with high API call volumes. It's one of many examples of the growing adoption of microservices-based apps.
Security services remain the most widely deployed, with these in particular dominating the top five: SSL VPN and firewall services (81%); IPS/IDS, antivirus, and spam mitigation (77%); load balancing and DNS (68%); and web application firewalls (WAF) and DDoS protection (each at 67%).
Over the next 12 months, the evolution of cloud and modern app architectures will continue to shape application services. At the top of the list (41%) is software-defined wide-area networking ([SD-WAN][5]). SD-WAN enables software-based provisioning from the cloud to meet modern application demands. Early SD-WAN deployments focused on replacing costly multi-protocol label switching (MPLS), but there is now greater emphasis on security as a core requirement for SD-WAN.
### 5. DevOps picks up responsibility for app services
Although IT operations is still primarily responsible for deploying app services, the report revealed a shift taking place from single-function to ops-oriented team structures, such as SecOps and DevOps.
The responsibility for securing, optimizing, and managing apps by DevOps teams is growing, fueled by cloud and container-native applications. Compared to just a few years ago, businesses have developed a preference for containers over virtual appliances for app services. Container preference grew from just 6% in 2017 to 18% in 2020, surpassing virtual machines (15%) and hardware (15%).
Regardless of preference, the challenges of modern app architectures call for collaboration between teams. IT operations and DevOps don't have to be mutually exclusive and can work together to address those challenges.
### What's Next?
Senior leaders surveyed in the report see big data analytics coming into play in the next two to five years.
Companies today only use a small portion of their data and aren't taking full advantage of it. In the third phase of the digital transformation, businesses can begin leveraging data captured by apps via artificial intelligence (AI)-powered analytics. The harnessed data can provide valuable insights to improve business processes.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3531448/next-wave-of-digital-transformation-requires-better-security-automation.html
Author: [Zeus Kerravala][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.f5.com/state-of-application-services-report
[2]: https://www.networkworld.com/article/3518992/top-10-underused-sd-wan-features.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3530275/enterprises-being-won-over-by-speed-effectiveness-of-network-automation.html
[5]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


@ -1,81 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (As the networks evolve enterprises need to rethink network security)
[#]: via: (https://www.networkworld.com/article/3531929/as-the-network-evolves-enterprises-need-to-rethink-security.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
As the networks evolve enterprises need to rethink network security
======
Q&A: John Maddison, executive vice president of products for network security vendor Fortinet, discusses how to deal with network security in the digital era.
_Digital innovation is disrupting businesses. Data and applications are at the hub of new business models, and data needs to travel across the extended network at increasingly high speeds without interruption. To make this possible, organizations are radically redesigning their networks by adopting multi-cloud environments, building hyperscale data centers, retooling their campuses, and designing new connectivity systems for their next-gen branch offices. Networks are faster than ever before, more agile and software-driven. They're also increasingly difficult to secure. To understand the challenges and how security needs to change, I recently talked with John Maddison, executive vice president of products for network security vendor Fortinet._
**ZK: As the speed and scale of data escalate, how do the challenges to secure it change?**
JM: Security platforms were designed to provide things like enhanced visibility, control, and performance by monitoring and managing the perimeter. But the traditional perimeter has shifted from being a very closely monitored, single access point to a highly dynamic and flexible environment that has not only expanded outward but inward, into the core of the network as well.
Today's perimeter not only includes multiple access points, the campus, the WAN, and the cloud, but also IoT, mobile, and virtual devices that are generating data, communicating with data centers and manufacturing floors, and literally creating thousands of new edges inside an organization. And with this expanded perimeter, there are a lot more places for attacks to get in. To address this new attack surface, security has to move from being a standalone perimeter solution to being fully integrated into the network.
This convergence of security and networking needs to cover SD-WAN, VPN, Wi-Fi controllers, switching infrastructures, and data center environments something we call security-driven networking. As we see it, security-driven networking is an essential approach for ensuring that security and networking are integrated together into a single system so that whenever the networking infrastructure evolves or expands, security automatically adapts as an integrated part of that environment. And it needs to do this by providing organizations with a new suite of security solutions, including network segmentation, dynamic multi-cloud controls, and [zero-trust network access][3]. And because of the speed of digital operations and the sophistication of today's attacks, this new network-centric security strategy also needs to be augmented with AI-driven security operations.
The perimeter security devices that have been on the market weren't really built to run as part of the internal network, and when you put them there, they become bottlenecks. Customers don't put these traditional security devices in the middle of their networks because they just can't run fast enough. But the result is an open network environment that can become a playground for criminals that manage to breach perimeter defenses. It's why the dwell time for network malware is over six months.
As you combine networking applications, networking functionality, and security applications together to address this challenge, you absolutely need a different performance architecture. This can't be achieved using the traditional hardware most security platforms rely on.
**ZK: Why can't traditional security devices secure the internal network?**
JM: They simply aren't fast enough. And the ones that come close are prohibitively expensive… For example, internal segmentation not only enables organizations to see and separate all of the devices on their network but also to dynamically create horizontal segments that support and secure applications and automated workflows that need to travel across the extended network. Inside the network, you're running at 100 gigs, 400 gigs, that sort of thing. But the interface for a lot of security systems today is just 10 gigs. Even with multiple ports, the device can't handle much more than that without having to spend a fortune… In order to handle today's capacity and performance demands, security needs to be done at network speeds that most security solutions cannot support without specialized content processors.
**ZK: Hyperscale data centers have been growing steadily. What sort of additional security challenges do these environments face?**
JM: Hyperscale architectures are being used to move and process massive amounts of data. A lot of the time, research centers need to send a payload of over 10 gigabytes (one packet that's 10 gigabytes) to support advanced rendering and modeling projects. Most firewalls today cannot process these large payloads, also known as elephant flows. Instead, they often compromise on their security to let them flow through. Other hyperscale examples include financial organizations that need to process transactions with sub-second latency, or online gaming providers that need to support massive numbers of connections per second while maintaining a high-quality user experience. … [Traditional security platforms] will never be able to secure hyperscale environments, or even worse, the next generation of ultra-fast converged networks that rely on hyperscale and hyperconnectivity to run things like smart cities or smart infrastructures, until they fundamentally change their hardware.
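Some quick arithmetic shows why such payloads punish a 10Gbps inspection interface (a back-of-the-envelope sketch; real appliances add inspection overhead on top of raw transfer time):

```python
payload_bytes = 10 * 1024**3  # one 10 GB "elephant" payload
links_gbps = {"10 Gbps security port": 10,
              "100 Gbps network fabric": 100,
              "400 Gbps network fabric": 400}

for name, gbps in links_gbps.items():
    seconds = payload_bytes * 8 / (gbps * 1e9)
    print(f"{name}: {seconds:.2f} s just to move the payload")
# A 10 Gbps appliance is tied up for ~8.6 s per flow, while the surrounding
# network could move the same data in well under a second.
```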
**ZK: Do these approaches introduce new risks or increase the existing risk for these organizations?**
JM: They do both. As the attack surface expands, existing risks often get multiplied across the network. We actually see more exploits in the wild targeting older vulnerabilities than new ones. But cybercriminals are also building new tools designed to exploit cloud environments and modern data centers. They are targeting mobile devices and exploiting IoT vulnerabilities. Some of these attacks are simply revisions of older, tried and true exploits. But many are new and highly sophisticated. We are also seeing new attacks that use machine learning and rely on AI enhancements to better bypass security and evade detection.
To address this challenge, security platforms need to be broad, integrated, and automated.
Broad security platforms come in a variety of form factors so they can be deployed everywhere across the expanding network. Physical hardware enhancements, such as our [security processing units], enable security platforms to be effectively deployed inside high-performance networks, including hyperscale data centers and SD-WAN environments. And virtualized versions need to support private cloud environments as well as all major cloud providers through thorough cloud-native integration.
Next, these security platforms need to be integrated. The security components built into a security platform need to work together as a single solution, not the sort of loose affiliation most platforms provide, to enable extremely fast threat intelligence collection, correlation, and response. That security platform also needs to support common standards and APIs so third-party tools can be added and supported. These platforms also need to be able to work together, regardless of their location or form factor, to create a single, unified security fabric. It's important to note that many cloud providers have developed their own custom hardware, such as Google's TPU, Amazon's Inferentia, and Microsoft's Corsica, to accelerate cloud functions. As a result, hardware acceleration on physical security platforms is essential to ensure consistent performance for data moving between physical and cloud environments.
And finally, security platforms need to be automated. Support for automated workflows and AI-enhanced security operations can significantly accelerate the speed of threat detection, analysis, and response. But like other processing-intensive functions, such as decrypting traffic for deep inspection, these functions also need specialized and purpose-built processors or they will become innovation-killing bottlenecks.
**ZK: What's next for network security?**
JM: This is just the start. As networking functions begin to converge even further, creating the next generation of smart environments (smart buildings, smart cities, and smart critical infrastructures), the lack of viable security tools capable of inspecting and protecting these hyperfast, hyperconnected, and hyper-scalable environments will seriously impact our digital economy and way of life.
Security vendors need to understand this challenge and begin investing now in developing advanced hardware and security-driven networking technologies. Organizations can't wait for vendors to catch up to secure the networks of tomorrow; their networks are being left exposed right now because the software-based security solutions they have in place are just not adequate. And it's up to the security industry to step up and solve this challenge.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3531929/as-the-network-evolves-enterprises-need-to-rethink-security.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3487720/the-vpn-is-dying-long-live-zero-trust.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -1,108 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to be the right person for DevOps)
[#]: via: (https://opensource.com/article/20/3/devops-relationships)
[#]: author: (Josh Atwell https://opensource.com/users/joshatwell)
How to be the right person for DevOps
======
Creating healthy relationships is the essential ingredient in DevOps
success.
![Team meeting][1]
In my kitchen, we have a sign that reads "Marriage is more than finding the right person. It is being the right person." It serves as a great reminder of the individual responsibility everyone has in any healthy relationship. As organizations adopt [DevOps][2] as a model of developing and delivering value to customers, the impact of healthy relationships is extremely important for success.
![Marriage sign][3]
Historically, the relationship between development and operations teams has been unhealthy. Poor communication, limited empathy, and a history of mistrust make merging these teams into a tighter operating model challenging, to say the least. And the frustration is not entirely unfair on either side.
Developers have long been frustrated by lead times and processes put in place by the operations organization. They just want to work, and they often see operations as an anchor on the ship of progress.
Operations professionals have long been frustrated by the impatience and lack of clear requirements that come from development teams. They are often confused about why those teams are not able to use the available services and processes. They see developers as a liability to their ability to maintain stable services for customers and the business.
A lingering gap here is that each side has been focused on protecting its own perspective. They emphasize how the other team is not being what _they_ need, and they never question whether they, too, could be doing something different.
In DevOps, all sides must frame their role in the organization based on how they add value to others.
There are a few things that everyone, including managers and leaders, can do right away to become a better contributor and partner in their DevOps relationships.
### Communicate
Most professionals in an organization adopting DevOps find themselves needing to work closely with new people, ones they have had limited exposure to in the past. It is important for everyone to take time to get to know their new teammates and learn more about their concerns, their interests, and also their preferred communication style.
Successful communication in new relationships is often built on simply listening more and talking less. Our natural tendency is to talk about ourselves. Most people love sharing what they know best. However, it is extremely important to make more room to listen.
Hearing someone is not the same as listening to them. I'm confident that we have all been in the situation where someone expresses a concern that we do not entirely internalize. Also, merely hearing does not encourage people to share, or to share as completely as they should.
![Stick figures hearing][4]
It is important to listen actively. Repeat what you hear, and seek validation that what you repeat is what they wanted you to understand. Once you understand their concern, it is important to make your initial response a selfless one. Even if you can't completely solve the problem, demonstrate sympathy and help the person move towards a solution.
![Stick figures listening][5]
### Selflessness
Another key relationship challenge as organizations adopt DevOps is developing a perspective of selflessness. In DevOps, most people are responsible for delivering value to a wide variety of other people. Each person should begin by considering how their actions and work impact other people.
This service mindset carries forward when you become more sensitive to when others are in need and then dedicate time in your schedule specifically for the purpose of helping them. This can be as simple as creating a small improvement in a process or helping to troubleshoot an issue. A positive side effect is that this effort will provide you more opportunities to work with others and develop deeper trust.
It is also important not to hoard knowledge—either technical or institutional—especially when people ask questions or seek help. Maintain the mindset that there are no stupid questions.
![Stick figure apologizing][6]
Finally, selflessness includes being trustworthy. It is difficult to maintain a healthy relationship when there is no trust. Be honest and transparent. In IT, this is often seen as a liability, but in DevOps it is a requirement for success.
### Self-care
In order to be a strong contributor to a relationship, it is necessary to maintain a sense of self. Our individuality provides the diversity a relationship needs to grow. Make sure you maintain and share your interests with others. Be more than just the work you do. Apply your interests to your work.
You are no good to others if you are not good to yourself. Healthy relationships are stronger with healthy people. Make sure you take time to enjoy your interests and recharge. Take your vacation and leave work behind!
![Stick figure relaxing][7]
I am also a strong advocate for mental health days. Sometimes our mental health is not sufficient to work effectively. You're not as effective when you are physically ill, and you're not as effective when your head is not 100%. Work with your manager and your team to support each other to maintain good mental health.
Mental health is improved by learning. Invest in yourself and expand your knowledge. DevOps ideally needs "T-shaped" people who have depth on a topic and also broader system knowledge. Work to increase your depth, but balance that by learning new things about your environment. This knowledge can come from your teammates and create operational sympathy.
![Stick figure with open arms][8]
Finally, healthy relationships are not all work and no play. Take time to acknowledge the successes of others. If you know your team, you likely know how individuals prefer to receive praise. Respect those preferences, but always strive to praise vocally where possible.
![Stick figures celebrating][9]
Make sure to celebrate these successes as a team. All work and no play makes everyone dull. Celebrate milestones together as a team, and then articulate and target the next objectives.
### Be the right person
DevOps requires more from every individual, and its success is directly tied to the health of relationships. Each member of the organization should apply these techniques to grow and improve themselves. A focus on being the right person for the team will build stronger bonds and make the organization better equipped to reach its goals.
* * *
_This article is based on [a talk][10] Josh Atwell gave at All Things Open 2019._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/devops-relationships
作者:[Josh Atwell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/joshatwell
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/meeting-team-listen-communicate.png?itok=KEBP6vZ_ (Team meeting)
[2]: https://opensource.com/tags/devops
[3]: https://opensource.com/sites/default/files/uploads/marriage.png (Marriage sign)
[4]: https://opensource.com/sites/default/files/uploads/hearing.png (Stick figures hearing)
[5]: https://opensource.com/sites/default/files/uploads/listening.png (Stick figures listening)
[6]: https://opensource.com/sites/default/files/uploads/apologize.png (Stick figure apologizing)
[7]: https://opensource.com/sites/default/files/uploads/relax.png (Stick figure relaxing)
[8]: https://opensource.com/sites/default/files/uploads/open_0.png (Stick figure with open arms)
[9]: https://opensource.com/sites/default/files/uploads/celebrate.png (Stick figures celebrating)
[10]: https://opensource.com/article/20/1/devops-empathy

View File

@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: ('AI everywhere' IoT chips coming from Arm)
[#]: via: (https://www.networkworld.com/article/3532094/ai-everywhere-iot-chips-coming-from-arm.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
'AI everywhere' IoT chips coming from Arm
======
Two new microprocessors from Arm promise to miniaturize artificial intelligence.
Chip designer Arm is working on a new semiconductor design that it says will enable machine learning, at scale, on small sensor devices. Arm has completed testing of the technology and expects to bring it to market next year.
Artificial intelligence, implemented locally on "billions and ultimately trillions" of devices is coming, the company says in a [press release][1]. Arm Holdings, owned by Japanese conglomerate Softbank, says its partners have shipped more than 160 billion Arm-based chips to date, and that 45 million of its microprocessor designs are being placed within electronics every day.
The new machine-learning silicon will include micro neural processing units (microNPU) that can be used to identify speech patterns and perform other AI tasks. Importantly, the processing is accomplished on-device and in smaller form factors than have so far been available. The chips don't need the cloud or any network.
[RELATED: Auto parts supplier has big plans for its nascent IoT effort][2]
Arm, which historically has been behind mobile smartphone microchips, is instead aiming this design (the Cortex-M55 processor, paired with the Ethos-U55, Arm's first microNPU) at the Internet of Things.
"Enabling AI everywhere requires device makers and developers to deliver machine learning locally on billions, and ultimately trillions of devices," said Dipti Vachani, senior vice president and general manager of Arm's automotive and IoT areas, in a statement. "With these additions to our AI platform, no device is left behind as on-device ML on the tiniest devices will be the new normal, unleashing the potential of AI securely across a vast range of life-changing applications."
Arm wants to take advantage of the autonomous nature of chip-based number crunching, as opposed to doing it in the cloud. Privacy-conscious (and regulated) healthcare is an example of a vertical that might like the idea of localized processing. 
Functioning AI without cloud dependence isn't entirely new. Intel's [Neural Compute Stick 2][3], a $69 self-contained computer vision and deep learning development kit, doesn't need it, for example.
**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]**
Arm is also going for power savings with its new AI technology. Not requiring a data network can mean longer battery life for the sensor, since only the calculated results need to be sent rather than every bit. Much of the time, raw sensor data is irrelevant and can be discarded. Arm's new endpoint ML technologies are going to help microcontroller developers "accelerate edge inference in devices limited by size and power," said Geoff Lees, senior vice president of edge processing at IoT semiconductor company [NXP][5], in the announcement.
Enabling machine learning in power-constrained settings and eliminating the need for network connectivity mean the sensor can be placed where there isn't a hardy power supply. Latency advantages and cost advantages also can come into play.
"These devices can run neural network models on batteries for years, and deliver low-latency inference directly on the device," said Ian Nappier, product manager of TensorFlow Lite for Microcontrollers at Google, in a statement to Arm. [TensorFlow][6] is an open-source machine learning platform that's been used for detecting respiratory diseases, among other things.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3532094/ai-everywhere-iot-chips-coming-from-arm.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.arm.com/company/news/2020/02/new-ai-technology-from-arm
[2]: https://www.networkworld.com/article/3098084/internet-of-things/auto-parts-supplier-has-big-plans-for-its-nascent-iot-effort.html#tk.nww-fsb
[3]: https://store.intelrealsense.com/buy-intel-neural-compute-stick-2.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[5]: https://www.nxp.com/company/our-company/about-nxp:ABOUT-NXP
[6]: https://www.tensorflow.org/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Electronics should sweat to cool down, say researchers)
[#]: via: (https://www.networkworld.com/article/3532827/electronics-should-sweat-to-cool-down-say-researchers.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Electronics should sweat to cool down, say researchers
======
Scientists think that in much the same way the human body releases perspiration to cool down, special materials might release water to draw heat from electronics.
rclassenlayouts / Aleksei Derin / Getty Images
Computing devices should sweat when they get too hot, say scientists at Shanghai Jiao Tong University in China, who have developed a materials application they claim will cool down devices more efficiently and in smaller form factors than existing fans.
It's “a coating for electronics that releases water vapor to dissipate heat from running devices,” the team explains in a news release. “Mammals sweat to regulate body temperature,” and so should electronics, they believe.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
The group's focus has been on studying porous materials that can absorb moisture from the environment and then release water vapor when warmed. MIL-101(Cr) checks the boxes, they say. The material is a metal organic framework, or MOF, which is a sorbent, a material that stores large amounts of water. The higher the water capacity, the greater the dissipation of heat when it's warmed.
MOF projects have been attempted before. “Researchers have tried to use MOFs to extract water from the desert air,” says refrigeration-engineering scientist Ruzhu Wang, who is senior author of a paper on the university's work that has just been [published in Joule][2].
Their proof-of-concept test involved applying a micrometers-thin coating of MIL-101(Cr) to metallic substrates, which resulted in temperature drops of up to 8.6 degrees Celsius for 25 minutes, according to the abstract of their paper.
That's “a significant improvement compared to that of traditional PCMs,” they say. Phase change materials (PCM) include waxes and fatty acids that are used in electronics and melt to absorb heat. They are used in smartphones, but the solid-to-liquid transition doesn't exchange all that much energy.
“In contrast, the liquid-vapor transition of water can exchange 10 times the energy compared to that of PCM solid-liquid transition.” Plus the material used recovers almost immediately to start sweating again, just like a mammal.
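As a rough sanity check on that claim, textbook latent-heat values (approximations supplied here, not figures from the paper) put the water liquid-vapor transition at roughly ten times the energy of a typical paraffin PCM's solid-liquid transition:

```python
# Back-of-the-envelope check of the "10 times" claim, using textbook
# values; both numbers are approximations, not data from the paper.
water_vaporization = 2260.0   # J/g, water liquid -> vapor
paraffin_fusion = 200.0       # J/g, typical paraffin PCM solid -> liquid

ratio = water_vaporization / paraffin_fusion
print(f"vaporizing water moves ~{ratio:.0f}x the heat per gram")  # ~11x
```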
Shanghai Jiao Tong University isn't the only school looking into sweat for future tech. Cornell University says it wants to get robots to sweat to bring their temperature below ambient. Researchers there say they have built a 3D-printed, sweating robot muscle. It [manages its own temperature][4], and they think it will one day let robots run for extended periods without overheating.
Soft robots, which are the kind preferred by many developers for their flexibility, hold more heat than metal ones. As in electronic devices such as smartphones and IoT sensors, fans aren't ideal because they take up too much space. That's why new materials applications are being studied.
The Cornell robot group uses light to cure resin into shapes that control the flow of heat. A base layer “smart sponge” made of poly-N-isopropylacrylamide retains water and squeezes it through fabricated, dilated pores when heated. The pores then close automatically when cooled.
“Just when it seemed like robots couldn't get any cooler,” the group says.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3532827/electronics-should-sweat-to-cool-down-say-researchers.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.cell.com/joule/fulltext/S2542-4351(19)30590-2
[3]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[4]: https://news.cornell.edu/stories/2020/01/researchers-create-3d-printed-sweating-robot-muscle
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco warns of five SD-WAN security weaknesses)
[#]: via: (https://www.networkworld.com/article/3533550/cisco-warns-of-five-sd-wan-security-weaknesses.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco warns of five SD-WAN security weaknesses
======
Cisco warnings include three high-impact SD-WAN vulnerabilities
[Jaredd Craig][1] [(CC0)][2]
Cisco has issued five warnings about security weaknesses in its [SD-WAN][3] offerings, three of them on the high end of the vulnerability scale.
The worst problem is with the command-line interface (CLI) of its [SD-WAN][4] Solution software where a weakness could let a local attacker inject arbitrary commands that are executed with root privileges, Cisco [wrote.][5]
[[Get regularly scheduled insights by signing up for Network World newsletters.]][6]
An attacker could exploit this vulnerability, which rates 7.8 out of 10 on the Common Vulnerability Scoring System, by authenticating to the device and submitting crafted input to the CLI utility; the attacker must be authenticated to access the CLI. The vulnerability is due to insufficient input validation, Cisco wrote.
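For readers unfamiliar with this class of bug, here is a generic, hypothetical illustration (not Cisco's code) of how unvalidated input to a privileged CLI helper becomes arbitrary command execution, and how an allowlist check plus argument lists closes the hole:

```python
# Illustrative only -- not Cisco's code. A privileged helper that splices
# user input into a shell command is the vulnerability class at issue.
import re
import subprocess

def ping_host_unsafe(host: str) -> None:
    # BAD: input like "8.8.8.8; rm -rf /" runs a second command
    # with the helper's (here, root's) privileges.
    subprocess.run(f"ping -c 1 {host}", shell=True)

def ping_host_safer(host: str) -> None:
    # Validate against a strict pattern, then pass arguments as a list
    # so no shell ever interprets the input.
    if not re.fullmatch(r"[A-Za-z0-9.\-]{1,253}", host):
        raise ValueError("invalid host")
    subprocess.run(["ping", "-c", "1", host], check=True)
```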
Another high-severity problem lets an authenticated, local attacker elevate privileges to root on the underlying operating system. An attacker could exploit this vulnerability by sending a crafted request to an affected system. A successful exploit could allow the attacker to gain root-level privileges, Cisco [wrote][7]. This vulnerability, too, is due to insufficient input validation.
The third high-level vulnerability in the SD-WAN Solution software could let an attacker cause a buffer overflow on an affected device. An attacker could exploit this vulnerability by sending crafted traffic to an affected device. A successful exploit could allow the attacker to gain access to information that they are not authorized to access and make changes to the system that they are not authorized to make, Cisco [wrote][8].
The vulnerabilities affect a number of Cisco products if they are running a Cisco SD-WAN Solution software release earlier than Release 19.2.2: vBond Orchestrator Software, vEdge 100-5000 Series Routers, vManage Network Management System and vSmart Controller Software.
Cisco said there were no workarounds for any of the vulnerabilities and it suggested users accept automatic software updates to allay exploit risks. There are [software fixes for the problems][9] as well. 
**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][10] ]**
All three of the high-level warnings were reported to Cisco by the Orange Group, Cisco said.
The other two SD-WAN Solution software warnings, rated at medium threat levels, include one that allows a cross-site scripting (XSS) attack against the web-based management interface of the vManage software and an SQL injection threat.
The [XSS vulnerability][11] is due to insufficient validation of user-supplied input by the web-based management interface. An attacker could exploit this vulnerability by persuading a user of the interface to click a crafted link. A successful exploit could allow the attacker to execute arbitrary script code in the context of the interface or to access sensitive, browser-based information.
The SQL vulnerability exists because the web UI improperly validates SQL values. An attacker could exploit this vulnerability by authenticating to the application and sending malicious SQL queries to an affected system. A successful exploit could let the attacker modify values on, or return values from, the underlying database as well as the operating system, Cisco [wrote][12].
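Again as a generic illustration (not the vManage code), the difference between improperly validated SQL values and bound parameters looks like this:

```python
# Illustrative only -- not the vManage code. Building SQL by string
# interpolation versus binding parameters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (name TEXT, site TEXT)")
conn.execute("INSERT INTO devices VALUES ('edge-1', 'hq')")

user_input = "hq' OR '1'='1"  # classic injection payload

# BAD: the payload rewrites the query and returns every row.
rows_bad = conn.execute(
    f"SELECT name FROM devices WHERE site = '{user_input}'"
).fetchall()

# GOOD: a bound parameter is treated purely as data.
rows_good = conn.execute(
    "SELECT name FROM devices WHERE site = ?", (user_input,)
).fetchall()

print(rows_bad, rows_good)  # [('edge-1',)] []
```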
Cisco recognized Julien Legras and Thomas Etrillard of Synacktiv for reporting the problems.
The company said release 19.2.2 of the [Cisco SD-WAN Solution][13] contains fixes for all five vulnerabilities.
Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3533550/cisco-warns-of-five-sd-wan-security-weaknesses.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/T15gG5nA9Xk
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwclici-cvrQpH9v
[6]: https://www.networkworld.com/newsletters/signup.html
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwpresc-ySJGvE9
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwanbo-QKcABnS2
[9]: https://software.cisco.com/download/home
[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20200318-vmanage-xss
[12]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20200318-vmanage-cypher-inject
[13]: https://www.cisco.com/c/en/us/solutions/enterprise-networks/sd-wan/index.html#~benefits
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world

View File

@ -1,85 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How technical debt is risking your security)
[#]: via: (https://opensource.com/article/20/3/remove-security-debt)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
How technical debt is risking your security
======
A few security fixes now will help lighten the load of future developers
using your software.
![A lock on the side of a building][1]
Everyone knows they shouldn't take shortcuts, especially in their work, and yet everyone does. Sometimes it doesn't matter, but when it comes to code development, it definitely does.
As any experienced programmer knows, building your code the quick and dirty way soon leads to problems down the line. These issues might not be disastrous, but they incur a small penalty every time you want to develop your code further.
This is the basic idea behind [technical debt][2], a term first coined by well-known programmer Ward Cunningham. Technical debt is a metaphor for the long-term burden developers and software teams incur when taking shortcuts, and it has become a popular way to think about the extra effort future development requires because of quick and dirty design choices.
"Security Debt" is an extension of this idea, and in this article, we'll take a look at what the term means, why it is a problem, and what you can do about it.
### What is security debt?
To get an idea of how security debt works, we have to consider the software development lifecycle. Today, it's very rare for developers to start with a blank page, even for a new piece of software. At the very least, most programmers will start a new project with open source code copied from online repositories.
They will then adapt and change this code to make their project. While they are doing this, there will be many points where they notice a security vulnerability. Something as simple as an error establishing a database connection can be an indication that systems are not playing well together, and that someone has taken a fast and dirty approach.
Then they have two options: they can either take an in-depth look at the code they are working with, and fix the issue at a fundamental level, or they can quickly paste extra code over the top that gets around the problem in a quick, inefficient way.
Given the demands of today's development environment, most developers choose the second route, and we can't blame them. The problem is that the next person who looks at the code is going to have to spend longer working out how it operates.
Time, as we all know, is money. Because of this, each time the software needs to be changed, there is a small extra cost to making it secure, incurred because previous developers took shortcuts. This is security debt.
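A toy, hypothetical example (not from any real project) makes the two options concrete; the quick route hides the failure and taxes every future caller, while the fix costs a little more now and nothing later:

```python
# Hypothetical example of the two options described above.
import sqlite3

DB_PATH = "app.db"  # placeholder path

def get_connection_quick():
    # Option 2: paste over the problem. Swallowing the error "works"
    # today, but every future caller must now handle a silent None --
    # that recurring cost is the security debt.
    try:
        return sqlite3.connect(DB_PATH, timeout=1)
    except sqlite3.Error:
        return None

def get_connection_fixed():
    # Option 1: fix it at a fundamental level -- fail loudly, with
    # context, so the underlying misconfiguration actually gets repaired.
    try:
        return sqlite3.connect(DB_PATH, timeout=30)
    except sqlite3.Error as exc:
        raise RuntimeError(f"cannot open {DB_PATH}; check path and permissions") from exc
```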
### How security debt threatens your software
There was a time when security debt was not a huge problem, at least not in the open source community. A decade ago, open source components had lifetimes measured in years and were freely available to everyone.
This meant that security issues in legacy code got fixed. Today, the increased speed of the development lifecycle and the increasingly censored internet means that developers can no longer trust third party code to the degree they used to.
This has led to a considerable increase in security debt for developers using open source components. Veracode's latest [State of Software Security (SOSS)][3] report found that security issues in open source software take about a month longer to be fixed than those in software that is sourced internally. Insourced software recorded the highest fix rates, but even software sourced from external contractors gets fixed faster, by about two weeks, than open source software.
The ultimate outcome of this, and one that the term "security debt" captures very well, is that most companies currently face security vulnerabilities throughout their entire software stack, and these are accumulating faster than they are fixed. In other words, developers have maxed out their security debt credit card and are drowning in the debt they've incurred. This is particularly concerning when you consider that total household debt [reached nearly $14 trillion][4] in the United States alone in 2019.
### How to avoid security debt
Avoiding a build-up of security debt requires that developers take a different approach to security than the one that is prevalent in the industry at the moment. Proven methods such as zero-knowledge cloud encryption, VPNs to promote online anonymity, and network intrusion prevention software are great, but they may also not be enough.
In fact, there might have been some developers scratching their heads at our definition of security debt above: how many of us think about the next poor soul who will have to check our code for security flaws?
Changing that way of thinking is key to preventing a build-up of security debt. Developers should take the time to thoroughly [check their software for security vulnerabilities][5], not just during development, but after the release as well. Fix any errors now, rather than waiting for security holes to build up.
If that instruction sounds familiar, then well done. A continuity approach to software development is a critical component of [layering security through DevOps][6], and one of the pillars of the emerging discipline of DevSecOps. Along with [chaos engineering][7], these approaches seek to integrate security into development, testing, and assessment processes, and thereby prevent a build-up of security debt.
Just like a credit card, the key to avoiding security debt getting out of control is to avoid the temptation to take shortcuts in the first place. That's easier said than done, of course, but one of the key lessons from recent data breaches is that legacy systems that many developers assume are secure are just as full of shortcuts as recently written code.
### Measure twice, cut once
Since [security by default hasn't arrived yet][8], we must all try to do things properly in the future. Taking the fast, dirty approach might mean that you get to leave the office early, but ultimately that decision will come back to bite you.
If you finish early anyway, well done: you can use the time to read [our best articles on security][9] and check whether your code is as secure as you think it is.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/remove-security-debt
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/17/10/why-i-love-technical-debt
[3]: https://www.veracode.com/state-of-software-security-report
[4]: https://thetokenist.io/financial-statistics/
[5]: https://opensource.com/article/17/6/3-security-musts-software-developers
[6]: https://opensource.com/article/19/9/layered-security-devops
[7]: https://www.infoq.com/articles/chaos-engineering-security-networking/
[8]: https://opensource.com/article/20/1/confidential-computing
[9]: https://opensource.com/article/19/12/security-resources

View File

@ -1,110 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 metrics to measure your open source community health)
[#]: via: (https://opensource.com/article/20/3/community-metrics)
[#]: author: (Kevin Xu https://opensource.com/users/kevin-xu)
3 metrics to measure your open source community health
======
Community building is critical to the success of any open source
project. Here's how to evaluate your community's health and strengthen
it.
![Green graph of measurements][1]
Community building is table stakes in the success of any open source project. Even outside of open source, community is considered a competitive advantage for businesses in many industries—from retail, to gaming, to fitness. (For a deeper dive, see "[When community becomes your competitive advantage][2]" in the _Harvard Business Review_.)
However, open source community building—especially offline activities—is notoriously hard to measure, track, and analyze. While we've all been to our fair share of meetups, conferences, and "summits" (and probably hosted a few of them ourselves), were they worth it? Did the community meaningfully grow? Was printing all those stickers and swag worth the money? Did we collect and track the right numbers to measure progress?
To develop a better framework for measuring community, we can look to a different industry for guidance and fresh ideas: political campaigns.
### My metrics start with politics
I started my career in political campaigns in the US as a field organizer (aka a low-level staffer) for then-candidate Senator Obama in 2008. Thinking back, a field organizer's job is basically community building in a specifically assigned geographical area that your campaign needs to win. My day consisted of calling supporters to do volunteer activities, hosting events to gather supporters, bringing in guest speakers (called "surrogates" in politics) to events, and selling the vision and plan of our candidate (essentially our "product").
Another big chunk of my day was doing data entry. We logged everything: interactions on phone conversations with voters, contact rates, event attendance, volunteer recruitment rates, volunteer show-up rates, and myriad other numbers to constantly measure our effectiveness.
Regardless of your misgivings about politics in general or about specific politicians, the campaigns that win political victories are all giant community-building exercises that are data-driven, meticulously measured, and constantly optimized. They are well-oiled community-building machines.
When I entered the world of open source a few years ago, the community-building part felt familiar and natural. What surprised me was how little community building as an operation is quantified and measured—especially with offline activities.
### Three metrics to track
Taking a page from the best-run political campaigns I've seen, here are the three most important metrics for an open source community to track and optimize:
* Number of **community ambassadors**
* Number of **return attendees** (people who attend your activities two times or more)
* Rate of **churned attendees** (the percentage of people who attend your activities only once or say they will come but don't show up)
If you're curious, the corresponding terms on a political campaign for these three metrics are typically community captains, super volunteers, and flake rate.
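If you log sign-ups and per-event attendance, all three metrics fall out of a few lines of code. The sketch below uses made-up names and a deliberately tiny data set; churn follows the definition above (one-time attendees plus no-shows):

```python
# Hypothetical event logs for computing the three community metrics.
signups = {"alice", "bob", "carol", "dave"}          # said they would come
attendance = [{"alice", "bob"}, {"alice", "carol"}]  # one set per event
ambassadors = {"alice"}                              # consistent local hosts

counts = {}
for event in attendance:
    for person in event:
        counts[person] = counts.get(person, 0) + 1

return_attendees = {p for p, n in counts.items() if n >= 2}
one_timers = {p for p, n in counts.items() if n == 1}
no_shows = signups - set(counts)
churn_rate = len(one_timers | no_shows) / len(signups)

print(len(ambassadors), len(return_attendees), f"{churn_rate:.0%}")
# -> 1 ambassador, 1 return attendee (alice), 75% churn (bob, carol, dave)
```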
#### Community ambassadors
A "community ambassador" is a user or enthusiast of your project who is willing to _consistently_ host local meetups or activities where she or he lives. Growing the number of community ambassadors and supporting them with resources and guidance are core to your community's strength and scale. You can probably hire for these if you have a lot of funding, but pure volunteers speak more to your project's allure.
These ambassadors should be your best friends, where you understand inside and out why they are motivated to evangelize your project in front of both peers and strangers. Their feedback on your project is also valuable and should be a critical part of your development roadmap and process. You can strategically cultivate ambassadors in different tech hubs geographically around the world, so your project can count on someone with local knowledge to reach and serve users of different business cultures with different needs. The beauty of open source is that it's global by default; take advantage of it!
Some cities are arguably more of a developer hub than others. Some to consider are Amsterdam, Austin, Bangalore, Beijing, Berlin, Hangzhou, Istanbul, London, NYC, Paris, Seattle, Seoul, Shenzhen, Singapore, São Paulo, San Francisco-Bay Area, Vancouver, Tel Aviv, Tokyo, and Toronto (listed alphabetically and based on feedback I got through social media. Please add a comment if I missed any!). An example of this is the [Cloud Native Ambassadors program][3] of the Cloud Native Computing Foundation.
#### Return attendees
The number of return attendees is crucial to measuring the usefulness or stickiness of your community activities. Tracking return attendees is how you can draw a meaningful line between "the curious" and "the serious."
Trying to grow this number should be an obvious goal. However, that's not the only goal. This is the group whose motivation you want to understand most clearly. This is the group that reflects your project's user persona. This is the group that can probably give you the most valuable feedback. This is the group that will become your future community ambassadors.
Putting it differently, this is your [1,000 true fans][4] (if you can keep them).
Having hosted and attended my fair share of these community meetups, my observation is that most people attend to be educated on a technical topic, look for tools to solve problems at work, or network for their next job opportunity. What they are not looking for is being "marketed to."
There is a growing trend of developer community events becoming marketing events, especially when companies are flush with funding or have a strong marketing department that wants to "control the message." I find this trend troubling because it undermines community building.
Thus, be laser-focused on technical education. If a developer community gets taken over by marketing campaigns, your return-attendees metric won't be pretty.
#### Churned attendees rate
Tracking churned attendees is the flipside of the return-attendees coin, so I won't belabor the point. These are the people who join once and then disappear, or who show interest but don't show up. They are important because they tell you what isn't working and for whom, which is more actionable than just counting the people who show up.
One note of caution: Be brutally honest when measuring this number, and don't fool yourself (or others). On its own, if someone signs up but doesn't show up, it doesn't mean much. Similarly, if someone shows up once and never comes back, it doesn't mean much. Routinely sit down and assess _why_ someone isn't showing up, so you can re-evaluate and refine your community program and activities. Don't build the wrong incentives into your community-building operation to reward the wrong metric.
### Value of in-person connections
I purposely focused this post on measuring offline community activities because online activities are inherently more trackable and intuitive to digital-native open source creators.
Offline community activities are essential to any project's journey to reaching traction and prominence. I have yet to see a successful project that does not have a sizable offline presence, regardless of its online popularity.
Why is this the case? Why can't an open source community, usually born online, just stay and grow online?
Because technology choice is ultimately a human decision; therefore, face-to-face interaction is an irreplaceable element of new technology adoption. No one wants to be the guinea pig. No one wants to be the first. The most effective way to not feel like the first is to literally _see_ other human beings trying out or being interested in the same thing.
Being in the same room as other developers, learning about the same project, and doing that regularly is the most effective way to build trust for a project. And with trust comes traction.
### These three metrics work
There are other things you _can_ track, but more data does not necessarily mean clearer insight. Focusing your energy on these three metrics will make the most impact on your community-building operation. An open source community where the _number of ambassadors and return attendees are trending up_ and the _churned attendees rate is trending down_ is one that's healthy and growing in the right way.
* * *
_This article originally appeared on_ _[COSS Media][5]_ _and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/community-metrics
作者:[Kevin Xu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kevin-xu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk (Green graph of measurements)
[2]: https://hbr.org/2020/01/when-community-becomes-your-competitive-advantage
[3]: https://www.cncf.io/people/ambassadors/
[4]: https://kk.org/thetechnium/1000-true-fans/
[5]: https://coss.media/how-to-measure-community-building/

View File

@ -1,107 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Coronavirus challenges capacity, but core networks are holding up)
[#]: via: (https://www.networkworld.com/article/3533438/coronavirus-challenges-capacity-but-core-networks-are-holding-up.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Coronavirus challenges capacity, but core networks are holding up
======
COVID-19 has sent thousands of employees to work from home, placing untold stress on remote-access networks.
Cicilie Arcurs / Getty Images
As the coronavirus continues to spread and more people work from home, the impact of the increased traffic on networks in the US so far seems to be minimal.
No doubt web, VPN, and data usage is growing dramatically with the [influx of remote workers][1]. For example, Verizon said it has seen a 34% increase in VPN traffic from March 10 to 17. It has also seen a 75% increase in gaming traffic, while web traffic increased by just under 20% in that period.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
Verizon said its fiber optic and wireless networks “have been able to meet the shifting demands of customers and continue to perform well. In small pockets where there has been a significant increase in usage, our engineers have quickly added capacity to meet customers' demand.”
“As we see more and more individuals work from home and students engage in online learning, it is a natural byproduct that we would see an increase in web traffic and access to VPN. And as more entertainment options are cancelled in communities across the US, an increase in video traffic and online gaming is not surprising,” said Kyle Malady, Chief Technology Officer for Verizon in a [statement][3]. “We expect these peak hour percentages to fluctuate, so our engineers are continuing to closely monitor network usage patterns 24x7 and stand ready to adjust resources as changing demands arise."
As of March 16, AT&T said that its network continues to perform well. “In cities where the coronavirus has had the biggest impact, we are seeing fewer spikes in wireless usage around particular cell towers or particular times of day because more people are working from home rather than commuting to work and fewer people are gathering in large crowds at specific locations.”
In Europe, Vodafone says it has seen a 50% increase in data traffic in some markets.
“COVID-19 is already having a significant impact on our services and placing a greater demand on our network,” the company said in a statement. “Our technology teams throughout Europe have been focusing on capacity across our networks to make sure they are resilient and can absorb any new usage patterns arising as more people start working from home.”
In Europe there have also been indications of problems. 
Ireland-based [Spearline][5], which monitors international phone numbers for connectivity and audio quality, said this week that Italy's landline connection rate continues to be volatile, with as much as a 10% failure rate, and audio quality is running approximately 4% below normal levels.
Other Spearline research says:
* Spain saw a drop in connection rates to 98.5% on March 16, but it is improving again.
* France saw a dip in connection rates approaching 5% on March 17. Good quality has been maintained overall, though periodic slippage has been observed.
* Germany saw a 1.7% connection-failure rate March 17.  Good quality has been maintained, though periodic slippage has been observed.
Such problems are not showing up in the US at this point, Spearline said.
“The US is Spearline's most tested market with test calls over three main sectors being enterprise, unified communications and carrier. To date, there has been no significant impact on either the connection rates or the audio quality on calls throughout the US,” said Matthew Lawlor, co-founder and chief technical officer at Spearline. 
The future impact is the real unknown of course, Lawlor said.
“There are many potential issues which have happened in other countries which may have a similar impact on US infrastructure. For example, in many countries there have been hardware issues where engineers are unable to get physical access to resolve the issue,” Lawlor said.   “While rerouting calls may help resolve issues it does put more pressure on other segments of your network.”
On March 19, one week after the CDC declared the virus a pandemic, data analytics and broadband vendor OpenVault wrote:
* Subscribers' average usage from 9 a.m. to 5 p.m. has risen to 6.3 GB, 41.4% higher than the January figure of 4.4 GB.
* During the same period, peak-hour (6 p.m. to 11 p.m.) usage has risen 17.2% from 5.0 GB per subscriber in January to 5.87 GB in March. 
* Overall daily usage has grown from 12.19 GB to 15.46 GB, an increase of 26.8%.
Based on the current rate of growth, OpenVault projected that consumption for March will reach nearly 400 GB per subscriber, an increase of almost 11% over the previous monthly record of 361 GB, established in January. In addition, OpenVault projects a new coronavirus-influenced run rate of 460 GB per subscriber per month going forward. OpenVault's research is based on the usage of more than 1 million broadband subscribers throughout the United States, the company said.
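The quoted growth figures can be sanity-checked with simple arithmetic; small differences from the published percentages come from rounding in the quoted GB values:

```python
# Recomputing OpenVault's growth figures from the numbers quoted above.
jan_peak, mar_peak = 5.0, 5.87         # GB per subscriber, 6-11 p.m.
jan_daily, mar_daily = 12.19, 15.46    # GB per subscriber per day
jan_record, mar_projection = 361, 400  # GB per subscriber per month

print(f"peak-hour growth:   {mar_peak / jan_peak - 1:.1%}")          # ~17.4%
print(f"daily growth:       {mar_daily / jan_daily - 1:.1%}")        # ~26.8%
print(f"monthly projection: {mar_projection / jan_record - 1:.1%}")  # ~10.8%
```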
“Broadband clearly is keeping the hearts of business, education and entertainment beating during this crisis,” said Mark Trudeau, CEO and founder of OpenVault, in a [statement][6]. “Networks built for peak-hours consumption so far are easily handling the rise in nine-to-five business-hours usage. We've had concerns about peak hours consumption given the increase in streaming entertainment and the trend toward temporary cessation of bandwidth caps, but operator networks seem to be handling the additional traffic without impacting customer experiences.”
Increased use of conferencing apps may affect their availability for reasons other than network capacity. For example, according to ThousandEyes, users around the globe were unable to connect to their Zoom meetings for approximately 20 minutes on Friday due to failed DNS resolution.
Others, too, are monitoring data traffic for warning signs of slowdowns. “Traffic towards video conferencing, streaming services and news, e-commerce websites has surged. We've seen growth in traffic from residential broadband networks, and a slowing of traffic from businesses and universities," wrote Louis Poinsignon, a network engineer with CloudFlare, in a [blog][7] about Internet traffic patterns. He noted that on March 13, when the US announced a state of emergency, CloudFlare's US data centers served 20% more traffic than usual.
Poinsignon noted that [Internet Exchange Points][8], where Internet service providers and content providers can exchange data directly (rather than via a third party), have also seen spikes in traffic. For example, at Amsterdam ([AMS-IX][9]), London ([LINX][10]), and Frankfurt ([DE-CIX][11]), a 10-20% increase was seen around March 9.
“Even though from time to time individual services, such as a web site or an app, have outages, the core of the Internet is robust,” Poinsignon wrote.  “Traffic is shifting from corporate and university networks to residential broadband, but the Internet was designed for change.”
In related news:
* Netflix said it would reduce streaming quality in Europe for at least the next 30 days to prevent the internet collapsing under the strain of unprecedented usage due to the coronavirus pandemic. "We estimate that this will reduce Netflix traffic on European networks by around 25% while also ensuring a good quality service for our members," Netflix said.
* DISH announced that it is providing 20 MHz of AWS-4 (Band 66) and all of its 700 MHz spectrum to AT&T at no cost for 60 days. Last week, DISH began lending its complete 600 MHz portfolio of spectrum to T-Mobile. With these two agreements, DISH has activated most of its spectrum portfolio to enhance national wireless capacity as the nation confronts the COVID-19 crisis.
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3533438/coronavirus-challenges-capacity-but-core-networks-are-holding-up.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3532440/coronavirus-challenges-remote-networking.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.verizon.com/about/news/how-americans-are-spending-their-time-temporary-new-normal
[4]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[5]: https://www.spearline.com/
[6]: http://openvault.com/covid-19-impact-driving-business-hours-broadband-consumption-up-41/
[7]: https://blog.cloudflare.com/on-the-shoulders-of-giants-recent-changes-in-internet-traffic/
[8]: https://en.wikipedia.org/wiki/Internet_exchange_point
[9]: https://www.ams-ix.net/ams/documentation/total-stats
[10]: https://portal.linx.net/stats/lans
[11]: https://www.de-cix.net/en/locations/germany/frankfurt/statistics
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -1,96 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (COVID-19: Weekly health check of ISPs, cloud providers and conferencing services)
[#]: via: (https://www.networkworld.com/article/3534130/covid-19-weekly-health-check-of-isps-cloud-providers-and-conferencing-services.html)
[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)
COVID-19: Weekly health check of ISPs, cloud providers and conferencing services
======
ThousandEyes, which tracks internet and cloud traffic, is providing Network World with weekly updates on the performance of three categories of service provider: ISP, cloud provider, UCaaS
[ThousandEyes][1]
_As COVID-19 continues to spread, forcing employees to work from home, the services of ISPs, cloud providers and conferencing services (a.k.a. unified-communications-as-a-service, or UCaaS, providers) are experiencing increased traffic._
_ThousandEyes is monitoring how these increases affect outages and performance at these providers. It will give Network World a weekly roundup of interesting events in the delivery of these services, and Network World will provide a summary here. Stop back next week for another update._
With the increased use of remote-access VPNs, major carriers are reporting dramatic increases in their network traffic, with Verizon reporting a 20% week-over-week increase and Vodafone reporting an increase of 50%.
[Read about IPv6 and cloud-access security brokers][2]
While there has been no corresponding spike in outages in service provider networks, over the past six weeks there has been a steady increase in outages across multiple provider types, both worldwide and in the U.S., according to ThousandEyes, which keeps track of internet and cloud traffic.
### IDG Special Report: Navigating the Pandemic
* [Business continuity: Coronavirus crisis puts CIOs plans to the test][3]
* [Coronavirus challenges remote networking][4]
* [A security guide for pandemic planning: 7 key steps][5]
* [10 tips to set up your home office for videoconferencing][6]
* [How to survive and thrive while working from home][7]
* [WTH? OSS knows how to WFH IRL][8]
This includes “a concerning upward trajectory” in ISP outages worldwide since the beginning of March that coincides with the spread of COVID-19, [according to a ThousandEyes blog][9] by Angelique Medina, the company's director of product marketing. ISP outages worldwide hovered around 150 per week between Feb. 10 and March 19, but then increased to between just under 200 and about 225 during the following three weeks.
In the U.S., those numbers were a little over 50 in the first time range, reaching about 100 during the first week of March. “That early March level has been mostly sustained over the last couple of weeks,” Medina writes.
Cogent Communications was one ISP with nearly identical large-scale outages on March 11 and March 18, with “disruptions for the fairly lengthy period (by Internet standards) of 30 minutes,” she wrote.
Hurricane Electric suffered an outage March 20 that was less extensive and shorter than Cogent's but included smaller disruptions that altogether affected hundreds of sites and services, she wrote.
Public-cloud provider networks have withstood the effects of COVID-19 well, with slight increases in the number of outages in the U.S., but otherwise relatively level numbers around the world. The possible reason: “Major public cloud providers, such as AWS, Microsoft Azure, and Google Cloud, have built massive global networks that are incredibly well-equipped to handle traffic surges,” Medina wrote. And when these networks do have major outages, it's due to routing or infrastructure state changes, not traffic congestion.
Some providers of collaboration applications, the likes of Zoom, Webex, Microsoft Teams and RingCentral, also experienced performance problems between March 9 and March 20. ThousandEyes doesn't name them, but does list performance numbers for what it describes as “the top three” UCaaS providers. One actually showed improvements in availability, latency, packet loss and jitter. The other two “showed minimal (in the grand scheme of things) degradations on all fronts — not surprising given the unprecedented strain they've been under,” according to the blog.
Each provider showed spikes in traffic loss over the time period that ranged from less than 1% to more than 4% in one case. In the case of one provider, “outages within its own network spiked last week, meaning that the network issues impacting users were taking place on infrastructure managed by the provider versus an external ISP.”
“Outage incidents within large UCaaS provider networks are fairly infrequent; however, the recent massive surge in usage is clearly stressing current design limits. Capacity is reportedly being added across the board to meet new service demands,” according to the blog.
Meanwhile, ThousandEyes has introduced a new feature on its site: a [Global Internet Outages Map][1] that is updated every few minutes and shows recent and ongoing outages.
### Google outage unrelated to COVID-19
On March 26, Google suffered a 20-minute outage on the East Coast of the U.S., apparently from a router failure in Atlanta, ThousandEyes said, agreeing with a statement put out by Google to that effect.
That problem affected other regions of the U.S., as evidenced by Google sites such as google.com intermittently returning server errors. "These 500 server errors are consistent with an inability to reach the backend systems necessary to correctly load various services," ThousandEyes said in a statement. "Any traffic traversing the affected region — connecting from Google's front-end servers to backend services — may have been impacted and seen the resulting server errors."
ThousandEyes posted interactive results of tests it ran about the outage [here][11] and [here][12].
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3534130/covid-19-weekly-health-check-of-isps-cloud-providers-and-conferencing-services.html
Author: [Tim Greene][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Tim-Greene/
[b]: https://github.com/lujun9972
[1]: https://www.thousandeyes.com/outages
[2]: https://www.networkworld.com/article/3391380/does-your-cloud-access-security-broker-support-ipv6-it-should.html
[3]: https://www.cio.com/article/3532899/business-continuity-coronavirus-crisis-puts-cios-plans-to-the-test.html
[4]: https://www.networkworld.com/article/3532440/coronavirus-challenges-remote-networking.html
[5]: https://www.csoonline.com/article/3528878/a-security-guide-for-pandemic-planning-7-key-steps.html
[6]: https://www.computerworld.com/article/3250684/10-tips-to-set-up-your-home-office-for-videoconferencing.html
[7]: https://www.computerworld.com/article/3532283/how-to-survive-and-thrive-while-working-from-home.html
[8]: https://www.infoworld.com/article/3533050/wth-oss-knows-how-to-wfh-irl.html
[9]: https://blog.thousandeyes.com/internet-health-during-covid-19/
[10]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[11]: https://agisi.share.thousandeyes.com/view/endpoint-agent/?roundId=1585237800&metric=loss&scenarioId=eyebrowNetwork&filters=%7B%22filters%22:%7B%22domain%22:%5B%22google.com%22%5D,%22geonameId%22:%5B4148757,4180439,4459467,4460243,4509177,4671240,4744709,4744870,4887398,4890864,4930956,5099836,5110266,5110302,5128581,5145476,5150529,5282804,5786882%5D%7D%7D&page=0,0&grouping=BY_NETWORK,BY_DOMAIN
[12]: https://ythkurgdz.share.thousandeyes.com/view/tests/?roundId=1585236900&metric=availability&scenarioId=httpServer&testId=1283781
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world


@@ -1,55 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (COVID-19 vs. Raspberry Pi: Researchers bring IoT technology to disease detection)
[#]: via: (https://www.networkworld.com/article/3534101/covid-19-vs-raspberry-pi-researchers-bring-iot-technology-to-disease-detection.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
COVID-19 vs. Raspberry Pi: Researchers bring IoT technology to disease detection
======
Researchers from UMass say that a Raspberry Pi edge device can help identify flu-like symptoms in crowds, broadening the range of tools that can be used to track the spread of disease.
[Bill Oxford / Raspberry Pi / Modified by IDG Comm.][1] [(CC0)][2]
An [IoT][3] device that tracks coughing and crowd size in real time could become a useful tool for identifying the presence of flu-like symptoms among large groups of people, according to a team of researchers at UMass Amherst.
FluSense, as the researchers call it, is about the size of a dictionary. It contains a cheap microphone array, a thermal sensor, a Raspberry Pi and an Intel Movidius 2 neural computing engine. The idea is to use AI at the edge to classify audio samples and identify the number of people in a room at any given time.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][4]
Since the system can distinguish coughing from other types of non-speech audio, correlating coughing with the size of a given crowd could give a useful index of how many people are likely to be experiencing flu-like symptoms.
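As a back-of-the-envelope illustration of that correlation, the sketch below computes a cough-per-person index from hypothetical sensor windows. The schema and numbers are invented for illustration; the actual FluSense model is a trained neural classifier running on the edge hardware, not anything this simple.

```python
from dataclasses import dataclass

@dataclass
class WindowSample:
    """One aggregation window from the sensor (hypothetical schema)."""
    cough_events: int  # non-speech audio frames classified as coughs
    crowd_size: int    # people counted via the thermal sensor

def symptom_index(samples: list[WindowSample]) -> float:
    """Return cough events per person-window, a crude proxy for how
    prevalent flu-like symptoms are in the observed crowd."""
    total_coughs = sum(s.cough_events for s in samples)
    person_windows = sum(s.crowd_size for s in samples)
    return total_coughs / person_windows if person_windows else 0.0

# Example: three 30-minute waiting-room windows
windows = [WindowSample(4, 12), WindowSample(9, 15), WindowSample(2, 6)]
print(f"cough-per-person index: {symptom_index(windows):.2f}")  # 0.45
```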
A test run between December 2018 and July 2019 saw FluSense installed in four waiting rooms at UMass University Health Services clinic, and the researchers said that they were able to “strongly” correlate the systems results with clinical testing for influenza and other illnesses with similar symptoms.
And bigger plans for FluSense are afoot, according to the paper's lead author, Ph.D. student Forsad Al Hossain, and his co-author and adviser, assistant professor Tauhidur Rahman.
“[C]urrently we are planning to deploy the FluSense system in several large public spaces (e.g., large cafeteria, classroom, dormitories, gymnasium, auditorium) to capture syndromic signals from a broad range of people who live in a certain town or city,” they said. “We are also looking for funding to run a large-scale multi-city trial. In the meantime, we are also diversifying our sensing capability by extending FluSenses capability to capture more syndromic signals (e.g., recently we added sneeze sensing capability to FluSense). We definitely see a significant level of commercialization potential in this line of research.”
FluSense is particularly interesting from a technical perspective because all of the meaningful processing work is done locally, via the Intel neural computing engine and Raspberry Pi. Symptom information is sent wirelessly to the lab for collation, of course, but the heavy lifting is accomplished at the edge. Al Hossain and Rahman were quick to emphasize that the device doesn't collect personally identifiable information; the emphasis is on aggregating data in a given setting rather than identifying sickness in any single patient, and everything it does collect is heavily encrypted, making it a minimal privacy concern.
The key point of FluSense, according to the researchers, is to think of it as a health surveillance tool, rather than a piece of diagnostic equipment. Al Hossain and Rahman said that it has several important advantages over other health surveillance techniques, particularly those based on Internet tracking, like Google Flu Trend and Twitter.
“FluSense is not easily influenced by public health campaigns or advertisements. Also, the contactless nature of this sensor is ideal to capture syndromic signals passively from different geographical locations and different socioeconomic groups (including underprivileged people who may not have access to healthcare and may not go to a doctor or clinic),” they said.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3534101/covid-19-vs-raspberry-pi-researchers-bring-iot-technology-to-disease-detection.html
Author: [Jon Gold][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/tR0PPLuN6Pw
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world


@@ -1,104 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The coming together of SD-WAN and AIOps)
[#]: via: (https://www.networkworld.com/article/3533437/the-coming-together-of-aiops-and-sd-wan.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
The coming together of SD-WAN and AIOps
======
SD-WAN delivers cost and resiliency benefits. Infusing AI into SD-WAN takes things further, enabling automated operations and business agility.
kohb / Getty Images
Software-defined wide-area networking ([SD-WAN][1]) and AIOps are both red-hot technologies. SD-WANs increase application availability, reduce costs and in some cases improve performance. AIOps infuses machine learning into IT operations to increase the level of automation. This reduces errors and enables businesses to make changes at digital speeds. Most think of these as separate technologies, but the two are on a collision course and will give rise to what I'm calling the AI-WAN. 
### SD-WAN not a panacea for all network woes
SD-WAN is the biggest leap forward in networking since… well, the actual WAN. But many solutions still rely on manual configurations. SD-WANs certainly increase application resiliency, lower telecommunications costs, and often increase application performance, but they are more complicated than traditional WANs. Initial setup can be a challenge, but the bigger issue is ongoing operations. Manually tweaking and tuning the network to adapt to business changes can be time consuming and error-prone. A solution is needed to bring better automation to SD-WANs.
**READ MORE:** [How AI can improve network capacity planning][2]
Enter AI-WAN. Much like a self-driving car, an AI-WAN can make decisions based on different rules and adapt to changes faster than people can. Self-driving cars continually monitor road conditions, speed limits and other factors to determine what changes to make. Similarly, a self-driving network can monitor, correct, defend, and analyze with minimal to no human involvement. This is done through automation capabilities powered by AI, obviating the need for people to get involved.
Make no mistake, manual operations will hold businesses back from reaching their full potential. An interesting data point from my research is that it takes enterprises an average of four months to make changes across a network. That's because maintaining legacy networks and fixing glitches takes too much time. One ZK Research study found 30% of engineers spend at least one day a week doing nothing but troubleshooting problems. SD-WANs can improve these metrics, but there's still a heavy people burden.
Given the growing data challenges businesses face as they migrate to the cloud, they simply can't afford to wait that long. Instead of being afraid of AI taking over jobs, businesses should embrace it. AI can remove human error—which is the largest cause of unplanned network downtime—and help businesses focus on higher-level tasks instead.
### AI-WAN will transform network operations
So how will the evolution of SD-WAN into AI-WAN transform network management and operations? Administrators can use their time to focus on strategic initiatives instead of fixing problems. Another data point from ZK Research is that 90% of the time taken to fix a problem is spent identifying the source. Now that applications reside in the cloud and run on mobile devices, identifying the source of a problem has gotten harder. AI-WANs have the ability to spot even the smallest anomaly, even if it hasn't yet begun to impact business.
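To make the anomaly-spotting idea concrete, here is a minimal sketch, assuming nothing more than a stream of latency samples from the WAN; a rolling z-score is a deliberately simple stand-in for the machine-learning models vendors actually ship.

```python
import statistics
from collections import deque

def find_anomalies(latencies_ms, window=20, threshold=3.0):
    """Flag samples more than `threshold` standard deviations away from
    the mean of the preceding `window` samples (rolling z-score)."""
    history = deque(maxlen=window)
    anomalies = []
    for i, sample in enumerate(latencies_ms):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(sample - mean) / stdev > threshold:
                anomalies.append((i, sample))
        history.append(sample)
    return anomalies

# 200 unremarkable samples around 21 ms, then one modest spike.
series = [20.0 + (i % 5) * 0.5 for i in range(200)] + [35.0]
print(find_anomalies(series))  # [(200, 35.0)]
```

The point of the example is that the spike is small enough that a human watching a dashboard would likely miss it, which is exactly the class of problem the AI-WAN pitch targets.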
SD-WANs are fundamentally designed so that all routing rules are managed centrally by administrators and can be transmitted across a network. AI-WAN takes it a step further and enables administrators to anticipate problems before they happen through fault prediction. It may even adjust network glitches on its own before users are affected, thus improving network performance.
A self-driving car knows the rules of the road—where the blind spots are, how to sync with traffic signals, and which safety measures to take—using AI software, real-time data from IoT sensors, cameras, and much more. Similarly, a self-driving network knows the higher-level rules and can prevent administrators from making mistakes, such as allowing applications in countries where certain actions are banned.
Security is another concern. Everything from mobile devices to Internet of Things (IoT) to cloud computing is creating multiple new entry points and shifting resources to the network edge. This puts businesses at a security risk, as they struggle to respond to changes quickly.
Businesses can miss security gaps created by users, with hundreds of software-as-a-service (SaaS) apps being used at the same time without IT's knowledge. Older networking technologies cannot support SaaS and cloud services, while SD-WANs can. But simply deploying an SD-WAN is not enough to protect a network. Security shouldn't be an afterthought in an SD-WAN deployment, but part of it from the get-go.
Increasingly, vendors are bundling AI-based analytics with SD-WAN solutions to boost network security. Such solutions use AI to analyze how certain events impact the network, application performance, and security. Then, they create intelligent recommendations for any network changes, such as unauthorized use of SaaS apps.
Going back to the autonomous car analogy, AI-WANs are designed to keep roads clear and accident-free. They enable smarter networks that can adapt quickly to changing conditions and self-heal if necessary. With the growing demands of cloud computing and SaaS apps, intelligent networks are the future and forward-thinking businesses are already in the drivers seat.
### AI-WAN exists today and will explode in the future
AI-WAN may seem futuristic, but there are a number of vendors that are delivering it or in the process of bringing solutions to market. Managed service provider Masergy, for example, recently introduced [AIOps for SD-WAN][4] to deliver autonomous networking and has the most complete offering.
Open Systems, another managed service provider, [snapped up cloud-based Sqooba][5] to add AIOps to its strong network and security services. Keeping with the M&A theme, VMware recently [acquired AIOps vendor Nyansa][6] and rolled it into its VeloCloud SD-WAN group. That move gives VMware similar capabilities to [Aruba Networks][7], which initially applied AI to WiFi troubleshooting but is now bringing it to its SD-Branch offering. [Cisco][8] is another networking vendor with an AIOps story, although it's trying to apply it network-wide, not just to the WAN.
Over time, I expect every SD-WAN or SASE vendor to bring AIOps into the fold, shifting the focus away from connectivity to automated operations.
**Learn more about SD-WAN**
* [Top 10 underused SD-WAN features][9]
* [SD-WAN: The inside scoop from real-world deployments][10]
* [5 reasons to choose a managed SD-WAN and 5 reasons to think twice][11]
* [10 essential SD-WAN considerations][12]
* [SD-WAN: What is it and why youll use it one day][13]
* [SD-WAN deployment options: DIY vs. cloud managed][14]
* [How SD-WAN can improve your security strategy][15]
* [10 hot SD-WAN startups to watch][16]
* [How SD-WAN saves $1.2M over 5 years for a radiology firm][17]
* [SD-WAN creates new security challenges][18]
Join the Network World communities on [Facebook][19] and [LinkedIn][20] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3533437/the-coming-together-of-aiops-and-sd-wan.html
Author: [Zeus Kerravala][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[2]: https://www.networkworld.com/article/3338100/using-ai-to-improve-network-capacity-planning-what-you-need-to-know.html
[3]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[4]: https://techcrunch.com/2020/03/19/nvidia-makes-its-gpu-powered-genome-sequencing-tool-available-free-to-those-studying-covid-19/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAMI6we4H-fTz2mf5g-l6IP27C1O-V-u6EiQuJ5QzVnlrrPU04iS0fhyrZo5U8q5rAk5I9uVW5PQYKHX8ziMdWrFBxhBP7f__JshmAGevyu4Z5zm98nDnC6nEdIekjVX4RPPmF9Q_PImcQ0opZy6JukS-DZA62tHI9R7D1Q2JAog7
[5]: https://open-systems.com/press-release/open-systems-acquires-sqooba
[6]: https://www.vmware.com/company/acquisitions/nyansa.html
[7]: https://blogs.arubanetworks.com/solutions/ai-doing-more-with-less-in-2020-and-beyond/
[8]: https://www.cisco.com/c/en/us/products/cloud-systems-management/crosswork-network-automation/service-centric-approach-to-aiops.html#~overview
[9]: https://www.networkworld.com/article/3518992/top-10-underused-sd-wan-features.html
[10]: https://www.networkworld.com/article/3316568/sd-wan/sd-wan-the-inside-scoop-from-real-world-deployments.html
[11]: https://www.networkworld.com/article/3431080/5-reasons-to-choose-a-managed-sd-wan-and-5-reasons-to-think-twice.html
[12]: https://www.networkworld.com/article/3355138/sd-wan-10-essential-considerations.html
[13]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[14]: https://www.networkworld.com/article/3243701/wide-area-networking/sd-wan-deployment-options-diy-vs-cloud-managed.html
[15]: https://www.networkworld.com/article/3336483/how-sd-wan-can-improve-your-security-strategy.html
[16]: https://www.networkworld.com/article/3284367/sd-wan/10-hot-sd-wan-startups-to-watch.html
[17]: https://www.networkworld.com/article/3255291/lan-wan/sd-wan-helps-radiology-firm-cut-costs-scale-bandwidth.html
[18]: https://www.networkworld.com/article/3336155/sd-wan-creates-new-security-challenges.html
[19]: https://www.facebook.com/NetworkWorld/
[20]: https://www.linkedin.com/company/network-world


@@ -1,119 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (ROLLING UPDATE: The impact of COVID-19 on public networks and security)
[#]: via: (https://www.networkworld.com/article/3534037/rolling-update-the-impact-of-covid-19-on-public-networks-and-security.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
ROLLING UPDATE: The impact of COVID-19 on public networks and security
======
Network World updates the latest coronavirus-related networking news
Ig0rZh / Getty Images
_As the coronavirus spreads, public and private companies as well as government entities are requiring employees to work from home, putting unforeseen strain on all manner of networking technologies and causing bandwidth and security concerns.  What follows is a round-up of news and traffic updates that Network World will update as needed to help keep up with the ever-changing situation.  Check back frequently!_
**UPDATE 3.27**
Broadband watchers at [BroadbandNow][1] say users in most of the cities it analyzed are experiencing normal network conditions, suggesting that ISPs (and their networks) are holding up to the shifting demand. In a March 25 [post][2] the firm wrote: “Encouragingly, many of the areas hit hardest by the spread of the coronavirus are holding up to increased network demand. Cities like Los Angeles, Chicago, Brooklyn, and San Francisco have all experienced little or no disruption. New York City,  now the epicenter of the virus in the U.S., has seen a 24% dip out of its previous ten-week range. However, with a new median speed of nearly 52 Mbps, home connections still appear to be holding up overall.”
**[ Also see [What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
Other BroadbandNow findings included:
* Eighty-eight (44%) of the 200 cities it analyzed experienced some degree of network degradation over the past week compared to the 10 weeks prior. However, only 27 (13.5%) cities experienced dips of 20% below range or greater.
* Seattle download speeds have continued to hold up over the past week, while New York City's speeds have fallen out of range by 24%. Both cities are currently heavily affected by the coronavirus pandemic.
* Three cities (Austin, Texas; Winston-Salem, N.C.; and Oxnard, Calif.) have experienced significant degradations, falling out of their 10-week range by more than 40%.
Cisco's Talos threat-intelligence arm [wrote][5] on March 26 about the COVID-19 security threat, noting what it called three broad categories of attack leveraging COVID-19, with known advanced-persistent-threat participation: [malware and phishing campaigns][6] using COVID-themed lures; attacks against organizations that carry out research and other work related to COVID-19; and fraud and disinformation. From an enterprise security perspective, Talos recommended:
* Remote access: Do not expose Remote Desktop Protocol (RDP) to the internet. Use secure VPN connections with multi-factor authentication schemes. Network-access-control packages can be used to ensure that systems attempting to remotely connect to the corporate environment meet a minimum set of security standards, such as anti-malware protection and patch levels, prior to granting them access to corporate resources. Continually identify and remediate access-policy violations.
* Identity Management: Protect critical and public-facing applications with multi-factor authentication and supporting corporate policies. Verify that remote-account and access-termination capabilities work as intended in a remote environment.
* Endpoint Control: Because many people may be working from home networks, endpoint visibility, protection, and mitigation are now more important than ever. Consider whether remediation and reimaging capabilities will work as intended in a remote environment. Encrypt devices where possible, and add this check to your NAC solution as a gate for connectivity. Another simple method of protecting endpoints is via DNS, such as with [Cisco's] Umbrella, by blocking the resolution of malicious domains before the host has a chance to make a connection; a toy sketch of that idea follows this list.
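As a toy illustration of that last DNS-filtering point (this is not Umbrella's API; the blocklist entries and helper function are invented), refusing to resolve a known-bad name stops the connection before it starts:

```python
import socket

# Hypothetical feed of known-bad domains; a real service pulls live threat intel.
BLOCKLIST = {"malicious-covid-lure.example", "fake-health-update.example"}

def safe_resolve(hostname: str) -> str:
    """Refuse to resolve blocklisted domains; otherwise return the A record."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        raise PermissionError(f"blocked by DNS policy: {hostname}")
    return socket.gethostbyname(hostname)

print(safe_resolve("example.com"))            # resolves normally
# safe_resolve("fake-health-update.example")  # raises PermissionError
```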
In an [FAQ][7] about the impact of COVID-19 on fulfilling customer hardware orders, VMware stated: “Some VMware SD-WAN hardware appliances are on backorder as a result of supply chain issues. As a result, we are extending the option to update existing orders with different appliances where inventory is more readily available. Customers may contact a special email hotline with questions related to backordered appliances. Please send an email to [sd-wan-hotline@vmware.com][8] with your questions and include the order number, urgent quantities, and contact information. We will do our best to respond within 48 hours.”
Cisco said it has been analyzing traffic statistics with major carriers across Asia, Europe, and the Americas, and its data shows that typically, the most congested point in the network occurs at inter-provider peering points, Jonathan Davidson, senior vice president and general manager of Cisco's Mass-Scale Infrastructure Group wrote in a [blog][9] on March 26. “However, the traffic exchanged at these bottlenecks is only a part of the total internet traffic, meaning reports on traffic may be higher overall as private peering and local destinations also contribute to more traffic growth.”
“Our analysis at these locations shows an increase in traffic of 10% to 33% over normal levels. In every country, traffic spiked with the decision to shut down non-essential businesses and keep people at home. Since then, traffic has remained stable or has experienced a slight uptick over the days that followed,” Davidson stated.
He said that traffic during peak hours, between 6 p.m. and 10 p.m., has increased slightly but is not the primary driver of the overall increase. Busy hours have extended to 9 a.m. to 10 p.m., although the new busy-hour (9 a.m. to 6 p.m.) traffic is still below the traditional peak hours. "Service providers are certainly paying attention to these changes, but they are not yet a dire concern, as most networks are designed for growth. Current capacities are utilized more over the course of the entire day," he wrote.
Spanish multinational telecommunications company [Telefonica][11] said its IP networks are experiencing traffic increases of close to 40%, while mobile voice use is up about 50% and data is up 25%. Likewise, traffic from instant-messaging tools such as WhatsApp has increased fivefold in recent days.
**UPDATE: 3.26**
* Week over week (ending March 23) [Ookla][12] says it has started to see a degradation of mobile and fixed-broadband performance worldwide. More detail on specific locations is available below. Comparing the week of March 16 to the week of March 9, mean download speed over mobile and fixed broadband decreased in Canada and the U.S. while both remained relatively flat in Mexico.
* What is the impact of the coronavirus on corporate network planning? It depends on how long the work-from-home mandate goes on, really. Tom Nolle, president of CIMI Corp., [takes an interesting look at the situation][13], saying the shutdown "could eventually produce a major uptick for SD-WAN services, particularly in [managed service provider] … Businesses would be much more likely to embark on an SD-WAN VPN adventure that didn't involve purchase/licensing, favoring a service approach in general, and in particular one with a fairly short contract period."
* Statistics from VPN provider [NordVPN][14] show the growth of VPN usage across the globe. For example, the company said the US has experienced 65.93% growth in the use of business VPNs since March 11. It reported that mass remote working has contributed to a rise in desktop (94.09%) and mobile app (0.39%) usage among Americans. Globally, NordVPN Teams has seen a 165% spike in the use of business VPNs, and business VPN usage in the Netherlands (240.49%), Canada (206.29%) and Austria (207.86%) has skyrocketed beyond 200%. Italy has had the most modest growth in business VPN usage at just 10.57%.
**UPDATE: 3.25**
* According to [Atlas VPN][15] user data, VPN usage in the US increased by 124% during the last two weeks. VPN usage in the country increased by 71% between March 16 and 22 alone. Atlas said it measured how much traffic traveled through its servers during that period compared to March 9 to 15. The data came from the company's 53,000 weekly users.
* Verizon [reports][16] that voice usage, long declining in the age of texting, chat and social media, is up 25% in the last week. The network report shows the primary driver is accessing conference calls. In addition, people are talking longer on mobile devices with wireless voice usage notching a 10% increase and calls lasting 15% longer. 
* AT&T also [reported][17] increased calling, especially Wi-Fi calling, up 88% on March 22 versus a normal Sunday. It says that consumer home voice calls were up 74% over an average Sunday; traffic from Netflix dipped after all-time highs on Friday and Saturday; and data traffic due to heavy video streaming between its network and peered networks tied record highs. AT&T said it has deployed portable cell nodes to bolster coverage supporting FirstNet customers in Indiana, Connecticut, New Jersey, California and New York.
* Microsoft this week advised users of Office 365 it was throttling back some services:
* **OneNote:** OneNote in Teams will be read-only for commercial tenants, excluding EDU. Users can go to OneNote for the web for editing. Download size and sync frequency of file attachments have been changed. You can find details on these and other OneNote-related updates at <http://aka.ms/notesupdates>.
* **SharePoint:** We are rescheduling specific backend operations to regional evening and weekend business hours. Impacted capabilities include migration, DLP and delays in file management after uploading a new file, video or image. Video resolution for playback has also been reduced.
* **Stream:** The people timeline has been disabled for newly uploaded videos. Pre-existing videos will not be impacted. Meeting-recording video resolution has been adjusted to 720p.
**RELATED COVID-19 NEWS:**
* Security vendor [Check Point's Threat Intelligence][18] says that since January 2020, there have been over 4,000 coronavirus-related domains registered globally. Of these websites, 3% were found to be malicious and an additional 5% are suspicious. Coronavirus-related domains are 50% more likely to be malicious than other domains registered in the same period, and also more likely than recent seasonal themes such as Valentine's Day.
* [Orange][19], an IT and communications services company, said it has increased its network capacity and upgraded its service platforms. These measures allow it to support the ongoing exponential increase in needs and uses. The number of users connecting to their company's network remotely has already increased by 700% among its customers. It has also doubled the capacity for simultaneous connections on its platforms. The use of remote-collaboration solutions such as video conferencing has also risen massively, with usage increasing by between 20% and 100%.
* Verizon said it has seen a 34% increase in VPN traffic from March 10 to 17. It has also seen a 75% increase in gaming traffic, and web traffic has increased by just under 20% in that time period.
* One week after the CDC declaration of the virus as a pandemic, data analytics and broadband vendor OpenVault wrote on March 19 that:
* Subscribers' average usage during the 9 am-to-5 pm daypart has risen to 6.3 GB, 41.4% higher than the January figure of 4.4 GB.
* During the same period, peak-hours (6 pm to 11 pm) usage has risen 17.2%, from 5.0 GB per subscriber in January to 5.87 GB in March.
* Overall daily usage has grown from 12.19 GB to 15.46 GB, an increase of 26.8%.
* Based on the current rate of growth, OpenVault projected that consumption for March will reach nearly 400 GB per subscriber, an increase of almost 11% over the previous monthly record of 361 GB, established in January of this year. In addition, OpenVault projects a new coronavirus-influenced run rate of 460 GB per subscriber per month going forward; a quick sanity check of these figures follows this list.
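OpenVault's figures hang together, as the check below shows; the only assumption is a 30-day month.

```python
jan_daily_gb, mar_daily_gb = 12.19, 15.46  # figures quoted above

growth = (mar_daily_gb - jan_daily_gb) / jan_daily_gb
print(f"daily usage growth: {growth:.1%}")  # 26.8%, matching OpenVault

# Extrapolating daily usage to a month lands near the projected run rate.
print(f"monthly run rate: {mar_daily_gb * 30:.0f} GB")  # ~464 GB vs. ~460 GB projected
```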
Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3534037/rolling-update-the-impact-of-covid-19-on-public-networks-and-security.html
Author: [Michael Cooney][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://broadbandnow.com/
[2]: https://broadbandnow.com/report/internet-speed-analysis-march-15th-21st/
[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://blog.talosintelligence.com/2020/03/covid-19-pandemic-threats.html
[6]: https://blog.talosintelligence.com/2020/02/coronavirus-themed-malware.html
[7]: https://www.vmware.com/company/news/updates/vmware-response-covid-19.html?mid=31381&eid=CVMW2000048242496
[8]: mailto:sd-wan-hotline@vmware.com
[9]: https://blogs.cisco.com/news/global-traffic-spikes-no-panic-at-the-cisco
[10]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[11]: https://www.telefonica.com/en/web/press-office/-/telefonica-announces-measures-related-to-covid-19
[12]: https://downdetector.com/?utm_campaign=Ookla%20Insights%20Blog%20Subscription&utm_source=hs_email&utm_medium=email&utm_content=85202785&_hsenc=p2ANqtz--Nj93d_eQyJpsqxrPJyNPtTiMBWBQU984psLyalw51K61e4d1WODareMF5NWFriHY2Uzw3WF7rF-2vSfH5cR53Jg3K5Q&_hsmi=85202785
[13]: https://blog.cimicorp.com/?p=4068
[14]: https://nordvpnteams.com/
[15]: https://atlasvpn.com/blog/lockdowns-and-panic-leads-to-a-124-surge-in-vpn-usage-in-the-us/
[16]: https://www.verizon.com/about/news/update-verizon-serve-customers-covid-19
[17]: https://about.att.com/pages/COVID-19.html
[18]: https://blog.checkpoint.com/2020/03/05/update-coronavirus-themed-domains-50-more-likely-to-be-malicious-than-other-domains/
[19]: https://www.orange.com/en/Press-Room/press-releases/press-releases-2020/Orange-is-mobilised-to-ensure-continuity-of-service-for-all-customers-in-France-and-around-the-world
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world


@@ -1,108 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 tricks for developing a work from home schedule)
[#]: via: (https://opensource.com/article/20/3/work-from-home)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
6 tricks for developing a work from home schedule
======
Stay flexible, embrace change, and think of the transition to your home office as an opportunity to create a healthier WFH experience.
![Working from home at a laptop][1]
When you start working from home, one of the first things you might notice is that there are almost no outside influences on your schedule.
You probably have meetings—some over [team chat][2] and others over [video][3]—that you have to attend, but otherwise, there's nothing requiring you to do anything at any specific time. What you find out pretty quickly, though, is that there's an invisible influence that sneaks up on you: deadlines.
This lack of structure fosters procrastination, sometimes willful and other times aimless, followed by frantic sprints to get something done. Learning to mitigate that, along with all the distractions working from home might offer, is often the hardest part of your home-based work.
Here are a few ways to build in that structure for yourself so you don't end up feeling like you're falling behind.
### You have always had your own schedule
Everybody reacts to schedules differently. For some people, schedules offer guidance and comfort. For others, schedules are rigid and oppressive.
An office space generally provides focus. There might be plenty of distractions at the office, but you usually find a good stretch of time at some point during your day when you can get a big chunk of work done. Even in an office, though, each person actually keeps their own schedule. Your colleague might arrive early in the day, happily completing a day's work in an empty office before anyone else arrives and then spending the rest of the day doing menial tasks and socializing. Another colleague might arrive late and leave early, maximizing time spent in the office for actual work. Still others follow a steady pace throughout the day.
As you're settling into a WFH routine, take notes, either mentally or physically, on what seems to work well for you. If you like formalized systems, then build a schedule for yourself after a week or two, based on what you've been doing naturally. If there's something that isn't working for you, then drop it from your schedule.
Once you've found a good rhythm for yourself, you can manage your day with a [to-do list][4] like [todo.txt][5], or if you prefer sledgehammers (or you actually manage a department), you can try a [project management][6] application.
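Part of todo.txt's charm is that the format is plain text you can script against yourself. Here is a minimal sketch of a parser for its standard markers, a leading `(A)` priority plus `+project` and `@context` tags; the sample task is invented.

```python
import re

def parse_task(line: str) -> dict:
    """Split a todo.txt line into its conventional parts."""
    priority = re.match(r"\(([A-Z])\) ", line)
    return {
        "priority": priority.group(1) if priority else None,
        "projects": re.findall(r"(?:^|\s)\+(\S+)", line),
        "contexts": re.findall(r"(?:^|\s)@(\S+)", line),
        "text": line,
    }

task = parse_task("(A) Draft quarterly roadmap +planning @home")
print(task["priority"], task["projects"], task["contexts"])
# A ['planning'] ['home']
```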
### Treat yourself as a new hire
For the first week or two at home, you may find it helpful to treat yourself (and your remote team) as a new hire. Instead of trying to mimic or impose the same schedule you kept at the office or at school, take time to monitor your own activity. It takes time for your body and mind to establish new and comfortable habits, and if you're the kind of person who wanders into a routine, then you need to give yourself time to discover what those habits are.
For instance, if you're consistently finding that you do your best work right after breakfast, then relegate menial tasks, like responding to email, reviewing tasks, or triaging bug reports, to the early morning, and make sure you have a big task set up for after breakfast. If you're having a hard time maintaining focus, then make time for morning tea so you can relax, reassess your workload, and plan your next step. Be curious about yourself like a manager would be for a brand new employee. Adjust accordingly.
### There's no guilt in being interrupted at home
Not everything on your schedule is under your control. If you have children too young for school, or if children are home from school, then their schedules take precedence.
It's a perceived benefit of working from home that your schedule is more flexible than at an office, but with that assumption, there can sometimes come a little to a lot of guilt. You might feel you're not working "enough" because you have to stop to wake up and feed children, or because you want to take a few play breaks every now and again.
That's mostly illusory, though, and here is how to think about it. If you swap out "children" for "manager," you might remember some times at the office where your "real" work was interrupted because you had to entertain upper management with sparkly presentations, or complete piles of paperwork for HR, or decompress by chatting with colleagues by the water cooler.
Your work from home is really no different. There are plenty of distractions, and they're only a problem if you fail to acknowledge and work around them.
### Making the choice to move beyond the 9-to-5
Unlike at the office, you're not forced into a rigid and dated 9-to-5 structure. If your day has to start at 7am, contains a lot of breaks, and doesn't end until 7pm, then that's alright. The essential part is to define that as your schedule for yourself (once you've established it as a schedule that works for you; remember to give yourself a few weeks to get a feel for what your schedule actually "wants" to be). Pick your start and end times, establish break times with your housemates, whether they're children, spouses, pets, or roommates/friends. Work when you're supposed to be working, and don't work when you're not scheduled to work. For most of us, colleagues don't need to know exactly when you are online as long as you're available when you're needed. In that case, your hours are your own. If the team is the kind that needs to know, run it by them.
### Don't forget time for yourself
It's important to at least establish times you stop thinking about work, no matter what. If you've got a very young child who's not given to staying on any schedule, then you might have to stay more flexible than most. Still, give yourself permission to actively be inactive. Do something you enjoy, even if it feels unproductive. It's normal to spend some idle time at the office to think or intentionally not think. Working from home deserves the same space. If anything, it's even more important as you establish working hours. The goal is to recognize when you're at home and when you're at work.
Finding stuff to do at work is relatively easy, but sometimes finding things to do to relax can be hard. If you have children, you can collaborate with them using Scratch for [drawing beautiful art][7], [programming video games][8], or even [robotics][9]. For the times you're not necessarily looking to collaborate and need your little one to do some exploration of their own, there are some [great open source alternatives to Minecraft][10], too. If you're interested in getting away from the screen, though, you might try some [Arduino][11] or [Seeeduino][12] gardening projects. Start with some programming, and end up outside in the garden!
### Choose to see change as exciting
Some say humans don't generally love change and that we're primarily creatures of habit, so moving from an office into a home office can be upsetting. And yet, invariably, we've shown that change can be exciting and even good. 
Use this thought experiment to remember how much you love change. Would you willingly revert to technology from even two years ago? How about five, or ten?
If we can embrace change in technology and be uninterested in reverting back, why not embrace change elsewhere?
When you work from home, you have the opportunity to be willing to explore change. If you find that a meeting time doesn't work for your schedule, mention it to your team. It may not work for them, either, and moving it to a different day could mean making room for a work sprint that knocks out a good chunk of your work for the week. If you find that a tool isn't working out for you, find a different one.
In fact, some of the most productive people are constantly evaluating new applications and new methods of working. They're constantly learning new things, developing new skills. They don't do this because they need the new skill, at least not by the letter of their work contract, but learning new things often reveals a surprising applicability to something they do need.
Need some examples? Tinkering with a Raspberry Pi for fun could result in a home server running [useful apps][13] to help you stay informed. [Learning][14] how to [write in Markdown][15] could introduce you to a new and more efficient workflow than you ever knew possible. Setting up a [computer with Linux][16] could reveal a new world of open source software that changes the way you approach problems, and improves the methods you use to solve them. The possibilities are limitless, but if you don't know where to start, you can browse through some of our articles or [cheat sheets][17] for ideas and tips.
### Working from home is a new opportunity
The habits we've built up around work aren't always healthy or efficient or fun. If you're starting to work from home, though, it's a chance to reinvent what work means for you. Keep the lines of [communication open][18] with your colleagues, embrace new ideas and the potential of change, and discover how you can be more productive by enjoying yourself and the place you call home (and **$HOME**).
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/work-from-home
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
[2]: https://opensource.com/alternatives/slack
[3]: https://opensource.com/alternatives/skype
[4]: https://opensource.com/article/19/9/to-do-list-managers-linux
[5]: https://opensource.com/article/20/1/open-source-to-do-list
[6]: https://opensource.com/article/18/2/agile-project-management-tools
[7]: https://opensource.com/article/19/9/drawing-vectors-scratch-3
[8]: https://opensource.com/article/18/4/designing-game-scratch-open-jam
[9]: https://opensource.com/education/13/8/student-programming-scratch-and-finch
[10]: https://opensource.com/alternatives/minecraft
[11]: https://opensource.com/article/17/3/arduino-garden-projects
[12]: https://opensource.com/article/19/12/seeeduino-nano-review
[13]: https://opensource.com/article/20/2/newsboat
[14]: https://opensource.com/article/19/9/introduction-markdown
[15]: https://opensource.com/article/18/11/markdown-editors
[16]: https://opensource.com/article/19/9/found-linux-video-gaming
[17]: https://opensource.com/article/20/1/cheat-sheets-guides
[18]: https://opensource.com/article/20/3/open-source-working-home


@@ -1,82 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Know the benefits of cloud-native networking for SASE)
[#]: via: (https://www.networkworld.com/article/3534720/know-the-benefits-of-cloud-native-networking-for-sase.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
Know the benefits of cloud-native networking for SASE
======
Shlomo Kramer, CEO of Cato Networks, discusses the benefits cloud-native networking brings to SASE.
Metamorworks / Getty Images
Gartner has positioned secure access service edge (SASE) as the next wave of SD-WANs. While most industry people I talk to agree on the concept of security and networking being brought together, there is some debate surrounding cloud-native versus cloud-managed.
To get a better understanding of why cloud native matters, I sat down with Shlomo Kramer, CEO of Cato Networks, which designed its SASE service from the ground up for cloud delivery.
**Last year Gartner coined the term SASE. Do you agree or disagree with its premise?**
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
Well, I definitely agree. The manifesto that Cato was founded on was the vision of converging network transport and network security and delivering it as a cloud service. The argument as to why you need SASE is topological in nature because traffic patterns have changed. Network traffic used to be inward bound because people sat at their desks, using corporate workstations and connecting to enterprise applications that resided in the company data centers.
That meant security was effectively a hard shell placed around a soft core. Security was applied at the edge and protected all the physical locations behind it. Today, traffic patterns have changed, and security needs to be applied everywhere. Applications are built in AWS as well as on-premises; workers are in the office, at home, in a hotel or anywhere. Corporate assets are now everywhere, so the hard shell no longer works. Security needs to be different and integrated everywhere, so I absolutely agree with the concept of SASE.
**What are some other challenges with legacy technologies like MPLS and security appliances?**
The problems with MPLS are well documented, so I won't spend too much time on this topic other than to say every company we talk to wants to move off of MPLS because of high costs, long deployment times and a lack of agility. MPLS does nothing for mobile users or cloud connectivity, so organizations need to deploy VPN servers, cloud interconnects and other technologies to connect all of their company resources.
On the security side, branch appliances have been an enormous problem that we as an industry accepted as the only possible solution. Appliances need to be procured, deployed, maintained, upgraded and retired, all of which takes time and effort. They need to be integrated with one another, which requires more time and skills. Most appliances are managed from separate management consoles, making operations complex and challenging. Over time, more appliances are added, which raises the complexity level. Also, when traffic jumps or too many features are turned on, upgrades are often required outside budget cycles. Security professionals often lag behind when applying software patches because updating appliances is risky and needs to be carefully planned, leaving the company at risk.
I can go on, but _appliances_ as an architecture involve too many headaches and too much cost for companies looking to become leaner and more agile. And by appliances, I also mean VNFs and virtual appliances. It's the same story again. You still need to deploy, manage and scale them. Appliances are a poor choice not because of anyone's solution's limitations but because of the architecture itself.
**What benefit do cloud-native platforms provide?**
Gur (Cato co-founder Gur Shatz) and I came from the security and networking worlds, so we were well acquainted with these problems. As we thought about what the right architecture would be moving forward, the cloud seemed like the obvious choice. We had already seen how cloud computing changed markets for data centers, servers, storage, and applications. We thought the cloud could do the same for security and networking.
Like AWS for data centers and servers, we wanted to create a utility that would secure and network the complete enterprise, not just sites, but also remote users, cloud data centers, cloud applications, and third-party devices. We wanted enterprises to "tap" into this utility and instantly receive all the advanced security and networking services for the entire organization. It's why we called our SD-WAN device the "Cato Socket," like an electrical socket you plug into. This vision is in line with the SASE definition.
Instead of appliances, we move the “heavy lifting” involved in security and networking into a global, distributed, cloud-native software platform. By cloud-native software, we mean several things.  We actually wrote a [blog on this topic][3] that talks about the value of cloud-native.  There are many benefits but in particular, multi-tenancy is game-changing. This allows cloud providers to amortize costs across their customer base, allowing them to deliver offerings at a price point unmatched by one based on purchasing appliances for customers. 
This platform runs our single-pass security and networking stack, which performs all security inspections in parallel. A packet comes in and is depacketized and decrypted by our software, which then performs all the necessary security inspections in parallel before sending the packet on. This is an incredible change from the way appliances work today. Today, each appliance must depacketize and decrypt packets, run a deep packet inspection (DPI) engine to understand the packet, apply the specific security inspections, and repacketize and re-encrypt for the next appliance to do the same.
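To make the single-pass contrast concrete, here is a schematic sketch; this is not Cato's code, and the inspection functions are empty placeholders standing in for real firewall, IPS, and DLP engines.

```python
from concurrent.futures import ThreadPoolExecutor

def parse_and_decrypt(packet: bytes) -> bytes:
    """Stand-in for the expensive depacketize/decrypt/DPI step."""
    return packet

# Placeholder verdicts; real engines would inspect the parsed packet.
def firewall_ok(parsed: bytes) -> bool: return True
def ips_ok(parsed: bytes) -> bool: return True
def dlp_ok(parsed: bytes) -> bool: return True

INSPECTIONS = [firewall_ok, ips_ok, dlp_ok]

def appliance_chain(packet: bytes) -> bool:
    """Serial model: every appliance re-parses and re-decrypts the packet."""
    return all(check(parse_and_decrypt(packet)) for check in INSPECTIONS)

def single_pass(packet: bytes) -> bool:
    """Single-pass model: parse and decrypt once, then inspect in parallel."""
    parsed = parse_and_decrypt(packet)  # done exactly once
    with ThreadPoolExecutor() as pool:
        return all(pool.map(lambda check: check(parsed), INSPECTIONS))

print(single_pass(b"\x45\x00"))  # True -> forward the packet
```

The saving in the single-pass model is not the parallelism so much as doing the heavy parse-and-decrypt work once instead of once per appliance.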
**Youve also stated that a global private network is necessary, why is that?**
As for the network, enterprises require predictable, low-latency performance everywhere, all the time. That's simply not possible with Internet routing today when broadband is used. While the problems of unpredictable latency across global routes or in under-developed Internet regions are well known, we've seen specific routes have problems even within Internet regions.
How do you overcome latency AND the global connectivity costs of MPLS? Our answer was to leverage the massive build-out in global IP connectivity. By buying massive wholesale SLA-backed capacity across multiple IP backbones, and then dynamically selecting the best backbone at each hop across our network, we could deliver global, low-latency connections at a fraction of the cost of MPLS.   
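The per-hop selection Kramer describes reduces to a small decision problem at each point of presence. A minimal sketch, with invented backbone names and latency samples:

```python
# Hypothetical measured latencies (ms) from one PoP to the next, per backbone.
measurements = {
    "backbone-a": [42.1, 41.8, 43.0],
    "backbone-b": [55.3, 54.9, 56.2],
    "backbone-c": [41.5, 48.7, 44.1],
}

def pick_backbone(samples: dict[str, list[float]]) -> str:
    """Choose the carrier with the lowest recent average latency for this hop."""
    return min(samples, key=lambda name: sum(samples[name]) / len(samples[name]))

print(pick_backbone(measurements))  # backbone-a (avg 42.3 ms)
```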
**The SASE industry is currently filled with start-ups and smaller vendors.  Why are the big incumbents struggling to make this shift?**
I think it should be evident by now, but existing appliance-based solutions simply can't be converted to become cloud-native. Re-engineering a platform for the cloud requires massive investments in R&D, which will come at the expense of existing and very successful product lines, so beyond engineering, there is also an internal conflict to overcome.
And that's why the "big incumbents," as you put it, are so threatened by SASE. We all recognize that SASE is the future, but to get to that future, many of the established solution providers will need to disrupt their existing businesses. That's not easy to do, but what they can do is market.
We, as an industry, are seeing vendors trying to capitalize on SASE by rebranding their solutions as SASE offerings. Some are appliances without cloud capabilities; others are security services without networking capabilities. For IT to tell the difference between a true SASE platform and a "fake" one, the litmus test is simple: if the center of gravity is in the appliances, if the offer lacks SD-WAN, and if there is more than one management console, it's not SASE and it's not the future. It's a repackaging of the past.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3534720/know-the-benefits-of-cloud-native-networking-for-sase.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[3]: https://www.catonetworks.com/blog/the-cloud-native-network-what-it-means-and-why-it-matters/
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

Some files were not shown because too many files have changed in this diff.