Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-08-25 04:39:58 +08:00
commit 2277f70883
17 changed files with 1823 additions and 376 deletions

View File

@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Semiconductor startup Cerebras Systems launches massive AI chip)
[#]: via: (https://www.networkworld.com/article/3433617/semiconductor-startup-cerebras-systems-launches-massive-ai-chip.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Semiconductor startup Cerebras Systems launches massive AI chip
======
![Cerebras][1]
There are a host of different AI-related solutions for the data center, ranging from add-in cards to dedicated servers, like the Nvidia DGX-2. But a startup called Cerebras Systems has its own server offering that relies on a single massive processor rather than a slew of small ones working in parallel.
Cerebras has taken the wraps off its Wafer Scale Engine (WSE), an AI chip that measures 8.46x8.46 inches, making it almost the size of an iPad and more than 50 times larger than a CPU or GPU. A typical CPU or GPU is about the size of a postage stamp.
[Now see how AI can boost data-center availability and efficiency.][2]
Cerebras won't sell the chips to ODMs due to the challenges of building and cooling such a massive chip. Instead, it will come as part of a complete server to be installed in data centers, which it says will start shipping in October.
The logic behind the design is that AI requires huge amounts of data just to run a test, and current technology, even GPUs, is not fast or powerful enough. So Cerebras supersized the chip.
The numbers are just incredible. The company's WSE chip has 1.2 trillion transistors, 400,000 computing cores and 18 gigabytes of memory. A typical PC processor has about 2 billion transistors, four to six cores and a few megabytes of cache memory. Even a high-end GPU has 21 billion transistors and a few thousand cores.
The 400,000 cores on the WSE are connected via the Swarm communication fabric in a 2D mesh with 100 Pb/s of bandwidth. The WSE has 18 GB of on-chip memory, all accessible within a single clock cycle, and provides 9 PB/s of memory bandwidth. This is 3,000x more capacity and 10,000x greater bandwidth than the best Nvidia has to offer. More to the point, it eliminates the need to move data in and out of memory to and from the CPU.
“A vast array of programmable cores provides cluster-scale compute on a single chip. High-speed memory close to each core ensures that cores are always occupied doing calculations. And by connecting everything on-die, communication is many thousands of times faster than what is possible with off-chip technologies like InfiniBand,” the company said in a [blog post][3] announcing the processor.
The cores are called Sparse Linear Algebra Cores, or SLA. They are optimized for the sparse linear algebra that is fundamental to neural network calculation. These cores are designed specifically for AI work. They are small and fast, contain no caches, and have eliminated other features and overheads that are needed in general purpose cores but play no useful role in a deep learning processor.
The chip is the brainchild of Andrew Feldman, who created the SeaMicro high-density Atom-based server a decade ago as an alternative to overpowered Xeons for doing simple tasks like file and print or serving LAMP stacks. Feldman is a character, one of the more interesting people [I've interviewed][4]. He definitely thinks outside the box.
Feldman sold SeaMicro to AMD for $334 million in 2012, which turned out to be a colossal waste of money on AMD's part, as the product soon disappeared from the market. Since then he's raised $100 million in VC money.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3433617/semiconductor-startup-cerebras-systems-launches-massive-ai-chip.html
Author: [Andy Patrizio][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/cerebras-wafer-scale-engine-100809084-large.jpg
[2]: https://www.networkworld.com/article/3274654/ai-boosts-data-center-availability-efficiency.html
[3]: https://www.cerebras.net/hello-world/
[4]: https://www.serverwatch.com/news/article.php/3887471/SeaMicro-Launches-an-AtomPowered-Cloud-Computing-Server.htm
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VMware spends $4.8B to grab Pivotal, Carbon Black to secure, develop integrated cloud world)
[#]: via: (https://www.networkworld.com/article/3433916/vmware-spends-48b-to-grab-pivotal-carbon-black-to-secure-develop-integrated-cloud-world.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
VMware spends $4.8B to grab Pivotal, Carbon Black to secure, develop integrated cloud world
======
VMware will spend $2.7 billion on cloud-application developer Pivotal and $2.1 billion for security vendor Carbon Black — details at next week's VMworld user conference
![Bigstock][1]
All things cloud are major topics of conversation at the VMworld user conference next week, ratcheted up a notch by VMware's $4.8 billion plan to acquire cloud development firm Pivotal and security provider Carbon Black.
VMware said during its quarterly financial call this week it would spend about $2.7 billion on Pivotal and its Cloud Foundry hybrid cloud development technology, and about $2.1 billion for the security technology of Carbon Black, which includes its Predictive Security Cloud and other endpoint-security software. Both amounts represent the [enterprise value][2] of the deals; the actual purchase prices will vary, experts said.
**[ Check out [What is hybrid cloud computing][3] and learn [what you need to know about multi-cloud][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
VMware has deep relationships with both companies. Carbon Black technology is part of [VMware's AppDefense][6] endpoint security. Pivotal has a deeper relationship in that VMware and Dell, VMware's parent company, [spun out Pivotal][7] in 2013.
“These acquisitions address two critical technology priorities of all businesses today — building modern, enterprise-grade applications and protecting enterprise workloads and clients. With these actions we meaningfully accelerate our subscription and SaaS offerings and expand our ability to enable our customers' digital transformation,” said VMware CEO Pat Gelsinger on the call.
With regard to the Pivotal acquisition, Gelsinger said the time was right to own the whole compute stack. “We will now be uniquely positioned to help customers build, run and manage their cloud environment, and customers can go one place to get all of this technology,” Gelsinger said. “We embed the technology in our core VMware platform, and we will explain more about that at VMworld next week.”
On the Carbon Black buy, Gelsinger said he expects the technology to be integrated across VMware's product families, such as NSX networking software and vSphere, VMware's flagship virtualization platform.
“Security is broken and fundamentally customers want a different answer in the security space. We think this move will be an opportunity for major disruption.”
**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**
Patrick Morley, president and CEO of Carbon Black, [wrote of the deal][9]: “VMware has a vision to create a modern security platform for any app, running on any cloud, delivered to any device — essentially, to build security into the fabric of the compute stack. Carbon Black's cloud-native platform, our ability to see and stop attackers by leveraging the power of our rich data and behavioral analytics, and our deep cybersecurity expertise are all truly differentiating.”
Both transactions are expected to close in the second half of VMware's fiscal year, which ends Jan. 31.
VMware has been on a massive buying spree this year that has included:
* Avi Networks for multi-cloud application delivery services.
* Bitfusion for hardware virtualization.
* Uhana, a company that is employing deep learning and real-time AI in carrier networks and applications, to automate network operations and optimize application experience.
* Veriflow, for network verification, assurance, and troubleshooting.
* Heptio for its Kubernetes technology.
Kubernetes integration will be a big topic at VMworld, Gelsinger hinted. “You will hear very specific announcements about how Heptio will be used. [And] we will be announcing major expansions of our Kubernetes and modern apps portfolio and help Pivotal complete that strategy. Together with Heptio and Pivotal, VMware will offer a comprehensive Kubernetes-based portfolio to build, run and manage modern applications on any cloud,” Gelsinger said.
“VMware has increased its Kubernetes-related investments over the past year with the acquisition of Heptio to become a top-three contributor to Kubernetes, and at VMworld we will describe a major R&D effort to evolve VMware vSphere into a native Kubernetes platform for VMs and containers.”
Other updates about where VMware vSphere and NSX-T are headed will also be hot topics.
Introduced in 2017, NSX-T Data Center software is targeted at organizations looking to support multivendor cloud-native applications, [bare-metal][10] workloads, [hypervisor][11] environments and the growing hybrid and multi-cloud worlds. In February, the [company anointed NSX-T][12] as its go-to platform for future software-defined cloud developments.
VMware is battling Cisco's Application Centric Infrastructure, Juniper's Contrail system and other platforms from vendors including Pluribus, Arista and Big Switch. How NSX-T evolves will be key to how well VMware competes.
The most recent news around vSphere was that new features of its Hybrid Cloud Extension application-mobility software enable non-vSphere as well as increased on-premises application workloads to migrate to a variety of specific cloud services. Introduced in 2017, [VMware HCX][13] lets vSphere customers tie on-premises systems and applications to cloud services.
The HCX announcement was part of VMware's continued evolution into cloud technologies. In July the company teamed with [Google][14] to natively support VMware workloads in its Google Cloud service, giving customers more options for deploying enterprise applications.
Further news about that relationship is likely at VMworld as well.
VMware also has a hybrid cloud partnership with [Microsoft's Azure cloud service][15]. That package, called Azure VMware Solutions, is built on VMware Cloud Foundation, which is a package of vSphere with NSX network-virtualization and vSAN software-defined storage-area-network platform. The company is expected to update developments with that platform as well.
Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3433916/vmware-spends-48b-to-grab-pivotal-carbon-black-to-secure-develop-integrated-cloud-world.html
Author: [Michael Cooney][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/hybridcloud-100808516-large.jpg
[2]: http://valuationacademy.com/what-is-the-enterprise-value-ev/
[3]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[4]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.networkworld.com/article/3359242/vmware-firewall-takes-aim-at-defending-apps-in-data-center-cloud.html
[7]: https://www.networkworld.com/article/2225739/what-is-pivotal--emc-and-vmware-want-it-to-be-your-platform-for-building-big-data-apps.html
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[9]: https://www.carbonblack.com/2019/08/22/the-next-chapter-in-our-story-vmware-carbon-black/
[10]: https://www.networkworld.com/article/3261113/why-a-bare-metal-cloud-provider-might-be-just-what-you-need.html?nsdr=true
[11]: https://www.networkworld.com/article/3243262/what-is-a-hypervisor.html?nsdr=true
[12]: https://www.networkworld.com/article/3346017/vmware-preps-milestone-nsx-release-for-enterprise-cloud-push.html
[13]: https://docs.vmware.com/en/VMware-HCX/services/rn/VMware-HCX-Release-Notes.html
[14]: https://www.networkworld.com/article/3428497/google-cloud-to-offer-vmware-data-center-tools-natively.html
[15]: https://www.networkworld.com/article/3113394/vmware-cloud-foundation-integrates-virtual-compute-network-and-storage-systems.html
[16]: https://www.facebook.com/NetworkWorld/
[17]: https://www.linkedin.com/company/network-world

View File

@ -1,66 +0,0 @@
translating by valoniakim
How allowing myself to be vulnerable made me a better leader
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm)
Conventional wisdom suggests that leadership is strong, bold, decisive. In my experience, leadership does feel like that some days.
Some days leadership feels more vulnerable. Doubts creep in: Am I making good decisions? Am I the right person for this job? Am I focusing on the most important things?
The trick with these moments is to talk about them. When we keep them secret, our insecurity only grows. Being an open leader means pushing our vulnerability into the spotlight. Only then can we seek comfort from others who have experienced similar moments.
To demonstrate how this works, I'll share a story.
### A nagging question
If you work in the tech industry, you'll note an obvious focus on creating [an organization that's inclusive][1]--a place for diversity to flourish. Long story short: I thought I was a "diversity hire," someone hired because of my gender, not my ability. Even after more than 15 years in the industry, with all of the focus on diversity in hiring, that possibility got under my skin. Along came the doubts: Was I hired because I was the best person for the job--or because I was a woman? After years of knowing I was hired because I was the best person, the fact that I was female suddenly seemed more interesting to potential employers.
I rationalized that it didn't matter why I was hired; I knew I was the best person for the job and would prove it. I worked hard, delivered results, made mistakes, learned, and did everything an employer would want from an employee.
And yet the "diversity hire" question nagged. I couldn't shake it. I avoided the subject like the plague and realized that not talking about it was a signal that I had no choice but to deal with it. If I continued to avoid the subject, it was going to affect my work. And that's the last thing I wanted.
### Speaking up
Talking about diversity and inclusion can be awkward. So many factors enter into the decision to open up:
* Can we trust our co-workers with a vulnerable moment?
* Can a leader of a team be too vulnerable?
* What if I overstep? Do I damage my career?
In my case, I ended up at a lunch Q&A session with an executive who's a leader in many areas of the organization--especially candid conversations. A coworker asked the "Was I a diversity hire?" question. He stopped and spent a significant amount of time talking about this question to a room full of women. I'm not going to recount the entire discussion here; I will share the most salient point: If you know you're qualified for the job and you know the interview went well, don't doubt the outcome. Anyone who questions whether you're a diversity hire has their own questions to answer. You don't have to go on their journey.
Mic drop.
I wish I could say that I stopped thinking about this topic. I didn't. The question lingered: What if I am the exception to the rule? What if I was the one diversity hire? I realized that I couldn't avoid the nagging question.
A few weeks later I had a one-on-one with the executive. At the end of the conversation, I mentioned that, as a woman, I appreciate his candid conversations about diversity and inclusion. It's easier to talk about these topics when a recognized leader is willing to have the conversation. I also returned to the "Was I a diversity hire?" question. He didn't hesitate: We talked. At the end of the conversation, I realized that I was hungry to talk about these things that require bravery; I only needed a nudge and someone who cared enough to talk and listen.
Because I had the courage to be vulnerable--to go there with my question--I had the burden of my secret question lifted. Feeling physically lighter, I started to have constructive conversations around the questions of implicit bias, what we can do to be inclusive, and what diversity looks like. As I've learned, every person has a different answer when I ask the diversity question. I wouldn't have gotten to have all of these amazing conversations if I'd stayed stuck with my secret.
I had courage to talk, and I hope you will too.
Let's talk about these things that hold us back in terms of our ability to lead so we can be more open leaders in every sense of the phrase. Has allowing yourself to be vulnerable made you a better leader?
### About The Author
Angela Robertson works as a senior manager at Microsoft. She works with an amazing team of people passionate about community contributions and engaged in open organizations. Before joining Microsoft, Angela worked at Red Hat.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/how-allowing-myself-be-vulnerable-made-me-better-leader
Author: [Angela Robertson][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/arobertson98
[1]:https://opensource.com/open-organization/17/9/building-for-inclusivity

View File

@ -1,162 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux)
[#]: author: (redhat https://www.redhat.com)
Command Line Heroes: Season 1: OS Wars (Part 2: Rise of Linux)
======
Saron Yitbarek: Is this thing on? Cue the epic Star Wars crawl, and, action.
Voice Actor: [00:00:30] Episode Two: Rise of Linux®. The empire of Microsoft controls 90% of desktop users. Complete standardization of operating systems seems assured. However, the advent of the internet swerves the focus of the war from the desktop toward enterprise, where all businesses scramble to claim a server of their own. Meanwhile, an unlikely hero arises from amongst the band of open source rebels. Linus Torvalds, headstrong, bespectacled, releases his Linux system free of charge. Microsoft reels — and regroups.
Saron Yitbarek: [00:01:00] Oh, the nerd in me just loves that. So, where were we? Last time, Apple and Microsoft were trading blows, trying to dominate in a war over desktop users. By the end of episode one, we saw Microsoft claiming most of the prize. Soon, the entire landscape went through a seismic upheaval. That's all because of the rise of the internet and the army of developers that rose with it. The internet moves the battlefield from PC users in their home offices to giant business clients with hundreds of servers.
[00:01:30] This is a huge resource shift. Not only does every company out there wanting to remain relevant suddenly have to pay for server space and get a website built — they also have to integrate software to track resources, monitor databases, et cetera, et cetera. You're going to need a lot of developers to help you with that. At least, back then you did.
In part two of the OS wars, we'll see how that enormous shift in priorities, and the work of a few open source rebels like Linus Torvalds and Richard Stallman, managed to strike fear in the heart of Microsoft, and an entire software industry.
[00:02:00] I'm Saron Yitbarek and you're listening to Command Line Heroes, an original podcast from Red Hat. In each episode, we're bringing you stories about the people who transform technology from the command line up.
[00:02:30] Okay. Imagine for a second that you're Microsoft in 1991. You're feeling pretty good, right? Pretty confident. Assured global domination feels nice. You've mastered the art of partnering with other businesses, but you're still pretty much cutting out the developers, programmers, and sys admins that are the real foot soldiers out there. There is this Finnish geek named Linus Torvalds. He and his team of open source programmers are starting to put out versions of Linux, this OS kernel that they're duct-taping together.
[00:03:00] If you're Microsoft, frankly, you're not too concerned about Linux or even about open source in general, but eventually, the sheer size of Linux gets so big that it becomes impossible for Microsoft not to notice. The first version comes out in 1991 and it's got maybe 10,000 lines of code. A decade later, there will be three million lines of code. In case you're wondering, today it's at 20 million.
[00:03:30] For a moment, let's stay in the early 90s. Linux hasn't yet become the behemoth we know now. It's just this strangely viral OS that's creeping across the planet, and the geeks and hackers of the world are falling in love with it. I was too young in those early days, but I sort of wish I'd been there. At that time, discovering Linux was like gaining access to a secret society. Programmers would share the Linux CD set with friends the same way other people would share mixtapes of underground music.
Developer Tristram Oaten [00:03:40] tells the story of how he first encountered Linux when he was 16 years old.
Tristram Oaten: [00:04:00] We went on a scuba diving holiday, my family and I, to Hurghada, which is on the Red Sea. Beautiful place, highly recommend it. The first day, I drank the tap water. Probably, my mom told me not to. I was really sick the whole week — didn't leave the hotel room. All I had with me was a laptop with a fresh install of Slackware Linux, this thing that I'd heard about and was giving it a try. There were no extra apps, just what came on the eight CDs. By necessity, all I had to do this whole week was to get to grips with this alien system. I read man pages, played around with the terminal. I remember not knowing the difference between a single dot, meaning the current directory, and two dots, meaning the previous directory.
[00:04:30] I had no clue. I must have made so many mistakes, but slowly, over the course of this forcible solitude, I broke through this barrier and started to understand and figure out what this command line thing was all about. By the end of the holiday, I hadn't seen the pyramids, the Nile, or any Egyptian sites, but I had unlocked one of the wonders of the modern world. I'd unlocked Linux, and the rest is history.
Saron Yitbarek: You can hear some variation on that story from a lot of people. Getting access to that Linux command line was a transformative experience.
David Cantrell: This thing gave me the source code. I was like, "That's amazing."
Saron Yitbarek: We're at a 2017 Linux developers conference called Flock to Fedora.
David Cantrell: ... very appealing. I felt like I had more control over the system and it just drew me in more and more. From there, I guess, after my first Linux kernel compile in 1995, I was hooked, so, yeah.
Saron Yitbarek: Developers David Cantrell and Joe Brockmeier.
Joe Brockmeier: I was going through the cheap software and found a four-CD set of Slackware Linux. It sounded really exciting and interesting so I took it home, installed it on a second computer, started playing with it, and really got excited about two things. One was, I was excited not to be running Windows, and I was excited by the open source nature of Linux.
Saron Yitbarek: [00:06:00] That access to the command line was, in some ways, always there. Decades before open source really took off, there was always a desire to have complete control, at least among developers. Go way back to a time before the OS wars, before Apple and Microsoft were fighting over their GUIs. There were command line heroes then, too. Professor Paul Jones is the director of the online library ibiblio.org. He worked as a developer during those early days.
Paul Jones: [00:07:00] The internet, by its nature, at that time, was less client server, totally, and more peer to peer. We're talking about, really, some sort of VAX to VAX, some sort of scientific workstation, the scientific workstation. That doesn't mean that client and server relationships and applications weren't there, but it does mean that the original design was to think of how to do peer-to-peer things, the opposite of what IBM had been doing, in which they had dumb terminals that had only enough intelligence to manage the user interface, but not enough intelligence to actually let you do anything in the terminal that would expose anything to it.
Saron Yitbarek: As popular as GUI was becoming among casual users, there was always a pull in the opposite direction for the engineers and developers. Before Linux in the 1970s and 80s, that resistance was there, with Emacs and GNU. With Stallman's Free Software Foundation, certain folks were always begging for access to the command line, but it was Linux in the 1990s that delivered like no other.
[00:07:30] The early lovers of Linux and other open source software were pioneers. I'm standing on their shoulders. We all are.
You're listening to Command Line Heroes, an original podcast from Red Hat. This is part two of the OS wars: Rise of Linux.
Steven Vaughan-Nichols: By 1998, things have changed.
Saron Yitbarek: Steven Vaughan-Nichols is a contributing editor at zdnet.com, and he's been writing for decades about the business side of technology. He describes how Linux slowly became more and more popular until the number of volunteer contributors was way larger than the number of Microsoft developers working on Windows. Linux never really went after Microsoft's desktop customers, though, and maybe that's why Microsoft ignored them at first. Where Linux did shine was in the server room. When businesses went online, each one required a unique programming solution for their needs.
[00:08:30] Windows NT comes out in 1993 and it's competing with other server operating systems, but lots of developers are thinking, "Why am I going to buy an AIX box or a large Windows box when I could set up a cheap Linux-based system with Apache?" Point is, Linux code started seeping into just about everything online.
Steven Vaughan-Nichols: [00:09:00] Microsoft realizes that Linux, quite to their surprise, is actually beginning to get some of the business, not so much on the desktop, but on business servers. As a result of that, they start a campaign, what we like to call FUD — fear, uncertainty and doubt — saying, "Oh this Linux stuff, it's really not that good. It's not very reliable. You can't trust it with anything."
Saron Yitbarek: [00:09:30] That soft, propaganda-style attack goes on for a while. Microsoft wasn't the only one getting nervous about Linux, either. It was really a whole industry versus that weird new guy. For example, anyone with a stake in UNIX was likely to see Linux as a usurper. Famously, the SCO Group, which had produced a version of UNIX, waged lawsuits for over a decade to try and stop the spread of Linux. SCO ultimately failed and went bankrupt. Meanwhile, Microsoft kept searching for their opening. They were a company that needed to make a move. It just wasn't clear what that move was going to be.
Steven Vaughan-Nichols: [00:10:30] What will make Microsoft really concerned about it is the next year, in 2000, IBM will announce that they will invest a billion dollars in Linux in 2001. Now, IBM is not really in the PC business anymore. They're not out yet, but they're going in that direction, but what they are doing is they see Linux as being the future of servers and mainframe computers, which, spoiler alert, IBM was correct. Linux is going to dominate the server world.
Saron Yitbarek: This was no longer just about a bunch of hackers loving their Jedi-like control of the command line. This was about the money side working in Linux's favor in a major way. John "Mad Dog" Hall, the executive director of Linux International, has a story that explains why that was. We reached him by phone.
John Hall: [00:11:30] A friend of mine named Dirk Holden was a German systems administrator at Deutsche Bank in Germany, and he also worked in the graphics projects for the early days of the X Windows system for PCs. I visited him one day at the bank, and I said, "Dirk, you have 3,000 servers here at the bank and you use Linux. Why don't you use Microsoft NT?" He looked at me and he said, "Yes, I have 3,000 servers, and if I used Microsoft Windows NT, I would need 2,999 systems administrators." He says, "With Linux, I only need four." That was the perfect answer.
Saron Yitbarek: [00:12:00] The thing programmers are getting obsessed with also happens to be deeply attractive to big business. Some businesses were wary. The FUD was having an effect. They heard open source and thought, "Open. That doesn't sound solid. It's going to be chaotic, full of bugs," but as that bank manager pointed out, money has a funny way of convincing people to get over their hangups. Even little businesses, all of which needed websites, were coming on board. The cost of working with a cheap Linux system over some expensive proprietary option, there was really no comparison. If you were a shop hiring a pro to build your website, you wanted them to use Linux.
[00:12:30] Fast forward a few years. Linux runs everybody's website. Linux has conquered the server world, and then, along comes the smartphone. Apple and their iPhones take a sizeable share of the market, of course, and Microsoft hoped to get in on that, except, surprise, Linux was there, too, ready and raring to go.
Author and journalist James Allworth.
James Allworth: [00:13:00] There was certainly room for a second player, and that could well have been Microsoft, but for the fact of Android, which was fundamentally based on Linux, and because Android, famously acquired by Google, and now running a majority of the world's smartphones, Google built it on top of that. They were able to start with a very sophisticated operating system and a cost basis of zero. They managed to pull it off, and it ended up locking Microsoft out of the next generation of devices, by and large, at least from an operating system perspective.
Saron Yitbarek: [00:13:30] The ground was breaking up, big time, and Microsoft was in danger of falling into the cracks. John Gossman is the chief architect on the Azure team at Microsoft. He remembers the confusion that gripped the company at that time.
John Gossman: [00:14:00] Like a lot of companies, Microsoft was very concerned about IP pollution. They thought that if you let developers use open source they would likely just copy and paste bits of code into some product and then some sort of a viral license might take effect that ... They were also very confused, I think, it was just culturally, a lot of companies, Microsoft included, were confused on the difference between what open source development meant and what the business model was. There was this idea that open source meant that all your software was free and people were never going to pay anything.
Saron Yitbarek: [00:14:30] Anybody invested in the old, proprietary model of software is going to feel threatened by what's happening here. When you threaten an enormous company like Microsoft, yeah, you can bet they're going to react. It makes sense they were pushing all that FUD — fear, uncertainty and doubt. At the time, an “us versus them” attitude was pretty much how business worked. If they'd been any other company, though, they might have kept that old grudge, that old thinking, but then, in 2013, everything changes.
[00:15:00] Microsoft's cloud computing service, Azure, goes online and, shockingly, it offers Linux virtual machines from day one. Steve Ballmer, the CEO who called Linux a cancer, he's out, and a new forward-thinking CEO, Satya Nadella, has been brought in.
John Gossman: Satya has a different attitude. He's another generation. He's a generation younger than Paul and Bill and Steve were, and had a different perspective on open source.
Saron Yitbarek: John Gossman, again, from Microsoft's Azure team.
John Gossman: [00:16:00] We added Linux support into Azure about four years ago, and that was for very pragmatic reasons. If you go to any enterprise customer, you will find that they are not trying to decide whether to use Windows or to use Linux or to use .net or to use Java™. They made all those decisions a long time ago — about 15 years or so ago, there was some of this argument. Now, every company that I have ever seen has a mix of Linux and Java and Windows and .net and SQL Server and Oracle and MySQL — proprietary source code-based products and open source code products.
If you're going to operate a cloud and you're going to allow and enable those companies to run their businesses on the cloud, you simply cannot tell them, "You can use this software but you can't use this software."
Saron Yitbarek: [00:16:30] That's exactly the philosophy that Satya Nadella adopted. In the fall of 2014, he gets up on stage and he wants to get across one big, fat point. Microsoft loves Linux. He goes on to say that 20% of Azure is already Linux and that Microsoft will always have first-class support for Linux distros. There's not even a whiff of that old antagonism toward open source.
To drive the point home, there's literally a giant sign behind them that reads, "Microsoft hearts Linux." Aww. For some of us, that turnaround was a bit of a shock, but really, it shouldn't have been. Here's Steven Levy, a tech journalist and author.
Steven Levy: [00:17:30] When you're playing a football game and the turf becomes really slick, maybe you switch to a different kind of footwear in order to play on that turf. That's what they were doing. They can't deny reality and there are smart people there so they had to realize that this is the way the world is and put aside what they said earlier, even though they might be a little embarrassed at their earlier statements, but it would be crazy to let their statements about how horrible open source was earlier, affect their smart decisions now.
Saron Yitbarek: [00:18:00] Microsoft swallowed its pride in a big way. You might remember that Apple, after years of splendid isolation, finally shifted toward a partnership with Microsoft. Now it was Microsoft's turn to do a 180. After years of battling the open source approach, they were reinventing themselves. It was change or perish. Steven Vaughan-Nichols.
Steven Vaughan-Nichols: [00:18:30] Even a company the size of Microsoft simply can't compete with the thousands of open source developers working on all these other major projects, including Linux. They were very loath to do so for a long time. The former Microsoft CEO, Steve Ballmer, hated Linux with a passion. Because of its GPL license, it was a cancer, but once Ballmer was finally shown the door, the new Microsoft leadership said, "This is like trying to order the tide to stop coming in. The tide is going to keep coming in. We should work with Linux, not against it."
Saron Yitbarek: [00:19:00] Really, one of the big wins in the history of online tech is the way Microsoft was able to make this pivot, when they finally decided to. Of course, older, hardcore Linux supporters were pretty skeptical when Microsoft showed up at the open source table. They weren't sure if they could embrace these guys, but, as Vaughan-Nichols points out, today's Microsoft simply is not your mom and dad's Microsoft.
Steven Vaughan-Nichols: [00:19:30] Microsoft 2017 is not Steve Ballmer's Microsoft, nor is it Bill Gates' Microsoft. It's an entirely different company with a very different approach and, again, once you start using open source, it's not like you can really pull back. Open source has devoured the entire technology world. People who have never heard of Linux as such, don't know it, but every time they're on Facebook, they're running Linux. Every time you do a Google search, you're running Linux.
[00:20:00] Every time you do anything with your Android phone, you're running Linux again. It literally is everywhere, and Microsoft can't stop that, and thinking that Microsoft can somehow take it all over, I think is naïve.
Saron Yitbarek: [00:20:30] Open source supporters might have been worrying about Microsoft coming in like a wolf in the flock, but the truth is, the very nature of open source software protects it from total domination. No single company can own Linux and control it in any specific way. Greg Kroah-Hartman is a fellow at the Linux Foundation.
Greg Kroah-Hartman: Every company and every individual contributes to Linux in a selfish manner. They're doing so because they want to solve a problem that they have, be it hardware isn't working, or they want to add a new feature to do something else, or want to take it in a direction that they'll build that they can use for their product. That's great, because then everybody benefits from that because they're releasing the code back, so that everybody can use it. It's because of that selfishness that all companies and all people have, everybody benefits.
Saron Yitbarek: [00:21:30] Microsoft has realized that in the coming cloud wars, fighting Linux would be like going to war with, well, a cloud. Linux and open source aren't the enemy, they're the atmosphere. Today, Microsoft has joined the Linux Foundation as a platinum member. They became the number one contributor to open source on GitHub. In September, 2017, they even joined the Open Source Initiative. These days, Microsoft releases a lot of its code under open licenses. Microsoft's John Gossman describes what happened when they open sourced .net. At first, they didn't really think they'd get much back.
John Gossman: [00:22:00] We didn't count on contributions from the community, and yet, three years in, over 50 percent of the contributions to the .net framework libraries, now, are coming from outside of Microsoft. This includes big pieces of code. Samsung has contributed ARM support to .net. Intel and ARM and a couple other chip people have contributed code generation specific for their processors to the .net framework, as well as a surprising number of fixes, performance improvements, and stuff — from just individual contributors to the community.
Saron Yitbarek: Up until a few years ago, the Microsoft we have today, this open Microsoft, would have been unthinkable.
[00:23:00] I'm Saron Yitbarek, and this is Command Line Heroes. Okay, we've seen titanic battles for the love of millions of desktop users. We've seen open source software creep up behind the proprietary titans, and nab huge market share. We've seen fleets of command line heroes transform the programming landscape into the one handed down to people like me and you. Today, big business is absorbing open source software, and through it all, everybody is still borrowing from everybody.
[00:23:30] In the tech wild west, it's always been that way. Apple gets inspired by Xerox, Microsoft gets inspired by Apple, Linux gets inspired by UNIX. Evolve, borrow, constantly grow. In David and Goliath terms, open source software is no longer a David, but, you know what? It's not even Goliath, either. Open source has transcended. It's become the battlefield that others fight on. As the open source approach becomes inevitable, new wars, wars that are fought in the cloud, wars that are fought on the open source battlefield, are ramping up.
Here's author Steven Levy.
Steven Levy: [00:24:00] Basically, right now, we have four or five companies, if you count Microsoft, that in various ways are fighting to be the platform for all we do, for artificial intelligence, say. You see wars between intelligent assistants, and guess what? Apple has an intelligent assistant, Siri. Microsoft has one, Cortana. Google has the Google Assistant. Samsung has an intelligent assistant. Amazon has one, Alexa. We see these battles shifting to different areas, there. Maybe, you could say, the hottest one would be whose AI platform is going to control all the stuff in our lives, and those five companies are all competing for that.
Saron Yitbarek: If you're looking for another rebel that's going to sneak up behind Facebook or Google or Amazon and blindside them the way Linux blindsided Microsoft, you might be looking a long time, because as author James Allworth points out, being a true rebel is only getting harder and harder.
James Allworth: [00:25:30] Scale's always been an advantage but the nature of scale advantages are almost ... Whereas, I think previously they were more linear in nature, now it's more exponential in nature, and so, once you start to get out in front with something like this, it becomes harder and harder for a new player to come in and catch up. I think this is true of the internet era in general, whether it's scale like that or the importance and advantages that data bestow on an organization in terms of its ability to compete. Once you get out in front, you attract more customers, and then that gives you more data and that enables you to do an even better job, and then, why on earth would you want to go with the number two player, because they're so far behind? I think it's going to be no different in cloud.
Saron Yitbarek: [00:26:00] This story began with singular heroes like Steve Jobs and Bill Gates, but the progress of technology has taken on a crowdsourced, organic feel. I think it's telling that our open source hero, Linus Torvalds, didn't even have a real plan when he first invented the Linux kernel. He was a brilliant young developer for sure, but he was also like a single drop of water at the very front of a tidal wave. The revolution was inevitable. It's been estimated that for a proprietary company to create a Linux distribution in their old-fashioned, proprietary way, it would cost them well over $10 billion. That points to the power of open source.
[00:26:30] In the end, it's not something that a proprietary model is going to compete with. Successful companies have to remain open. That's the big, ultimate lesson in all this. Something else to keep in mind: When we're wired together, our capacity to grow and build on what we've already accomplished becomes limitless. As big as these companies get, we don't have to sit around waiting for them to give us something better. Think about the new developer who learns to code for the sheer joy of creating, the mom who decides that if nobody's going to build what she needs, then she'll build it herself.
Wherever tomorrow's great programmers come from, they're always going to have the capacity to build the next big thing, so long as there's access to the command line.
[00:27:30] That's it for our two-part tale on the OS wars that shaped our digital lives. The struggle for dominance moved from the desktop to the server room, and ultimately into the cloud. Old enemies became unlikely allies, and a crowdsourced future left everything open. Listen, I know, there are a hundred other heroes we didn't have space for in this history trip, so drop us a line. Share your story. Redhat.com/commandlineheroes. I'm listening.
We're spending the rest of the season learning what today's heroes are creating, and what battles they're going through to bring their creations to life. Come back for more tales — from the epic front lines of programming. We drop a new episode every two weeks. In a couple weeks' time, we bring you episode three: the Agile Revolution.
[00:28:00] Command Line Heroes is an original podcast from Red Hat. To get new episodes delivered automatically for free, make sure to subscribe to the show. Just search for “Command Line Heroes” in Apple Podcasts, Spotify, Google Play, and pretty much everywhere else you can find podcasts. Then, hit “subscribe” so you will be the first to know when new episodes are available.
I'm Saron Yitbarek. Thanks for listening. Keep on coding.
--------------------------------------------------------------------------------
via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux
Author: [redhat][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.redhat.com
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,538 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (I Used The Web For A Day On A 50 MB Budget — Smashing Magazine)
[#]: via: (https://www.smashingmagazine.com/2019/07/web-on-50mb-budget/)
[#]: author: (Chris Ashton https://www.smashingmagazine.com/author/chrisbashton)
I Used The Web For A Day On A 50 MB Budget
======
Data can be prohibitively expensive, especially in developing countries. Chris Ashton puts himself in the shoes of someone on a tight data budget and offers practical tips for reducing our websites' data footprint.
This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs.
Last time, I [navigated the web for a day using Internet Explorer 8][7]. This time, I browsed the web for a day on a 50 MB budget.
### Why 50 MB?
Many of us are lucky enough to be on mobile plans which allow several gigabytes of data transfer per month. Failing that, we are usually able to connect to home or public WiFi networks that are on fast broadband connections and have effectively unlimited data.
But there are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.
> People often buy data packages of just tens of megabytes at a time, making a gigabyte a relatively large and therefore expensive amount of data to buy.
> — Dan Howdle, consumer telecoms analyst at Cable.co.uk
Just how expensive are we talking?
#### The Cost Of Mobile Data
A 2018 [study by cable.co.uk][8] found that Zimbabwe was the most expensive country in the world for mobile data, where 1 GB cost an average of $75.20, ranging from $12.50 to $138.46. The enormous range in price is due to smaller amounts of data being very expensive, getting proportionally cheaper the bigger the data plan you commit to. You can read the [study methodology][9] for more information.
Zimbabwe is by no means a one-off. Equatorial Guinea, Saint Helena and the Falkland Islands are next in line, with 1 GB of data costing $65.83, $55.47 and $47.39 respectively. These countries generally have a combination of poor technical infrastructure and low adoption, meaning data is both costly to deliver and doesn't have the economy of scale to drive costs down.
Data is expensive in parts of Europe too. A gigabyte of data in Greece will set you back $32.71; in Switzerland, $20.22. For comparison, the same amount of data costs $6.66 in the UK, or $12.37 in the USA. On the other end of the scale, India is the cheapest place in the world for data, at an average cost of $0.26. Kyrgyzstan, Kazakhstan and Ukraine follow at $0.27, $0.49 and $0.51 per GB respectively.
The speed of mobile networks, too, varies considerably between countries. Perhaps surprisingly, [users experience faster speeds over a mobile network than WiFi][10] in at least 30 countries worldwide, including Australia and France. South Korea has the [fastest mobile download speed][11], averaging 52.4 Mbps, but Iraq has the slowest, averaging 1.6 Mbps download and 0.7 Mbps upload. The USA ranks 40th in the world for mobile download speeds, at around 34 Mbps, and is [at risk of falling further behind][12] as the world moves towards 5G.
As for mobile network connection type, 84.7% of user connections in the UK are on 4G, compared to 93% in the USA, and 97.5% in South Korea. This compares with less than 50% in Uzbekistan and less than 60% in Algeria, Ecuador, Nepal and Iraq.
#### The Cost Of Broadband Data
Meanwhile, a [study of the cost of broadband in 2018][13] shows that a broadband connection in Niger costs $263 per megabit per month. This metric is a little difficult to comprehend, so here's an example: if the average cost of broadband packages in a country is $22, and the average download speed offered by the packages is 10 Mbps, then the cost per megabit per month would be $2.20.
It's an interesting metric, and one that acknowledges that broadband speed is as important a factor as the data cap. A cost of $263 suggests a combination of extremely slow and extremely expensive broadband. For reference, the metric is $1.19 in the UK and $1.26 in the USA.
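To make the study's metric concrete, here's a trivial JavaScript sketch of the calculation from the example above (the helper name is mine, not the study's):

```
// Hypothetical helper illustrating the study's metric:
// average package cost divided by average advertised download speed.
function costPerMegabitPerMonth(monthlyCostUsd, speedMbps) {
  return monthlyCostUsd / speedMbps;
}

console.log(costPerMegabitPerMonth(22, 10)); // 2.2, i.e. the $2.20 example above
```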
What's perhaps easier to comprehend is the average cost of a broadband package. Note that this study was looking for the cheapest broadband packages on offer, ignoring whether or not these packages had a data cap, so it provides a useful ballpark figure rather than the cost of data per se.
On package cost alone, Mauritania has the most expensive broadband in the world, at an average of $768.16 (a range of $307.26 to $1,368.72). This enormous cost includes building physical lines to the property, since few already exist in Mauritania. At 0.7 Mbps, Mauritania also has one of the slowest broadband networks in the world.
[Taiwan has the fastest broadband in the world][14], at a mean speed of 85 Mbps. Yemen has the slowest, at 0.38 Mbps. But even countries with good established broadband infrastructure have so-called not-spots. The United Kingdom is ranked 34th out of 207 countries for broadband speed, but in July 2019 there was [still a school in the UK without broadband][15].
The average cost of a broadband package in the UK is $39.58, and in the USA is $67.69. The cheapest average in the world is Ukraine's, at just $5, although the cheapest broadband deal of them all was found in Kyrgyzstan ($1.27 — against the country average of $108.22).
Zimbabwe was the most costly country for mobile data, and the statistics aren't much better for its broadband, with an average cost of $128.71 and a per megabit per month cost of $6.89.
#### Absolute Cost vs Cost In Real Terms
All of the costs outlined so far are the absolute costs in USD, based on the exchange rates at the time of the study. These costs have [not been adjusted for cost of living][16], meaning that for many countries the cost is actually far higher in real terms.
I'm going to limit my browsing today to 50 MB, which in Zimbabwe would cost around $3.67 on a mobile data tariff. That may not sound like much, but teachers in Zimbabwe were striking this year because their [salaries had fallen to just $2.50 a day][17].
For comparison, $3.67 is around half the [$7.25 minimum wage in the USA][18]. As a Zimbabwean, I'd have to work for around a day and a half to earn the money to buy this 50 MB of data, compared to just half an hour in the USA. It's not easy to compare cost of living between countries, but on wages alone the $3.67 cost of 50 MB of data in Zimbabwe would feel like $52 to an American on minimum wage.
### Setting Up The Experiment
I launched Chrome and opened the dev tools, where I throttled the network to a slow 3G connection. I wanted to simulate a slow connection like those experienced by users in Uzbekistan, to see what kind of experience websites would give me. I also throttled my CPU to simulate being on a lower end device.
[![][19]][20]I opted to throttle my network to Slow 3G and my CPU to 6x slowdown. ([Large preview][20])
I installed [ModHeader][21] and set the [Save-Data header][22] to let websites know I want to minimise my data usage. This is also the header set by Chrome for Android's Lite mode, which I'll cover in more detail later.
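As an aside, sites can respect this preference on either end: servers receive it as a `Save-Data: on` request header, and client-side code can check it via the Network Information API where supported. A minimal sketch (not part of my setup, and the class name is made up):

```
// Sketch: honouring the user's Save-Data preference in the browser.
// navigator.connection isn't implemented everywhere, so feature-detect first.
var prefersSaveData = navigator.connection && navigator.connection.saveData;

if (prefersSaveData) {
  // For example: swap in low-resolution images, skip video preloading.
  document.documentElement.classList.add('save-data');
}
```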
I downloaded [TripMode][23], an application for Mac which gives you control over which apps on your Mac can access the internet. Any other application's internet access is automatically blocked.
[![](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/6964df33-b6ca-4fe0-bbc3-8b9f3eb525cf/trip-mode.png)][24]You can enable/disable individual apps from connecting to the internet with TripMode. I enabled Chrome. ([Large preview][24])
How far do I predict my 50 MB budget will take me? With the [average weight of a web page being almost 1.7 MB][25], that suggests I've got around 29 pages in my budget, although probably a few more than that if I'm able to stay on the same sites and leverage browser caching.
Throughout the experiment I will suggest performance tips to speed up the [first contentful paint][26] and perceived loading time of the page. Some of these tips may not affect the amount of data transferred directly, but do generally involve deferring the download of less important resources, which on slow connections may mean the resources are never downloaded and data is saved.
### The Experiment
Without any further ado, I loaded google.com, using 402 KB of my budget and spending $0.03 (around 1% of my Zimbabwe budget).
[![402 KB transferred, 1.1 MB resources, 24 requests][27]][28]402 KB transferred, 1.1 MB resources, 24 requests. ([Large preview][28])
All in all, not a bad page size, but I wondered where those 24 network requests were coming from and whether or not the page could be made any lighter.
#### Google Homepage — DOM
[![][29]][30]Chrome devtools screenshot of the DOM, where I've expanded one inline `style` tag. ([Large preview][30])
Looking at the page markup, there are no external stylesheets — all of the CSS is inline.
##### Performance Tip #1: Inline Critical CSS
This is good for performance as it saves the browser having to make an additional network request in order to fetch an external stylesheet, so the styles can be parsed and applied immediately for the first contentful paint. There's a trade-off to be made here, as external stylesheets can be cached but inline ones cannot (unless you [get clever with JavaScript][31]).
The general advice is for your [critical styles][32] (anything [above-the-fold][33]) to be inline, and for the rest of your styling to be external and loaded asynchronously. Asynchronous loading of CSS can be achieved in [one remarkably clever line of HTML][34]:
```
<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">
```
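One caveat worth noting: that one-liner relies on JavaScript firing the `onload` handler, so it's commonly paired with a `<noscript>` fallback along these lines:

```
<noscript><link rel="stylesheet" href="/path/to/my.css"></noscript>
```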
The devtools show a prettified version of the DOM. If you want to see what was actually downloaded to the browser, switch to the Sources tab and find the document.
[![A wall of minified code.][35]][36]Switching to Sources and finding the index shows the raw HTML that was delivered to the browser. What a mess! ([Large preview][36])
You can see there is a LOT of inline JavaScript here. It's worth noting that it has been uglified rather than merely minified.
##### Performance Tip #2: Minify And Uglify Your Assets
Minification removes unnecessary spaces and characters, but uglification actually mangles the code to be shorter. The tell-tale sign is that the code contains short, machine-generated variable names rather than untouched source code. This is good as it means the script is smaller and quicker to download.
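If you want to try this on your own scripts, a tool such as terser handles both steps; here's a minimal Node sketch (assuming `npm install terser`; the file paths are illustrative):

```
// Minify and mangle ("uglify") a script with terser's Node API.
const { minify } = require('terser');
const fs = require('fs');

const source = fs.readFileSync('src/app.js', 'utf8');

minify(source, { compress: true, mangle: true }).then((result) => {
  fs.writeFileSync('dist/app.min.js', result.code);
});
```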
Even so, inline scripts look to be roughly 120 KB of the 210 KB page resource (about half the 60 KB gzipped size). In addition, there are five external JavaScript files amounting to 291 KB of the 402 KB downloaded:
[![Network tab of DevTools showing the external javascript files][37]][38]Five external JavaScript files in the Network tab of the devtools. ([Large preview][38])
This means that JavaScript accounts for about 80 percent of the overall page weight.
This isn't useless JavaScript; Google has to have some in order to display suggestions as you type. But I suspect a lot of it is tracking code and advertising setup.
For comparison, I disabled JavaScript and reloaded the page:
[![DevTools showing only 5 network requests][39]][40]The disabled JS version of Google search was only 102 KB and had just 5 network requests. ([Large preview][40])
The JS-disabled version of Google search is just 102 KB, as opposed to 402 KB. Although Google can't provide autosuggestions under these conditions, the site is still functional, and I've just cut my data usage down to a quarter of what it was. If I really did have to limit my data usage in the long term, one of the first things I'd do is disable JavaScript. [It's not as bad as it sounds][41].
##### Performance Tip #3: Less Is More
Inlining, uglifying and minifying assets is all well and good, but the best performance comes from not sending down the assets in the first place.
* Before adding any new features, do you have a [performance budget][42] in place?
* Before adding JavaScript to your site, can your feature be accomplished using plain HTML? (For example, [HTML5 form validation][43]; see the sketch after this list.)
* Before pulling a large JavaScript or CSS library into your application, use something like [bundlephobia.com][44] to measure how big it is. Is the convenience worth the weight? Can you accomplish the same thing using vanilla code at a much smaller data size?
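As a quick sketch of that HTML-over-JavaScript point, the browser's built-in form validation needs no script at all (the endpoint is a placeholder):

```
<form action="/subscribe" method="post">
  <!-- The browser refuses to submit until this is a well-formed email address -->
  <input type="email" name="email" required>
  <button type="submit">Subscribe</button>
</form>
```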
#### Analysing The Resource Info
There's a lot to unpack here, so let's get cracking. I've only got 50 MB to play with, so I'm going to milk every bit of this page load. Settle in for a short Chrome Devtools tutorial.
402 KB transferred, but 1.1 MB of resources: what does that actually mean?
It means 402 KB of content was actually downloaded, but in its compressed form (using a compression algorithm such as [gzip or brotli][45]). The browser then had to do some work to unpack it into something meaningful. The total size of the unpacked data is 1.1 MB.
This unpacking isn't free — [there are a few milliseconds of overhead in decompressing the resources][46]. But that's a negligible overhead compared to sending 1.1 MB down the wire.
##### Performance Tip #4: Compress Text-based Assets
As a general rule, always compress your assets, using something like gzip. But don't use compression on your images and other binary files — you should optimize these in advance at source. Compression could actually end up [making them bigger][47].
And, if you can, [avoid compressing files that are 1500 bytes or smaller][47]. A typical network packet carries at most around 1500 bytes of data (the Ethernet MTU), so a file that small already fits in a single packet; compressing it to, say, 800 bytes saves nothing, as it's still transmitted in one packet. The cost is negligible, but it wastes some compression CPU time on the server and decompression CPU time on the client.
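For reference, compression is negotiated over HTTP headers: the browser advertises the encodings it understands, and the server labels the response accordingly. An illustrative exchange:

```
GET /styles.css HTTP/1.1
Accept-Encoding: gzip, deflate, br

HTTP/1.1 200 OK
Content-Encoding: br
Content-Type: text/css
```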
Now back to the Network tab in Chrome: let's dig into those priorities. Notice that resources have priority "Highest" to "Lowest" — these are the browser's best guess as to what are the more important resources to download. The higher the priority, the sooner the browser will try to download the asset.
##### Performance Tip #5: Give Resource Hints To The Browser
The browser will guess at what the highest priority assets are, but you can [provide a resource hint][48] using the `<link rel="preload">` tag, instructing the browser to download the asset as soon as possible. It's a good idea to preload fonts, logos and anything else that appears above the fold.
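For example, a hypothetical font preload (the path is made up); note that font preloads need the `crossorigin` attribute even for same-origin fonts:

```
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
```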
Let's talk about caching. I'm going to hold ALT and right-click to change my column headers to unlock some more juicy information. We're going to check out Cache-Control.
<https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/88384090-3ed6-482c-a2b4-7aeb057c3b19/cache-control.png>There are lots of interesting fields tucked away behind ALT. ([Large preview][49])
Cache-Control denotes whether or not a resource can be cached, how long it can be cached for, and what rules it should follow around [revalidating][50]. Setting proper cache values is crucial to keeping the data cost of repeat visits down.
##### Performance Tip #6: Set cache-control Headers On All Cacheable Assets
Note that the cache-control value begins with a directive of `public` or `private`, followed by an expiration value (e.g. `max-age=31536000`). What does the directive mean, and why the oddly specific `max-age` value?
[![Screenshot of Google network tab with cache-control column visible][51]][52]A mixture of max-age values and public/private. ([Large preview][52])
The value `31536000` is the number of seconds there are in a year, and is the maximum value the HTTP specification recommends. It is common to see this value applied to all static assets, and it effectively means "this resource isn't going to change". In practice, [no browser is going to cache for an entire year][53], but it will cache the asset for as long as makes sense.
To explain the public/private directive, we must explain the two main caches that exist outside the origin server. First, there is the traditional browser cache, where the resource is stored on the user's machine (the client). And then there is the CDN cache, which sits between the client and the server; resources are cached at the CDN level to prevent the CDN from requesting the resource from the origin server over and over again.
A `Cache-Control` directive of `public` allows the resource to be cached in both the client and the CDN. A value of `private` means only the client can cache it; the CDN is not supposed to. This latter value is typically used for pages or assets that exist behind authentication, where it is fine to be cached on the client but we wouldn't want to leak private information by caching it in the CDN and delivering it to other users.
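Putting those pieces together, here are two illustrative headers: the first suits a fingerprinted static asset that never changes, the second a personalised page that must always be revalidated:

```
Cache-Control: public, max-age=31536000
Cache-Control: private, max-age=0
```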
[![Screenshot of Google logo cache-control setting: private, max-age=31536000][54]][55]A mixture of max-age values and public/private. ([Large preview][55])
One thing that got my attention was that the Google logo has a cache control of "private". Other images on the page do have a public cache, and I don't know why the logo would be treated any differently. If you have any ideas, let me know in the comments!
I refreshed the page and most of the resources were served from cache, apart from the page itself, which as you've seen already is `private, max-age=0`, meaning it cannot be cached. This is normal for dynamic web pages where it is important that the user always gets the very latest page when they refresh.
It was at this point I accidentally clicked on an Explanation URL in the devtools, which took me to the [network analysis reference][56], costing me about 5 MB of my budget. Oops.
### Google Dev Docs
4.2 MB of this new 5 MB page was down to images; specifically SVGs. The weightiest of these was 186 KB, which isn't particularly big — there were just so many of them, and they all downloaded at once.
<https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/39870718-c891-4d34-bd1b-74a9d28986a0/gif-scrolling-down-the-very-long-dev-docs-page.gif>This is a loooong page. All the images downloaded on page load. ([Large preview][57])
That 5 MB page load was 10% of my budget for today. So far I've used 5.5 MB, including the no-JavaScript reload of the Google homepage, and spent $0.40. I didn't even mean to open this page.
What would have been a better user experience here?
##### Performance Tip #7: Lazy-load Your Images
Ordinarily, if I accidentally clicked on a link, I would hit the back button in my browser. I'd have received no benefit whatsoever from downloading those images — what a waste of 4.2 MB!
Apart from video, where you generally know what you're getting yourself into, images are by far the biggest culprit for data usage on the web. A [study of the world's top 500 websites][58] found that images account for up to 53% of the average page weight. "This means they have a big impact on page-loading times and subsequently overall performance".
Instead of downloading all of the images on page load, it is good practice to lazy-load the images so that only users who are engaged with the page pay the cost of downloading them. Users who choose not to scroll below the fold therefore don't waste any unnecessary bandwidth downloading images they'll never see.
There's a great [css-tricks.com guide to rolling out lazy-loading for images][59] which offers a good balance between those on good connections, those on poor connections, and those with JavaScript disabled.
If this page had implemented lazy loading as per the guide above, each of the 38 SVGs would have been represented by a 1 KB placeholder image by default, and only loaded into view on scroll.
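The core of that technique is small enough to sketch here. Assuming each image keeps its real source in a hypothetical `data-src` attribute, an `IntersectionObserver` can swap it in as the image approaches the viewport:

```
<img class="lazy" src="/img/placeholder.svg" data-src="/img/diagram-1.svg" alt="Diagram">
<script>
  // Swap in the real source once the image is about to enter the viewport
  var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target);
      }
    });
  });
  document.querySelectorAll('img.lazy').forEach(function (img) {
    observer.observe(img);
  });
</script>
```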
##### Performance Tip #8: Use The Right Format For Your Images
I thought that Google had missed a trick by not using [WebP][60], which is an image format that is 26% smaller in size compared to PNGs (with no loss in quality) and 25-34% smaller in size compared to JPEGs (and of a comparable quality). I thought I'd have a go at converting SVG to WebP.
Converting to WebP did bring one of the SVGs down from 186 KB to just 65 KB, but actually, looking at the images side by side, the WebP came out grainy:
[![Comparison of the two images][61]][62]The SVG (left) is noticeably crisper than the WebP (right). ([Large preview][62])
I then tried converting one of the PNGs to WebP, which is supposed to be lossless and should come out smaller. However, the WebP output was *heavier* (127 KB, from 109 KB)!
[![Comparison of the two images][63]][64]The PNG (left) is a similar quality to the WebP (right) but is smaller at 109 KB compared to 127 KB. ([Large preview][64])
This surprised me. WebP isn't necessarily the silver bullet we think it is, and even Google have neglected to use it on this page.
So my advice would be: where possible, experiment with different image formats on a per-image basis. The format that keeps the best quality for the smallest size may not be the one you expect.
Now back to the DOM. Notice the `async` attribute on the Google Analytics script?
[![Screenshot of performance analysis output of devtools][65]][66]Google analytics has low priority. ([Large preview][66])
Despite being one of the first things in the head of the document, this was given a low priority, as we've explicitly opted out of being a blocking request by using the `async` attribute.
A blocking request is one that stops the rendering of the page. A `<script>` call is blocking by default, stopping the parsing of the HTML until the script has downloaded, compiled and executed. This is why we traditionally put `<script>` calls at the end of the document.
##### Performance Tip #9: Avoid Writing Blocking Script Calls
By adding the `async` attribute to our `<script>` tag, we're telling the browser not to stop rendering the page but to download the script in the background. If the HTML is still being parsed by the time the script is downloaded, the parsing is paused while the script is executed, and then resumed. This is significantly better than blocking the rendering as soon as `<script>` is encountered.
There is also a `defer` attribute, which is subtly different. `<script defer>` tells the browser to continue parsing and rendering the page while the script loads in the background; even if the script finishes downloading before the HTML is fully parsed, it is not executed until parsing is complete. This makes the script completely non-blocking. Read "[Efficiently load JavaScript with defer and async][67]" for more information.
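To summarise the three behaviours side by side (the file names are placeholders):

```
<!-- Blocking: parsing stops until the script has downloaded and executed -->
<script src="/js/app.js"></script>

<!-- async: downloads in parallel, executes as soon as it arrives (pausing parsing) -->
<script async src="/js/analytics.js"></script>

<!-- defer: downloads in parallel, executes only after parsing has finished -->
<script defer src="/js/enhancements.js"></script>
```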
Anyway, enough Google dissecting. It's time to try out another site. I've still got almost 45 MB of my budget left!
## Amazon
The Amazon homepage loaded with a total weight of about 6 MB. One of these was a 587 KB image that I couldn't even find on the page. This was a PNG, presumably to have crisp text, but on a photographic background — a classic combination that's terrible for performance.
[![image of spanners with overlaid text: Hands-on time. Discover our tool selection for your car][68]][69]This grainy image used over 1% of my budget. ([Large preview][69])
In fact, there were a few several-hundred-kilobyte images in my network tab that I couldn't actually see on the page. I suspect a misconfiguration somewhere on Amazon, but these invisible images combined chewed through at least 1 MB of my data.
How about the hero image? It's the main image on the page, and it's only 94 KB transferred — but it could be reduced in size by about 15% if it were cropped directly around the text and footballers. We could then apply in CSS the same background color as in the image. This has the additional advantage of being resizable down to smaller screens whilst retaining the legibility of the text.
I've said it once, and I'll say it again: **optimising and lazy-loading your images is the single biggest improvement you can make to the page weight of your site**.
> Optimizing images provided, by far, the most significant data reduction. You can make the case JavaScript is a bigger deal for overall performance, but not data reduction. Optimizing or removing images is the safest way of ensuring a much lighter experience and that's the primary optimization Data Saver relies on.
> — Tim Kadlec, [Making Sense of Chrome Lite Pages][70]
To be fair to Amazon, if I resize the browser to a mobile size and refresh the page, the site is optimized for mobile and the total page weight is only 2.1 MB.
But this brings me onto my next point…
##### Performance Tip #10: Don't Make Assumptions About Data Connections
It's difficult to detect whether someone on a desktop is on a broadband connection or is tethering through a data-limited dongle or mobile. Many people work on the train like that, or live in an area where broadband infrastructure is poor but mobile signal is strong. In Amazon's case, there is room to make some big data savings on the desktop site, and we shouldn't get complacent just because the screen size suggests I'm not on a mobile device.
Yes, we should expect a larger page load if our viewport is desktop sized, as the images will be larger and better optimized for the screen than a grainier mobile one. But the page shouldn't be orders of magnitude bigger.
Moreover, I was sending the `Save-Data` header with my request. This header [explicitly indicates a preference for reduced data usage][71], and I hope more websites start to take notice of it in the future.
The initial desktop load may have been 6 MB, but after sitting and watching it for a minute it had climbed to 8.6 MB as the lower-priority resources and event tracking kicked into action. This page weight includes almost 1.7 MB of minified JavaScript. I don't even want to begin to look at that.
##### Performance Tip #11: Use Web Workers For Your JavaScript
Which would be worse — 1.7 MB of JavaScript or 1.7 MB of images? The answer is JavaScript: the two assets are not equivalent when it comes to performance.
> A JPEG image needs to be decoded, rasterized, and painted on the screen. A JavaScript bundle needs to be downloaded and then parsed, compiled, executed — and there are a number of other steps that an engine needs to complete. Be aware that these costs are not quite equivalent.
> — Addy Osmani, The Cost of JavaScript in 2018
If you must ship this much JavaScript, try [putting it in a web worker][72]. This keeps the bulk of JavaScript off the main thread, which is now freed up for repainting the UI, helping your web page to stay responsive on low-powered devices.
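Here's a minimal sketch of the pattern; `crunchNumbers` and the file names are placeholders for whatever heavy work your bundle does:

```
// main.js: the UI thread only posts work and renders the result
var worker = new Worker('worker.js');
worker.postMessage({ items: [1, 2, 3] });
worker.onmessage = function (event) {
  console.log('Result from worker:', event.data);
};

// worker.js: the heavy computation happens off the main thread
onmessage = function (event) {
  var result = crunchNumbers(event.data.items); // placeholder for the expensive work
  postMessage(result);
};
```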
I'm now about 15.5 MB into my budget, and have spent $1.14 of my Zimbabwe data budget. I'd have had to work for half a day as a teacher to earn the money to get this far.
### Pinterest
I've heard good things about Pinterest's performance, so I decided to put it to the test.
[![A staggering 327 requests, making 6.1 MB of data.][73]][74]A staggering 327 requests, making 6.1 MB of data. ([Large preview][74])
Perhaps this isn't the fairest of tests; I was taken to the sign-in page, upon which an asynchronous process found I was logged into Facebook and logged me in automatically. The page loaded relatively quickly, but the requests crept up as more and more content was preloaded.
However, I saw that on subsequent page loads, the service worker surfaced much of the content — saving about half of the page weight:
[![8.2 / 15.6 MB resources, and 39 / 180 requests handled by the service worker cache.][75]][76]8.2 / 15.6 MB resources, and 39 / 180 requests handled by the service worker cache. ([Large preview][76])
The Pinterest site is a progressive web app; it installed a service worker to manually handle caching of CSS and JS. I could now turn off my WiFi and continue to use the site (albeit not very usefully):
<https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_2000/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/188acdb0-1ec0-4404-bcdb-b6cb653c5dcc/loading-spinner-and-message-saying-you-re-not-connected-to-the-internet.png>You cant do much when youre offline. ([Large preview][77])
##### Performance Tip #12: Use Service Workers To Provide Offline Support
Wouldn't it be great if I only had to load a website once over the network, and could then get all the information I need even when I'm offline?
A great example would be a website that shows the weather forecast for the week. I should only need to download that page once. If I turn off my mobile data and subsequently go back to the page at some point, it should be able to serve the last known content to me. If I connect to the internet again and load the page, I would get a more up-to-date forecast, but static assets such as CSS and images should still be served locally from the service worker.
This is possible by setting up a [service worker with a good caching strategy][78] so that cached pages can be re-accessed offline. The [lodash documentation website][79] is a nice example of a service worker in the wild:
[![Screenshot of devtools showing 'ServiceWorker' next to each request][80]][81]The Lodash docs work offline. ([Large preview][81])
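Under the hood, the cache-first pattern that powers this sort of offline support can be remarkably small. A minimal sketch (the cache name and file list are illustrative, not Lodash's actual ones):

```
// sw.js: cache static assets at install time, then serve them cache-first
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('static-v1').then(function (cache) {
      return cache.addAll(['/styles.css', '/app.js']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});
```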
Content that rarely updates and is likely to be used quite regularly is a perfect candidate for service worker treatment. Dynamic sites with ever-changing news feeds aren't quite so well suited for offline experiences, but can still benefit.
[![Screenshot of Chris Ashton profile on Pinterest][82]][83]The second Pinterest page load was 443 KB. ([Large preview][83])
Service workers can truly save the day when you're on a tight data budget. I'm not convinced the Pinterest experience was the most optimal in terms of data usage (subsequent pages were around the 0.5 MB mark even on pages with few images), but letting your JavaScript handle page requests for you and keeping the same navigational elements in place can be very performant. The BBC manages a [transfer size of just 3.1 KB][84] for return visits to articles that are renderable via the single page application.
So far, Pinterest alone has chewed through 14 MB, which means I've blown around 30 MB of my budget, or $2.20 (almost a day's wages) of my Zimbabwe budget.
I'd better be careful with my final 20 MB… but where's the fun in that?
### Gamespot
I picked this one because it felt noticeably sluggish on my mobile in the past and I wanted to dig into the reasons why. Sure enough, loading the homepage consumes 8.5 MB of data.
[![Screenshot of devtools alongside homepage][85]][86]The Gamespot homepage trickled up to 8.5 MB, and a whopping 347 requests. ([Large preview][86])
6.5 MB of this was down to an autoplaying video halfway down the page, which — to be fair — didn't appear to download until I began scrolling. Nevertheless…
<https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/000d9d1e-05ea-4de0-b2d2-f5228ad75653/the-video-is-clipped-off-screen.gif>The video is clipped off-screen. ([Large preview][87])
I could only see half the video in my viewport — the right-hand side was clipped. It was also 30 seconds long, and I would wager that most people won't sit and watch the whole thing. This single asset more than tripled the size of the page.
##### Performance Tip #13: Dont Preload Video
As a rule, unless your site's primary mode of communication is video, don't preload it.
If you're YouTube or Netflix, it's reasonable to assume that someone coming to your page will want the video to auto-load and auto-play. There is an expectation that the video will chew through some data, but that it's a fair exchange for the content. But if you're a site whose primary medium is text and image — and you just happen to offer additional video content — then don't preload the video.
Think of news articles with embedded videos. Many users only want to skim the article headline before moving on to their next thing. Others will read the article but ignore any embeds. And others will diligently click and watch each embedded video. We shouldn't hog the bandwidth of every user on the assumption that they're going to want to watch these videos.
To reiterate: [users don't like autoplaying video][88]. As developers we only do it because our managers tell us to, and they only tell us to do it because all the coolest apps are doing it, and the coolest apps are only doing it because video ads generate 20 to 50 times more revenue than traditional ads. Google Chrome has started [blocking autoplay videos for some sites][89], based on personal preferences, so even if you develop your site to autoplay video, there's no guarantee that's the experience your users are getting.
If we agree that it's a good idea to make video an opt-in experience (click to play), we can take it a step further and make it click to load too. That means mocking up a video placeholder image with a play button over it, and only downloading the video when you click the play button. People on fast connections should notice no difference in buffer speed, and people on slow connections will appreciate how fast the rest of your site loaded because it didn't have to preload a large video file.
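A rough sketch of the click-to-load pattern (the selectors and URLs are made up): a lightweight poster image stands in for the player, and the real `<video>` element is only created when the user asks for it:

```
<button class="video-facade">
  <img src="/video/poster.jpg" alt="Play the trailer">
</button>
<script>
  document.querySelector('.video-facade').addEventListener('click', function (event) {
    // The network request for the video only happens here, on demand
    var video = document.createElement('video');
    video.src = '/video/trailer.mp4';
    video.controls = true;
    video.autoplay = true;
    event.currentTarget.replaceWith(video);
  });
</script>
```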
Anyway, back to Gamespot, where I was indeed forced to preload a large video file I ended up not watching. I then clicked through to a [game review page][90] that weighed another 8.5 MB, this time with 5.4 MB of video, before I even started scrolling down the page.
What was really galling was when I looked at what the video actually was. It was an [advert for a Samsung TV][91]! This advert cost me $0.40 of my Zimbabwe wages. Not only was it pre-loaded, but it also didn't end up playing anywhere as far as I'm aware, so I never actually saw it.
[![Screenshot of the offending request][92]][93]This advert wasted 5.4 MB of my precious data. ([Large preview][93])
The real video — the gameplay footage (in other words, the content) — wasn't actually loaded until I clicked on it. And that ploughed through my remaining data in seconds.
That's it. That's my 50 MB gone. I'll need to work another 1.5 days as a Zimbabwean schoolteacher to repeat the experience.
##### Performance Tip #14: Optimize For First Page Load
What's striking is that I used 50 MB of data and, in most cases, visited only one or two pages on any given site. If you think about it, this is true of most user journeys today.
Think about the last time you Googled something. You no doubt clicked on the first search result. If you got your answer, you closed the tab, or else you hit the back button and moved on to the next search result.
With the exception of a few so-called destination sites such as Facebook or YouTube, where users habitually go as a starting point for other activities, the majority of user journeys are ephemeral. We stumble across random sites to get the answers to our questions, never to return to those sites again.
Web development practices are heavily skewed towards optimising for repeat visitors. "Cache these assets — they'll come in handy later". "Pre-load this onward journey, in case the user clicks to read more". "Subscribe to our mailing list".
Instead, I believe we should optimize heavily for one-off visitors. Call it a controversial opinion, but maybe caching isn't really all that important. How important can a cached resource that never gets surfaced again be? And perhaps users aren't actually going to subscribe to your mailing list after reading just the one article, so downloading the JavaScript and CSS for the mail subscription modal is both a waste of data and an [annoying user experience][94].
### The Decline Of Proxy Browsers
I had hoped to try out Opera Mini as part of this experiment. Opera Mini is a mobile web browser which proxies web pages through Opera's compression servers. It accounts for 1.42% of global traffic as of June 2019, according to caniuse.com.
Opera Mini claims to save up to 90% of data by doing some pretty intensive transcoding. HTML is parsed, images are compressed, styling is applied, and a certain amount of JavaScript is executed on Opera's servers. The server doesn't respond with HTML as you might expect — it actually transcodes the data into Opera Binary Markup Language (OBML), which is progressively loaded by Opera Mini on the device. It renders what is essentially an interactive snapshot of the web page — think of it as a PDF with hyperlinks inside it. Read Tiffany Brown's excellent article, "[Opera Mini and JavaScript][95]" for a technical deep-dive.
It would have been a perfect way to eke out my 50 MB budget as far as possible. Unfortunately, Opera Mini is no longer available on iOS in the UK; attempting to visit it in the [app store][96] throws an error.
It's still available "[in some markets][97]", but reading between the lines, Opera will be phasing out Opera Mini in favour of its new app — Opera Touch — which [doesn't have any data-saving functionality][98] apart from the ability to natively block ads.
Opera desktop used to have a Turbo mode, acting as a traditional proxy server (returning an HTML document instead of OBML), applying data-saving techniques but less intensively than Opera Mini. According to Opera, JavaScript continues to work and "you get all the videos, photos and text that you normally would, but you eat up less data and load pages faster". However, [Opera quietly removed Turbo mode in v60][99] earlier this year, and Opera Touch [doesn't have a Turbo mode][100] either. Turbo mode is currently only available on Opera for Android.
Android is where all the action is in terms of data-saving technology. Chrome offers a Lite mode on its mobile browser for Android, which is not available for iPhones or iPads because of “[platform constraints][101]“. Outside of mobile, Google used to provide a Data Saver extension for Chrome desktop, but [this was canned in April][102].
Lite mode for Chrome Android can be forcibly enabled, or automatically kicks in when the network's effective connection type is 2G or worse, or when Chrome estimates the page will take more than 5 seconds to reach first contentful paint. Under these conditions, [Chrome will request the lite version of the HTTPS URL as cached by Google's servers][103], and display this stripped-down version inside the user's browser, alongside a "Lite" marker in the address bar.
[![Screenshot showing button in toolbar denoting 'Lite' mode][104]][105]Lite mode on Chrome for Android. Image: Google. ([Large preview][105])
I'd love to try it out — apparently it [disables scripts][106], [replaces images with placeholders][107], [prevents loading of non-critical resources][108] and [shows offline copies of pages][109] if one is available on the device. This [saves up to 60% of data][110]. However, [it isn't available in private (Incognito) mode][101], which hints at some of the privacy concerns surrounding proxy browsers.
Lite mode shares the HTTPS URL with Google, therefore it makes sense that this mode isn't available in Incognito. However, other information such as cookies, login information, and personalised page content is not shared with Google — [according to ghacks.net][110] — and the feature "never breaks secure connections between Chrome and a website". One wonders why seemingly none of these data-saving services are allowed on iOS (and there is [no news as to whether Lite mode will ever become available on iOS][111]).
Data saver proxies require a great deal of trust; your browsing activity, cookies and other sensitive information are entrusted to some server, often in another country. Many proxies simply won't work anymore because a lot of sites have moved to HTTPS, meaning initiatives such as Turbo mode have become a largely "[useless feature][112]". HTTPS prevents this kind of man-in-the-middle behaviour, which is a good thing, although it has meant the demise of some of these proxy services and has made sites [less accessible to those on poor connections][113].
I was unable to find any OSX or iOS compatible data-saving tool except for [Bandwidth Hero][114] for Firefox (which requires setting up your own data compression service — far beyond the technical capabilities of most users!) and [skyZIP Proxy][115] (which, last updated in 2017 and riddled with typos, I just couldn't bring myself to trust).
### Conclusion
Reducing the data footprint of your website goes hand in hand with improving frontend performance. It is the single most reliable thing you can do to speed up your site.
In addition to the cost of data, there are lots of good reasons to focus on performance, as described in a [GOV.UK blog post on the subject][116]:
* [53% of users will abandon a mobile site][117] if it takes more than 3 seconds to load.
* [People have to concentrate 50% more][118] when trying to complete a simple task on a website using a slow connection.
* More performant web pages are better for the battery life of the user's device, and typically require less power on the server to deliver. A performant site is good for the environment.
We don't have the power to change the global cost of data inequality. But we do have the power to lessen its impact, improving the experience for everyone in the process.
--------------------------------------------------------------------------------
via: https://www.smashingmagazine.com/2019/07/web-on-50mb-budget/
Author: [Chris Ashton][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://www.smashingmagazine.com/author/chrisbashton
[b]: https://github.com/lujun9972
[1]: https://www.smashingmagazine.com/author/chrisbashton
[2]: https://d33wubrfki0l68.cloudfront.net/1dbc465f56f3a812f09666f522fa226efd947cfa/a4d9f/images/smashing-cat/newsletter-fish-cat.svg
[3]: https://www.smashingmagazine.com/the-smashing-newsletter/
[4]: https://www.smashingmagazine.com/the-smashing-newsletter/
[5]: https://d33wubrfki0l68.cloudfront.net/a2b586e0ae8a08879457882013f0015fa9c31f7c/9e355/images/drop-caps/t.svg
[6]: https://d33wubrfki0l68.cloudfront.net/b5449482a65c611116580c9dfbf75c686b132629/e2b2f/images/drop-caps/character-7.svg
[7]: https://www.smashingmagazine.com/2019/03/web-on-internet-explorer-ie8/
[8]: https://www.cable.co.uk/mobiles/worldwide-data-pricing/
[9]: https://s3-eu-west-1.amazonaws.com/assets.cable.co.uk/mobile-data-cost/cost-of-a-gigabyte-research-method.pdf
[10]: https://www.opensignal.com/sites/opensignal-com/files/data/reports/global/data-2018-11/state_of_wifi_vs_mobile_opensignal_201811.pdf
[11]: https://www.opensignal.com/sites/opensignal-com/files/data/reports/global/data-2019-05/the_state_of_mobile_experience_may_2019_0.pdf
[12]: https://www.vox.com/recode/2019/7/12/20681214/mobile-speeds-slow-ookla-5g
[13]: https://www.cable.co.uk/broadband/pricing/worldwide-comparison/
[14]: https://www.cable.co.uk/broadband/speed/worldwide-speed-league/
[15]: https://www.bbc.co.uk/news/uk-wales-48982460
[16]: https://twitter.com/ChrisBAshton/status/1138726856872607744
[17]: https://www.timeslive.co.za/news/africa/2019-02-06-striking-zimbabwean-teachers-earn-equivalent-of-just-r700-a-month/
[18]: https://www.dol.gov/general/topic/wages/minimumwage
[19]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/491b7575-818f-4517-acf1-fd209a44aa74/01-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[20]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/491b7575-818f-4517-acf1-fd209a44aa74/01-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[21]: https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj
[22]: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/save-data/
[23]: https://www.tripmode.ch/
[24]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/58e4e027-f544-4068-9c0f-47a717885bc9/screenshot-of-tripmode-settings-chrome-is-enabled-mail-is-disabled.png
[25]: https://httparchive.org/reports/page-weight
[26]: https://web.dev/first-contentful-paint
[27]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/71c8184e-85af-41d6-9253-1a2d74cdb5ec/02-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[28]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/71c8184e-85af-41d6-9253-1a2d74cdb5ec/02-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[29]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/69e02a99-4481-46bd-b2a9-e5411991a865/03-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[30]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/69e02a99-4481-46bd-b2a9-e5411991a865/03-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[31]: https://github.com/ChrisBAshton/inline-cacher
[32]: https://web.dev/extract-critical-css
[33]: https://www.abtasty.com/blog/above-the-fold/
[34]: https://www.filamentgroup.com/lab/load-css-simpler/
[35]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8cf45731-08f8-4e31-b0c7-5dec6b235e41/a-wall-of-minified-code.png
[36]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8cf45731-08f8-4e31-b0c7-5dec6b235e41/a-wall-of-minified-code.png
[37]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/dbc81fd5-f81e-4f44-8f2b-c1065ad26ed3/five-external-javascript-files.png
[38]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/dbc81fd5-f81e-4f44-8f2b-c1065ad26ed3/five-external-javascript-files.png
[39]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8fdcaa1a-0691-4ba2-a9e3-b2b236af5d88/disabled-js-version-of-google-search.png
[40]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8fdcaa1a-0691-4ba2-a9e3-b2b236af5d88/disabled-js-version-of-google-search.png
[41]: https://www.smashingmagazine.com/2018/05/using-the-web-with-javascript-turned-off/
[42]: https://web.dev/performance-budgets-101
[43]: https://developer.mozilla.org/en-US/docs/Learn/HTML/Forms/Form_validation#Using_built-in_form_validation
[44]: https://bundlephobia.com/
[45]: https://medium.com/oyotech/how-brotli-compression-gave-us-37-latency-improvement-14d41e50fee4
[46]: https://stackoverflow.com/questions/16803876/browser-gzip-decompression-overhead-speed/16816099
[47]: https://www.itworld.com/article/2693941/why-it-doesn-t-make-sense-to-gzip-all-content-from-your-web-server.html
[48]: https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content
[49]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8831ed71-56a8-4431-a39f-298dd4bc072c/screenshot-showing-how-to-display-cache-control-information.png
[50]: https://traffic-control-cdn.readthedocs.io/en/latest/basics/cache_revalidation.html
[51]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/06e8d48f-193e-4370-b41e-7d68083bd0fe/screenshot-of-google-network-tab-with-cache-control-column-visible.png
[52]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/06e8d48f-193e-4370-b41e-7d68083bd0fe/screenshot-of-google-network-tab-with-cache-control-column-visible.png
[53]: https://ashton.codes/set-cache-control-max-age-1-year/
[54]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1010e6d4-6f0c-4d95-a088-67bf4e4b1b2c/screenshot-of-google-logo-cache-control-setting.png
[55]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1010e6d4-6f0c-4d95-a088-67bf4e4b1b2c/screenshot-of-google-logo-cache-control-setting.png
[56]: https://developers.google.com/web/tools/chrome-devtools/network/reference
[57]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/39870718-c891-4d34-bd1b-74a9d28986a0/gif-scrolling-down-the-very-long-dev-docs-page.gif
[58]: https://blog.uploadcare.com/image-optimization-and-performance-score-23516ebdd31d
[59]: https://css-tricks.com/tips-for-rolling-your-own-lazy-loading/
[60]: https://developers.google.com/speed/webp/
[61]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/64eeb0ac-96ac-44ca-8164-59749f3b850f/the-svg-left-is-noticeably-crisper-than-the-webp.png
[62]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/64eeb0ac-96ac-44ca-8164-59749f3b850f/the-svg-left-is-noticeably-crisper-than-the-webp.png
[63]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/09abae72-2997-4230-a99e-52eb120006c5/the-png-left-is-a-similar-quality-to-the-webp.png
[64]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/09abae72-2997-4230-a99e-52eb120006c5/the-png-left-is-a-similar-quality-to-the-webp.png
[65]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/0b2a8a71-6573-4a7b-8008-d73a1c54f318/screenshot-of-performance-analysis-output-of-devtools.png
[66]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/0b2a8a71-6573-4a7b-8008-d73a1c54f318/screenshot-of-performance-analysis-output-of-devtools.png
[67]: https://flaviocopes.com/javascript-async-defer/
[68]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/fe47c512-b890-4e8a-bebd-0e0529cc565b/image-of-spanners-with-overlaid-text.png
[69]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/fe47c512-b890-4e8a-bebd-0e0529cc565b/image-of-spanners-with-overlaid-text.png
[70]: https://timkadlec.com/remembers/2019-03-14-making-sense-of-chrome-lite-pages/
[71]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Save-Data
[72]: https://dassur.ma/things/when-workers/
[73]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/e32acbef-01c4-4cbe-b5c1-6e840519f1c9/06-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[74]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/e32acbef-01c4-4cbe-b5c1-6e840519f1c9/06-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[75]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/88853603-7b89-4ae1-8969-db19fac4a95d/07-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[76]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/88853603-7b89-4ae1-8969-db19fac4a95d/07-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[77]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/188acdb0-1ec0-4404-bcdb-b6cb653c5dcc/loading-spinner-and-message-saying-you-re-not-connected-to-the-internet.png
[78]: https://serviceworke.rs/caching-strategies.html
[79]: https://lodash.com/docs/
[80]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/3ae779a9-740f-4715-9e48-fd67a9a3a8ea/screenshot-of-devtools-showing-serviceworker-next-to-each-request.png
[81]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/3ae779a9-740f-4715-9e48-fd67a9a3a8ea/screenshot-of-devtools-showing-serviceworker-next-to-each-request.png
[82]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/bef23c7c-9ad0-4ac9-b646-4352bf3b34d6/screenshot-of-chris-ashton-profile-on-pinterest.png
[83]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/bef23c7c-9ad0-4ac9-b646-4352bf3b34d6/screenshot-of-chris-ashton-profile-on-pinterest.png
[84]: https://www.bbc.co.uk/news/articles/c5ll353v7y9o
[85]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8df6aa4b-4199-4fd0-b0fe-15f7c993796a/screenshot-of-devtools-alongside-homepage.png
[86]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8df6aa4b-4199-4fd0-b0fe-15f7c993796a/screenshot-of-devtools-alongside-homepage.png
[87]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/000d9d1e-05ea-4de0-b2d2-f5228ad75653/the-video-is-clipped-off-screen.gif
[88]: https://www.nytimes.com/2018/08/01/technology/personaltech/autoplay-video-fight-them.html
[89]: https://www.theverge.com/2018/5/3/17251104/google-chrome-66-autoplay-sound-videos-mute
[90]: https://www.gamespot.com/reviews/final-fantasy-xiv-shadowbringers-review-dancer-in-/1900-6417212/
[91]: https://static.sharethrough.com/sfp/hosted_video/DS843eHqTGfEfxMBvLMh1n6uyq/video.mp4
[92]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1d1b0ac0-1909-4542-b5e4-2096419ba635/screenshot-of-the-offending-request.png
[93]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1d1b0ac0-1909-4542-b5e4-2096419ba635/screenshot-of-the-offending-request.png
[94]: https://www.smashingmagazine.com/2019/06/web-designers-speed-mobile-websites/#2-stop-using-cumbersome-design-elements
[95]: https://dev.opera.com/articles/opera-mini-and-javascript/
[96]: https://apps.apple.com/app/id363729560
[97]: https://twitter.com/opera/status/1084736938312110080
[98]: https://www.guidingtech.com/opera-mini-vs-opera-touch-comparison-differences/
[99]: https://techdows.com/2019/06/opera-quietly-removed-turbo-mode-from-their-browser.html
[100]: https://forums.opera.com/topic/26886/no-turbo-mode-for-opera-touch
[101]: https://support.google.com/chrome/answer/2392284
[102]: https://venturebeat.com/2019/04/23/google-kills-chrome-data-saver-extension/
[103]: https://www.zdnet.com/article/google-announces-chrome-lite-pages-a-way-to-speed-up-https-sites-on-slow-connections/
[104]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/69282232-f726-4760-9743-73373c5ee43f/chrome-lite-pages.png
[105]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/69282232-f726-4760-9743-73373c5ee43f/chrome-lite-pages.png
[106]: https://www.chromestatus.com/feature/4775088607985664
[107]: https://www.chromestatus.com/feature/6072546726248448
[108]: https://www.chromestatus.com/feature/4510564810227712
[109]: https://www.chromestatus.com/feature/5076871637106688
[110]: https://www.ghacks.net/2019/04/24/google-deprecates-chrome-data-saver-extension-for-the-desktop/
[111]: https://www.phonearena.com/news/Google-Chrome-update-Data-Saver-Lite-mode-Android_id115558
[112]: https://forums.opera.com/topic/32749/turbo-mode-disappear-in-opera-60-0/3
[113]: https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites-made-them-less-accessible/
[114]: https://addons.mozilla.org/en-US/firefox/addon/bandwidth-hero/
[115]: https://chrome.google.com/webstore/detail/skyzip-proxy/hbgknjagaclofapkgkeapamhmglnbphi
[116]: https://technology.blog.gov.uk/2019/04/18/why-we-focus-on-frontend-performance/
[117]: https://www.thinkwithgoogle.com/marketing-resources/data-measurement/mobile-page-speed-new-industry-benchmarks/
[118]: http://www.tecnostress.it/wp-content/uploads/2010/02/final_webstress_survey_report_229296.pdf
[119]: https://www.smashingmagazine.com/images/logo/logo--red.png

View File

@ -0,0 +1,136 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Things You Didn't Know About GNU Readline)
[#]: via: (https://twobithistory.org/2019/08/22/readline.html)
[#]: author: (Two-Bit History https://twobithistory.org)
Things You Didn't Know About GNU Readline
======
I sometimes think of my computer as a very large house. I visit this house every day and know most of the rooms on the ground floor, but there are bedrooms I've never been in, closets I haven't opened, nooks and crannies that I've never explored. I feel compelled to learn more about my computer the same way anyone would feel compelled to see a room they had never visited in their own home.
GNU Readline is an unassuming little software library that I relied on for years without realizing that it was there. Tens of thousands of people probably use it every day without thinking about it. If you use the Bash shell, every time you auto-complete a filename, or move the cursor around within a single line of input text, or search through the history of your previous commands, you are using GNU Readline. When you do those same things while using the command-line interface to Postgres (`psql`), say, or the Ruby REPL (`irb`), you are again using GNU Readline. Lots of software depends on the GNU Readline library to implement functionality that users expect, but the functionality is so auxiliary and unobtrusive that I imagine few people stop to wonder where it comes from.
GNU Readline was originally created in the 1980s by the Free Software Foundation. Today, it is an important if invisible part of everyone's computing infrastructure, maintained by a single volunteer.
### Feature Replete
The GNU Readline library exists primarily to augment any command-line interface with a common set of keystrokes that allow you to move around within and edit a single line of input. If you press `Ctrl-A` at a Bash prompt, for example, that will jump your cursor to the very beginning of the line, while pressing `Ctrl-E` will jump it to the end. Another useful command is `Ctrl-U`, which will delete everything in the line before the cursor.
For an embarrassingly long time, I moved around on the command line by repeatedly tapping arrow keys. For some reason, I never imagined that there was a faster way to do it. Of course, no programmer familiar with a text editor like Vim or Emacs would deign to punch arrow keys for long, so something like Readline was bound to be created. Using Readline, you can do much more than just jump around—you can edit your single line of text as if you were using a text editor. There are commands to delete words, transpose words, upcase words, copy and paste characters, etc. In fact, most of Readline's keystrokes/shortcuts are based on Emacs. Readline is essentially Emacs for a single line of text. You can even record and replay macros.
I have never used Emacs, so I find it hard to remember what all the different Readline commands are. But one thing about Readline that is really neat is that you can switch to using a Vim-based mode instead. To do this for Bash, you can use the `set` builtin. The following will tell Readline to use Vim-style commands for the current shell:
```
$ set -o vi
```
With this option enabled, you can delete words using `dw` and so on. The equivalent to `Ctrl-U` in the Emacs mode would be `d0`.
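If you decide you like it, you can make vi mode the default everywhere Readline is used, not just in the current Bash session, by adding this line to `~/.inputrc` (a file covered in more detail below):

```
set editing-mode vi
```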
I was excited to try this when I first learned about it, but I've found that it doesn't work so well for me. I'm happy that this concession to Vim users exists, and you might have more luck with it than me, particularly if you haven't already used Readline's default command keystrokes. My problem is that, by the time I heard about the Vim-based interface, I had already learned several Readline keystrokes. Even with the Vim option enabled, I keep using the default keystrokes by mistake. Also, without some sort of indicator, Vim's modal design is awkward here—it's very easy to forget which mode you're in. So I'm stuck at a local maximum, using Vim as my text editor but Emacs-style Readline commands. I suspect a lot of other people are in the same position.
If you feel, not unreasonably, that both Vim and Emacs keyboard command systems are bizarre and arcane, you can customize Readline's key bindings and make them whatever you like. This is not hard to do. Readline reads a `~/.inputrc` file on startup that can be used to configure various options and key bindings. One thing I've done is reconfigure `Ctrl-K`. Normally it deletes from the cursor to the end of the line, but I rarely do that. So I've instead bound it so that pressing `Ctrl-K` deletes the whole line, regardless of where the cursor is. I've done that by adding the following to `~/.inputrc`:
```
Control-k: kill-whole-line
```
Each Readline command (the documentation refers to them as _functions_) has a name that you can associate with a key sequence this way. If you edit `~/.inputrc` in Vim, it turns out that Vim knows the filetype and will help you by highlighting valid function names but not invalid ones!
Another thing you can do with `~/.inputrc` is create canned macros by mapping key sequences to input strings. [The Readline manual][1] gives one example that I think is especially useful. I often find myself wanting to save the output of a program to a file, which means that I often append something like `> output.txt` to Bash commands. To save some time, you could make this a Readline macro:
```
Control-o: "> output.txt"
```
Now, whenever you press `Ctrl-O`, you'll see that `> output.txt` gets added after your cursor on the command line. Neat!
But with macros you can do more than just create shortcuts for strings of text. The following entry in `~/.inputrc` means that, every time I press `Ctrl-J`, any text I already have on the line is surrounded by `$(` and `)`. The macro moves to the beginning of the line with `Ctrl-A`, adds `$(`, then moves to the end of the line with `Ctrl-E` and adds `)`:
```
Control-j: "\C-a$(\C-e)"
```
This might be useful if you often need the output of one command to use for another, such as in:
```
$ cd $(brew --prefix)
```
The `~/.inputrc` file also allows you to set different values for what the Readline manual calls _variables_. These enable or disable certain Readline behaviors. You can use these variables to change, for example, how Readline auto-completion works or how the Readline history search works. One variable I'd recommend turning on is the `revert-all-at-newline` variable, which by default is off. When the variable is off, if you pull a line from your command history using the reverse search feature, edit it, but then decide to search instead for another line, the edit you made is preserved in the history. I find this confusing because it leads to lines showing up in your Bash command history that you never actually ran. So add this to your `~/.inputrc`:
```
set revert-all-at-newline on
```
When you set options or key bindings using `~/.inputrc`, they apply wherever the Readline library is used. This includes Bash most obviously, but you'll also get the benefit of your changes in other programs like `irb` and `psql` too! A Readline macro that inserts `SELECT * FROM` could be useful if you often use command-line interfaces to relational databases.
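For instance, something like the following; note that this rebinds `Ctrl-T`, which normally transposes characters, so choose a key you can spare:

```
Control-t: "SELECT * FROM "
```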
### Chet Ramey
GNU Readline is today maintained by Chet Ramey, a Senior Technology Architect at Case Western Reserve University. Ramey also maintains the Bash shell. Both projects were first authored by a Free Software Foundation employee named Brian Fox beginning in 1988. But Ramey has been the sole maintainer since around 1994.
Ramey told me via email that Readline, far from being an original idea, was created to implement functionality prescribed by the POSIX specification, which in the late 1980s had just been created. Many earlier shells, including the Korn shell and at least one version of the Unix System V shell, included line editing functionality. The 1988 version of the Korn shell (`ksh88`) provided both Emacs-style and Vi/Vim-style editing modes. As far as I can tell from [the manual page][2], the Korn shell would decide which mode you wanted to use by looking at the `VISUAL` and `EDITOR` environment variables, which is pretty neat. The parts of POSIX that specified shell functionality were closely modeled on `ksh88`, so GNU Bash was going to have to implement a similarly flexible line-editing system to stay compliant. Hence Readline.
When Ramey first got involved in Bash development, Readline was a single source file in the Bash project directory. It was really just a part of Bash. Over time, the Readline file slowly moved toward becoming an independent project, though it was not until 1994 (with the 2.0 release of Readline) that Readline became a separate library entirely.
Readline is closely associated with Bash, and Ramey usually pairs Readline releases with Bash releases. But as I mentioned above, Readline is a library that can be used by any software implementing a command-line interface. And it's really easy to use. This is a simple example, but here's how you would use Readline in your own C program. The string argument to the `readline()` function is the prompt that you want Readline to display to the user:
```
#include <stdio.h>
#include <stdlib.h>
#include "readline/readline.h"

int main(int argc, char** argv)
{
    /* readline() shows the prompt, handles all the line editing, and
       returns the submitted line (or NULL on end-of-file, e.g. Ctrl-D). */
    char* line = readline("my-rl-example> ");
    if (line == NULL) {
        return 0;
    }
    printf("You entered: \"%s\"\n", line);
    free(line); /* the caller owns the buffer that readline() returns */
    return 0;
}
```
Your program hands off control to Readline, which is responsible for getting a line of input from the user (in such a way that allows the user to do all the fancy line-editing things). Once the user has actually submitted the line, Readline returns it to you. I was able to compile the above by linking against the Readline library, which I apparently have somewhere in my library search path, by invoking the following:
```
$ gcc main.c -lreadline
```
The Readline API is much more extensive than that single function of course, and anyone using it can tweak all sorts of things about the library's behavior. Library users can even add new functions that end users can configure via `~/.inputrc`, meaning that Readline is very easy to extend. But, as far as I can tell, even Bash ultimately calls the simple `readline()` function to get input just as in the example above, though there is a lot of configuration beforehand. (See [this line][3] in the source for GNU Bash, which seems to be where Bash hands off responsibility for getting input to Readline.)
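As a sketch of how little extra code richer behaviour takes, here's the earlier example grown into a tiny REPL with history support (the up arrow and `Ctrl-R` work immediately), using `add_history()` from the companion history library:

```
#include <stdio.h>
#include <stdlib.h>
#include "readline/readline.h"
#include "readline/history.h"

int main(void)
{
    char* line;
    /* readline() returns NULL on end-of-file (Ctrl-D) */
    while ((line = readline("my-rl-example> ")) != NULL) {
        if (*line) {
            add_history(line); /* make the line reachable via the up arrow and Ctrl-R */
        }
        printf("You entered: \"%s\"\n", line);
        free(line);
    }
    return 0;
}
```

On my system at least, the history functions live in the same libreadline, so the same `gcc main.c -lreadline` invocation compiles it.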
Ramey has now worked on Bash and Readline for well over two decades. He has never once been compensated for his work—he is and has always been a volunteer. Bash and Readline continue to be actively developed, though Ramey said that Readline changes much more slowly than Bash does. I asked Ramey what it was like being the sole maintainer of software that so many people use. He said that millions of people probably use Bash without realizing it (because every Apple device runs Bash), which makes him worry about how much disruption a breaking change might cause. But he's slowly gotten used to the idea of all those people out there. He said that he continues to work on Bash and Readline because at this point he is deeply invested and because he simply likes to make useful software available to the world.
_You can find more information about Chet Ramey at [his website][4]._
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][5] on Twitter or subscribe to the [RSS feed][6] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> Please enjoy my long overdue new post, in which I use the story of the BBC Micro and the Computer Literacy Project as a springboard to complain about Codecademy.<https://t.co/PiWlKljDjK>
>
> — TwoBitHistory (@TwoBitHistory) [March 31, 2019][7]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2019/08/22/readline.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://tiswww.case.edu/php/chet/readline/readline.html
[2]: https://web.archive.org/web/20151105130220/http://www2.research.att.com/sw/download/man/man1/ksh88.html
[3]: https://github.com/bminor/bash/blob/9f597fd10993313262cab400bf3c46ffb3f6fd1e/parse.y#L1487
[4]: https://tiswww.case.edu/php/chet/
[5]: https://twitter.com/TwoBitHistory
[6]: https://twobithistory.org/feed.xml
[7]: https://twitter.com/TwoBitHistory/status/1112492084383092738?ref_src=twsrc%5Etfw

View File

@ -1,146 +0,0 @@
Translating by robsean
4 Ways to Customize Xfce and Give it a Modern Look
======
**Brief: Xfce is a great lightweight desktop environment with one drawback. It looks sort of old. But you dont have to stick with the default looks. Lets see various ways you can customize Xfce to give it a modern and beautiful look.**
![Customize Xfce desktop environment][1]
To start with, Xfce is one of the most [popular desktop environments][2]. Being a lightweight DE, Xfce runs great even on systems with very low resources. This is one of the reasons why many [lightweight Linux distributions][3] use Xfce by default.
Some people prefer it even on a high-end device, citing its simplicity, ease of use and light resource consumption as the main reasons.
[Xfce][4] is in itself minimal and provides just what you need. The one thing that bothers is its look and feel, which feels old. However, you can easily customize Xfce to look modern and beautiful without reaching the point where a Unity/GNOME session eats up system resources.
### 4 ways to Customize Xfce desktop
Lets see some of the ways by which we can improve the look and feel of your Xfce desktop environment.
The default Xfce desktop environment looks something like this:
![Xfce default screen][5]
As you can see, the default Xfce desktop is kinda boring. We will use themes and icon packs, and change the default dock, to make it look fresh and more appealing.
#### 1. Change themes in Xfce
The first thing we will do is pick up a theme from [xfce-look.org][6]. My favorite Xfce theme is [XFCE-D-PRO][7].
You can download the theme from [here][8] and extract it somewhere.
You can copy the extracted folder to the **.themes** folder in your home directory. If the folder is not present by default, you can create it; the same goes for icons, which need a **.icons** folder in the home directory.
Open **Settings > Appearance > Style** to select the theme, then log out and log back in to see the change. The default Adwaita-dark is also a nice one.
![Appearance Xfce][9]
You can use any [good GTK theme][10] on Xfce.
#### 2. Change icons in Xfce
Xfce-look.org also provides icon themes, which you can download, extract, and put in your home directory under the **.icons** directory. Once you have added an icon theme to the .icons directory, go to **Settings > Appearance > Icons** to select it.
![Moka icon theme][11]
I have installed [Moka icon set][12] that looks awesome.
![Moka theme][13]
You can also refer to our list of [awesome icon themes][14].
##### **Optional: Installing themes through Synaptic**
If you want to avoid manually searching for and copying the files, install Synaptic Package Manager on your system. Find the names of good themes and icon sets on the web, then search for and install them with Synaptic.
```
sudo apt-get install synaptic
```
**Searching and installing theme/icons through Synaptic**
Open Synaptic and click on **Search**. Enter the name of your desired theme, and it will display a list of matching items. Mark all the additional required changes and click on **Apply**. This will download and then install the theme.
![Arc Theme][15]
Once done, you can open the **Appearance** option to select the desired theme.
In my opinion, this is not the best way to install themes in Xfce.
#### 3. Change wallpapers in Xfce
Again, the default Xfce wallpaper is not bad at all. But you can change it to something that matches your icons and themes.
To change the wallpaper in Xfce, right click on the desktop and click on **Desktop Settings**. Choose **Background** from the folder option, and pick one of the default backgrounds or a custom one from your own collection.
![Changing desktop wallpapers][16]
#### 4. Change the dock in Xfce
The default dock is nice and pretty much does what it is meant for. But again, it looks a bit boring.
![Docky][17]
However, if you want your dock to be better and with a little more customization options, you can install another dock.
Plank is one of the simplest and most lightweight docks available, and it is highly configurable.
To install Plank use the command below:
```
sudo apt-get install plank
```
If Plank is not available in the default repository, you can install it from this PPA.
```
sudo add-apt-repository ppa:ricotz/docky
sudo apt-get update
sudo apt-get install plank
```
Before you use Plank, you should remove the default dock: right click on it and, under Panel Settings, click on delete.
Once done, go to **Accessories > Plank** to launch the Plank dock.
![Plank][18]
Plank picks up its icons from the icon theme you are using. So if you change the icon theme, you'll see the change reflected in the dock as well.
### Wrapping Up
XFCE is lightweight, fast, and highly customizable. If you are limited on system resources, it serves you well, and you can easily customize it to look better. Here's how my screen looks after applying these steps.
![XFCE desktop][19]
This is just half an hour of effort. You can make it look much better with different theme/icon combinations. Feel free to share your customized XFCE desktop screen in the comments, along with the combination of themes and icons you are using.
--------------------------------------------------------------------------------
via: https://itsfoss.com/customize-xfce/
作者:[Ambarish Kumar][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ambarish/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/xfce-customization.jpeg
[2]:https://itsfoss.com/best-linux-desktop-environments/
[3]:https://itsfoss.com/lightweight-linux-beginners/
[4]:https://xfce.org/
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/1-1-800x410.jpg
[6]:http://xfce-look.org
[7]:https://www.xfce-look.org/p/1207818/XFCE-D-PRO
[8]:https://www.xfce-look.org/p/1207818/startdownload?file_id=1523730502&file_name=XFCE-D-PRO-1.6.tar.xz&file_type=application/x-xz&file_size=105328&url=https%3A%2F%2Fdl.opendesktop.org%2Fapi%2Ffiles%2Fdownloadfile%2Fid%2F1523730502%2Fs%2F6019b2b57a1452471eac6403ae1522da%2Ft%2F1529360682%2Fu%2F%2FXFCE-D-PRO-1.6.tar.xz
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/4.jpg
[10]:https://itsfoss.com/best-gtk-themes/
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/6.jpg
[12]:https://snwh.org/moka
[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/11-800x547.jpg
[14]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/5-800x531.jpg
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/7-800x546.jpg
[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/8.jpg
[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/9.jpg
[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/10-800x447.jpg

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dive into the life and legacy of Alan Turing: 5 books and more)
[#]: via: (https://opensource.com/article/19/8/who-was-alan-turing)
[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
Dive into the life and legacy of Alan Turing: 5 books and more
======
Turing's theories had a huge impact on the development of the field of
computer science.
![Fire fist breaking glass][1]
Recently, Bank of England Governor Mark Carney [announced][2] that Alan Turing would be the new face on the UK £50 note. The name _Alan Turing_ should be familiar to anyone in open source communities: His theories had a huge impact on the development of the field of computer science, and his code-breaking work at [Bletchley Park][3] during World War II was the focus of the 2014 film, [_The Imitation Game_][4], which starred Benedict Cumberbatch as Alan Turing.
Another well-known fact about Turing was his conviction for "gross indecency" because of his homosexuality, and the posthumous [apology][5] and [pardon][6] issued more than half a century after Turing's death.
But beyond all of this, who was Alan Turing?
Here are five books and archival material that delve deeply into the life and legacy of Alan Turing. Collectively, these resources cover his life, both professional and personal, and work others have done to build upon Turings ideas. Individually, or collectively, these works allow the reader to learn who Alan Turing was beyond just a few well-known, broad-stroke themes.
### Alan Turing: The Enigma
![Alan Turing: The Enigma][7]
One of the most expansive biographies of Alan Turing, [_Alan Turing: The Enigma_][8], by Andrew Hodges, states on its cover that it is the inspiration for the film _The Imitation Game_. Weighing in at over 750 pages, this is no quick read, but it covers much of Turing's life. The only drawback is that the first edition was published in 1983. Even the updated edition does not make use of information declassified in the past few years.
Despite that, if you only read one book from this list, _Alan Turing: The Enigma_ is still an excellent choice. Hodgess work is the gold standard when it comes to Alan Turing biographies.
### The Imitation Game: Alan Turing Decoded
![The Imitation Game: Alan Turing Decoded][9]
_[The Imitation Game: Alan Turing Decoded][10]_, by Jim Ottaviani and illustrated by Leland Purvis, presents the life of Alan Turing as a graphic novel. Well told and partnered with lovely artwork, this book covers all the major facets of Alan Turing's life but lacks the depth of a biography like Hodges's.
That is not to say that there is anything wrong or deficient with Ottavianis writing, just that the graphic novel form requires a more streamlined narrative. For anyone wanting a quick introduction to Turing, this graphic novel is the quickest way to read an overview of Turings life and works.
### Prof: Alan Turing Decoded
![Prof: Alan Turing Decoded][11]
Written by Alan Turing's nephew, Dermot Turing, _[Prof: Alan Turing Decoded][12]_ draws upon material from the family, plus declassified material that was not available when Hodges researched his book. This shorter biography provides a more personal look at Alan Turing's life while still being scholarly.
Dermot Turing does an excellent job of telling the story of Alan Turing the man, not the myth born from public perceptions based on various dramatic interpretations. _Prof: Alan Turing Decoded_ is an interesting biography owing to its use of letters from members of the Turing family, including Alan Turing himself.
### The Turing Digital Archive
Nothing beats archival materials for really learning about a subject. Biographers have done masterful jobs at turning primary sources about Alan Turing's life into compelling biographies, but reading Turing's own writings and exploring other material in [The Turing Digital Archive][13]—maintained by King's College, Cambridge—provides a more intimate look at Turing's life and works. This archive contains Turing's scholarly papers, personal correspondence, photographs, and more. The collection is well-organized and the site is easy to use, making it simple for anyone to conduct their own archival research about the life of Alan Turing.
### Turings Cathedral
![Turings Cathedral][14]
In [_Turings Cathedral_][15], George Dyson explores the efforts by John von Neumann and his collaborators to construct a computer based on Alan Turings theory of a Universal Machine. John von Neumann made many, many contributions to computer science, which are also covered in this book, but the transition of Alan Turings Universal Machine from theory to practice is the facet that concerns readers wishing to learn more about Alan Turings legacy.
_Turings Cathedral_ is the story of von Neumann constructing one of the earliest modern computers, but it is, like all modern computing, the story of Alan Turings influence on everything that developed from his theories.
### Turings Vision: The Birth of Computer Science
![Turings Vision: The Birth of Computer Science][16]
[_Turings Vision: The Birth of Computer Science_][17], like its title states, explores the birth of the field of computer science. Full of diagrams and complex examples, this book might not be for everyone, but it does a masterful job of explaining computer science concepts and Turings place in the birth of the discipline. Chris Bernhardt does an excellent job of weaving together the biographical aspects with the technical, but the technical material can be very, very technical. There are mathematical proofs and other things that make this book a poor choice for the non-technical reader, but an excellent choice for someone with a background in computer science.
For a very technical book, it is an enjoyable read. The biographical aspects are not as broad or as deep as pure biographies, but it is the synthesis of the biographical and the technical that make this book so interesting.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/who-was-alan-turing
作者:[Joshua Allen Holm][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fire_fist_break_glass_smash_fail.jpg?itok=S6hQNLtB (Fire fist breaking glass)
[2]: https://www.bankofengland.co.uk/news/2019/july/50-pound-banknote-character-announcement
[3]: https://www.bletchleypark.org.uk/
[4]: https://www.imdb.com/title/tt2084970/
[5]: https://www.telegraph.co.uk/news/politics/gordon-brown/6170112/Gordon-Brown-Im-proud-to-say-sorry-to-a-real-war-hero.html
[6]: https://www.bbc.com/news/technology-25495315
[7]: https://opensource.com/sites/default/files/uploads/alan_turing-_the_enigma_125.jpeg (Alan Turing: The Enigma)
[8]: https://press.princeton.edu/titles/10413.html
[9]: https://opensource.com/sites/default/files/uploads/the_imitation_game-_alan_turing_decoded_125.jpg (The Imitation Game: Alan Turing Decoded)
[10]: https://www.abramsbooks.com/product/imitation-game_9781419718939/
[11]: https://opensource.com/sites/default/files/uploads/prof-_alan_turing_decoded_125.jpg (Prof: Alan Turing Decoded)
[12]: https://dermotturing.com/my-recent-books/alan-turing/
[13]: http://www.turingarchive.org/
[14]: https://opensource.com/sites/default/files/uploads/turing_s_cathedral_125.jpg (Turings Cathedral)
[15]: https://www.penguinrandomhouse.com/books/44425/turings-cathedral-by-george-dyson/9781400075997/
[16]: https://opensource.com/sites/default/files/uploads/turing_s_vision-_the_birth_of_computer_science_125.jpg (Turings Vision: The Birth of Computer Science)
[17]: https://mitpress.mit.edu/books/turings-vision

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (hello-wn)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,122 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing credentials with KeePassXC)
[#]: via: (https://fedoramagazine.org/managing-credentials-with-keepassxc/)
[#]: author: (Marco Sarti https://fedoramagazine.org/author/msarti/)
Managing credentials with KeePassXC
======
![][1]
A [previous article][2] discussed password management tools that use server-side technology. These tools are very interesting and suitable for a cloud installation.
In this article we will talk about KeePassXC, a simple, multi-platform, open source application that uses a local file as its database.
The main advantage of this type of password management is simplicity. No server-side technology expertise is required, so it can be used by any type of user.
### Introducing KeePassXC
KeePassXC is an open source, cross-platform password manager: its development started as a fork of KeePassX, a good product whose development, however, was not very active. It saves secrets in a database encrypted with the AES algorithm using a 256-bit key, which makes it reasonably safe to store the database in cloud drive storage such as pCloud or Dropbox.
In addition to passwords, KeePassXC allows you to save various kinds of information and attachments in the encrypted wallet. It also has a solid password generator that helps the user manage their credentials correctly.
### Installation
The program is available both in the standard Fedora repository and in the Flathub repository. Unfortunately, browser integration does not work with the application running in a sandbox, so I suggest installing the program via dnf:
```
sudo dnf install keepassxc
```
### Creating your wallet
To create a new database there are two important steps:
* Choose the encryption settings: the default settings are reasonably safe; increasing the transform rounds also increases the decryption time.
* Choose the master key and additional protections: the master key must be easy to remember (if you lose it, your wallet is lost!) but strong enough; a passphrase of at least four random words can be a good choice. As additional protection you can choose a key file (remember: you must always have it available, otherwise you cannot open the wallet) and/or a YubiKey hardware key.
![][3]
![][4]
The database file will be saved to the file system. If you want to share it with other computers/devices, you can save it on a USB key or in cloud storage like pCloud or Dropbox. Of course, if you choose cloud storage, a particularly strong master password is recommended, better yet if accompanied by additional protection.
### Creating your first entry
Once the database has been created, you can start creating your first entry. For a web login, specify a username, password, and URL in the Entry tab. Optionally, you can specify an expiration date for the credentials based on your personal policy. A nice feature: by pressing the button on the right, the favicon of the site is downloaded and associated with the entry as its icon.
![][5]
![][6]
KeePassXC also offers a good password/passphrase generator; you can choose length and complexity and check the degree of resistance to a brute force attack:
![][7]
### Browser integration
KeePassXC has an extension available for all major browsers. The extension allows you to fill in the login information for all the entries whose URL is specified.
Browser integration must be enabled in KeePassXC (Tools menu -> Settings), specifying which browsers you intend to use:
![][8]
Once the extension is installed, it is necessary to create a connection with the database. To do this, press the extension button and then the Connect button: if the database is open and unlocked, the extension will create an association key and save it in the database. The key is unique to the browser, so I suggest naming it appropriately:
![][9]
When you reach the login page specified in the Url field and the database is unlocked, the extension will offer you all the credentials you have associated with that page:
![][10]
In this way, as long as KeePassXC is running while you browse, your internet credentials are available without your having to save them in the browser.
### SSH agent integration
Another interesting feature of KeePassXC is its integration with SSH. If you have ssh-agent running, KeePassXC can interact with it and add the SSH keys that you have uploaded as attachments to your entries.
First of all, in the general settings (Tools menu -> Settings) you have to enable the SSH agent and restart the program:
![][11]
At this point you need to upload your SSH key pair as an attachment to your entry. Then, in the "SSH agent" tab, select the private key in the attachment drop-down list; the public key will be populated automatically. Don't forget to select the two checkboxes above so the key is added to the agent when the database is opened/unlocked and removed when the database is closed/locked:
![][12]
Now, with the database open and unlocked, you can log in over SSH using the keys saved in your wallet.
The only limitation is the maximum number of keys that can be added to the agent: by default, SSH servers do not accept more than five login attempts, and for security reasons it is not recommended to increase this value.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/managing-credentials-with-keepassxc/
作者:[Marco Sarti][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/msarti/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/keepassxc-816x345.png
[2]: https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-07-33-27.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-07-48-21.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-08-30-07.png
[6]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-08-43-11.png
[7]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-08-49-22.png
[8]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-48-09.png
[9]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-05-57.png
[10]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-13-29.png
[11]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-47-21.png
[12]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-46-35.png

View File

@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Linux kernel: Top 5 innovations)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/mhaydenhttps://opensource.com/users/mralexjuarez)
The Linux kernel: Top 5 innovations
======
Want to know what the actual (not buzzword) innovations are when it
comes to the Linux kernel? Read on.
![Penguin with green background][1]
The word _innovation_ gets bandied about in the tech industry almost as much as _revolution_, so it can be difficult to differentiate hyperbole from something thats actually exciting. The Linux kernel has been called innovative, but then again its also been called the biggest hack in modern computing, a monolith in a micro world.
Setting aside marketing and modeling, Linux is arguably the most popular kernel of the open source world, and its introduced some real game-changers over its nearly 30-year life span.
### Cgroups (2.6.24)
Back in 2007, Paul Menage and Rohit Seth got the esoteric [_control groups_ (cgroups)][2] feature added to the kernel (the current implementation of cgroups is a rewrite by Tejun Heo). This new technology was initially used as a way to ensure, essentially, quality of service for a specific set of tasks.
For example, you could create a control group definition (cgroup) for all tasks associated with your web server, another cgroup for routine backups, and yet another for general operating system requirements. You could then control a percentage of resources for each cgroup, such that your OS and web server gets the bulk of system resources while your backup processes have access to whatever is left.
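Here is a minimal sketch of what that carve-up looks like through the cgroup v2 filesystem interface (my own illustration, not from the article; it assumes a v2 hierarchy mounted at /sys/fs/cgroup, the cpu controller enabled for the parent group, root privileges, and the invented group name "webserver"):

```
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write a string to a cgroup control file. */
static int cg_write(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return -1;
    }
    fputs(value, f);
    return fclose(f); /* errors on control files often surface at close */
}

int main(void)
{
    char pid[32];

    /* A cgroup is just a directory in the cgroup filesystem. */
    if (mkdir("/sys/fs/cgroup/webserver", 0755) != 0 && errno != EEXIST) {
        perror("mkdir");
        return 1;
    }

    /* Cap the group at 75% of one CPU: 75ms of runtime per 100ms period. */
    cg_write("/sys/fs/cgroup/webserver/cpu.max", "75000 100000");

    /* Move the current process (and its future children) into the group. */
    snprintf(pid, sizeof(pid), "%d", (int)getpid());
    cg_write("/sys/fs/cgroup/webserver/cgroup.procs", pid);

    return 0;
}
```

This same directory-and-file pattern is what container runtimes drive under the hood.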
What cgroups has become most famous for, though, is its role as the technology driving the cloud today: containers. In fact, cgroups were originally named [process containers][3]. It was no great surprise when they were adopted by projects like [LXC][4], [CoreOS][5], and Docker.
The floodgates being opened, the term _containers_ justly became synonymous with Linux, and the concept of microservice-style cloud-based “apps” quickly became the norm. These days, its hard to get away from cgroups, theyre so prevalent. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a meaningful way, making your computing experience more manageable and more flexible than ever.
For example, you might already have installed [Flathub][6] or [Flatpak][7] on your computer, or maybe youve started using [Kubernetes][8] and/or [OpenShift][9] at work. Regardless, if the term “containers” is still hazy for you, you can gain a hands-on understanding of containers from [Behind the scenes with Linux containers][10].
### LKMM (4.17)
In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others, got merged into the mainline Linux kernel to provide formal memory models. The Linux Kernel Memory [Consistency] Model (LKMM) subsystem is a set of tools describing the Linux memory coherency model, as well as producing _litmus tests_ (**klitmus**, specifically) for testing.
As systems become more complex in physical design (more CPU cores added, caches and RAM grow, and so on), it becomes harder to know which address space is required by which CPU, and when. For example, if CPU0 needs to write data to a shared variable in memory, and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written in one order to memory, then there's an expectation that they are also read in that same order, regardless of which CPU or CPUs are doing the reading.
Even on a single CPU, memory management requires a specific task order. A simple action such as **x = y** requires a CPU to load the value of **y** from memory, and then store that value in **x**. Placing the value stored in **y** into the **x** variable cannot occur _before_ the CPU has read the value from memory. There are also address dependencies: **x[n] = 6** requires that **n** is loaded before the CPU can store the value of six.
LKMM helps identify and trace these memory patterns in code. It does this in part with a tool called **herd**, which defines the constraints imposed by a memory model (in the form of logical axioms), and then enumerates all possible outcomes consistent with these constraints.
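For a flavor of what herd consumes, here is a classic message-passing litmus test written in the C-like litmus format, modeled on the tests shipped under the kernel's tools/memory-model directory (the test name and comments are mine):

```
C MP+rel+acq

(*
 * Message passing: P0 stores a payload (x) and then releases a flag (y);
 * P1 acquires the flag and then reads the payload. The "exists" clause
 * asks herd whether P1 can ever observe the flag set while missing the
 * payload. With this release/acquire pairing, it never can.
 *)

{}

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_store_release(y, 1);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = smp_load_acquire(y);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)
```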
### Low-latency patch (2.6.38)
Long ago, in the days before 2011, if you wanted to do "serious" [multimedia work on Linux][11], you had to obtain a low-latency kernel. This mostly applied to [audio recording][12] while adding lots of real-time effects (such as singing into a microphone and adding reverb, and hearing your voice in your headset with no noticeable delay). There were distributions, such as [Ubuntu Studio][13], that reliably provided such a kernel, so in practice it wasn't much of a hurdle, just a significant caveat when choosing your distribution as an artist.
However, if you werent using Ubuntu Studio, or you had some need to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually.
And then, with the release of kernel version 2.6.38, this process was all over. The Linux kernel suddenly, as if by magic, had low-latency code (according to benchmarks, latency decreased by a factor of 10, at least) built-in by default. No more downloading patches, no more compiling. Everything just worked, and all because of a small 200-line patch implemented by Mike Galbraith.
For open source multimedia artists the world over, it was a game-changer. Things got so good from 2011 on that in 2016, I challenged myself to [build a Digital Audio Workstation (DAW) on a Raspberry Pi v1 (model B)][14] and found that it worked surprisingly well.
### RCU (2.5)
RCU, or Read-Copy-Update, is a synchronization mechanism that allows multiple processor threads to read from shared memory. It does this by deferring updates, but also marking them as updated, to ensure that the data's consumers read the latest version. Effectively, this means that reads happen concurrently with updates.
The typical RCU cycle is a little like this:
1. Remove pointers to data to prevent other readers from referencing it.
2. Wait for readers to complete their critical processes.
3. Reclaim the memory space.
Dividing the update stage into removal and reclamation phases means the updater performs the removal immediately while deferring reclamation until all active readers are complete (either by blocking them or by registering a callback to be invoked upon completion).
While the concept of read-copy-update was not invented for the Linux kernel, its implementation in Linux is a defining example of the technology.
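As a sketch of how that cycle looks through the kernel's actual RCU primitives (my own example, assuming kernel-module context; the `config` struct and function names are invented):

```
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct config {
	int threshold;
};

static struct config __rcu *cur_config;

/* Reader side: cheap, never blocks, and may run concurrently with an
 * update. It sees either the old version or the new one, never a mix. */
static int read_threshold(void)
{
	struct config *c;
	int val = 0;

	rcu_read_lock();
	c = rcu_dereference(cur_config);
	if (c)
		val = c->threshold;
	rcu_read_unlock();
	return val;
}

/* Update side: publish a new version, wait, then reclaim the old one. */
static void set_threshold(int threshold)
{
	struct config *newc, *oldc;

	newc = kmalloc(sizeof(*newc), GFP_KERNEL);
	if (!newc)
		return;
	newc->threshold = threshold;

	oldc = rcu_dereference_protected(cur_config, 1);
	rcu_assign_pointer(cur_config, newc);	/* 1. remove the old pointer */
	synchronize_rcu();			/* 2. wait for readers to finish */
	kfree(oldc);				/* 3. reclaim the memory */
}
```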
### Collaboration (0.01)
The final answer to the question of what the Linux kernel innovated will always be, above all else, collaboration. Call it good timing, call it technical superiority, call it hackability, or just call it open source, but the Linux kernel and the many projects that it enabled are a glowing example of collaboration and cooperation.
And it goes well beyond just the kernel. People from all walks of life have contributed to open source, arguably _because_ of the Linux kernel. The Linux kernel was, and remains to this day, a major force in [Free Software][15], inspiring users to bring their code, art, ideas, or just themselves, to a global, productive, and diverse community of humans.
### Whats your favorite innovation?
This list is biased toward my own interests: containers, non-uniform memory access (NUMA), and multimedia. Ive surely left your favorite kernel innovation off the list. Tell me about it in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/mhaydenhttps://opensource.com/users/mralexjuarez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://en.wikipedia.org/wiki/Cgroups
[3]: https://lkml.org/lkml/2006/10/20/251
[4]: https://linuxcontainers.org
[5]: https://coreos.com/
[6]: http://flathub.org
[7]: http://flatpak.org
[8]: http://kubernetes.io
[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[11]: http://slackermedia.info
[12]: https://opensource.com/article/17/6/qtractor-audio
[13]: http://ubuntustudio.org
[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
[15]: http://fsf.org

View File

@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The lifecycle of Linux kernel testing)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-testing)
[#]: author: (Major Hayden https://opensource.com/users/mhaydenhttps://opensource.com/users/mhaydenhttps://opensource.com/users/marcobravohttps://opensource.com/users/mhayden)
The lifecycle of Linux kernel testing
======
The Continuous Kernel Integration (CKI) project aims to prevent bugs
from entering the Linux kernel.
![arrows cycle symbol for failing faster][1]
In _[Continuous integration testing for the Linux kernel][2]_, I wrote about the [Continuous Kernel Integration][3] (CKI) project and its mission to change how kernel developers and maintainers work. This article is a deep dive into some of the more technical aspects of the project and how all the pieces fit together.
### It all starts with a change
Every exciting feature, improvement, and bug in the kernel starts with a change proposed by a developer. These changes appear on myriad mailing lists for different kernel repositories. Some repositories focus on certain subsystems in the kernel, such as storage or networking, while others focus on broad aspects of the kernel. The CKI project springs into action when developers propose a change, or patchset, to the kernel or when a maintainer makes changes in the repository itself.
The CKI project maintains triggers that monitor these patchsets and take action. Software projects such as [Patchwork][4] make this process much easier by collating multi-patch contributions into a single patch series. This series travels as a unit through the CKI system and allows for publishing a single report on the series.
Other triggers watch the repository for changes. This occurs when kernel maintainers merge patchsets, revert patches, or create new tags. Testing these critical changes ensures that developers always have a solid baseline to use as a foundation for writing new patches.
All of these changes make their way into a GitLab pipeline and pass through multiple stages and multiple systems.
### Prepare the build
Everything starts with getting the source ready for compile time. This requires cloning the repository, applying the patchset proposed by the developer, and generating a kernel config file. These config files have thousands of options that turn features on or off, and config files differ incredibly between different system architectures. For example, a fairly standard x86_64 system may have a ton of options available in its config file, but an s390x system (IBM zSeries mainframes) likely has far fewer options. Some options might make sense on that mainframe but have no purpose on a consumer laptop.
The kernel moves forward and transforms into a source artifact. The artifact contains the entire repository, with patches applied, and all kernel configuration files required for compiling. Upstream kernels move on as a tarball, while Red Hat kernels become a source RPM for the next step.
### Piles of compiles
Compiling the kernel turns the source code into something that a computer can boot up and use. The config file describes what to build, scripts in the kernel describe how to build it, and tools on the system (like GCC and glibc) do the building. This process takes a while to complete, but the CKI project needs it done quickly for four architectures: aarch64 (64-bit ARM), ppc64le (POWER), s390x (IBM zSeries), and x86_64. It's important that we compile kernels quickly so that we keep our backlog manageable and developers receive prompt feedback.
Adding more CPUs provides plenty of speed improvements, but every system has its limits. The CKI project compiles kernels within containers in an OpenShift deployment; although OpenShift allows for tons of scalability, the deployment still has a finite number of CPUs available. The CKI team allocates 20 virtual CPUs for compiling each kernel. With four architectures involved, this balloons to 80 CPUs!
Another speed increase comes from a tool called [ccache][5]. Kernel development moves quickly, but a large amount of the kernel remains unchanged even between multiple releases. The ccache tool caches built objects (small pieces of the overall kernel) on disk during the compile. When another kernel compile comes along later, ccache looks for unchanged pieces of the kernel that it saw before, pulls the cached objects from the disk, and reuses them. This allows for faster compiles and lower overall CPU usage. Kernels that took 20 minutes to compile now race to the finish line in just a few minutes.
### Testing time
The kernel moves on to its last step: testing on real hardware. Each kernel boots up on its native architecture using Beaker, and myriad tests begin poking it to find problems. Some tests look for simple problems, such as issues with containers or error messages on boot-up. Other tests dive deep into various kernel subsystems to find regressions in system calls, memory allocation, and threading.
Large testing frameworks, such as the [Linux Test Project][6] (LTP), contain tons of tests that look for troublesome regressions in the kernel. Some of these regressions could roll back critical security fixes, and there are tests to ensure those improvements remain in the kernel.
One critical step remains when tests finish: reporting. Kernel developers and maintainers need a concise report that tells them exactly what worked, what did not work, and how to get more information. Each CKI report contains details about the source code used, the compile parameters, and the testing output. That information helps developers know where to begin looking to fix an issue. Also, it helps maintainers know when a patchset needs to be held for another look before a bug makes its way into their kernel repository.
### Summary
The CKI project team strives to prevent bugs from entering the Linux kernel by providing timely, automated feedback to kernel developers and maintainers. This work makes their job easier by finding the low-hanging fruit that leads to kernel bugs, security issues, and performance problems.
* * *
_To learn more, you can attend the [CKI Hackfest][7] on September 12-13 following the [Linux Plumbers Conference][8] September 9-11 in Lisbon, Portugal._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-testing
作者:[Major Hayden][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mhaydenhttps://opensource.com/users/mhaydenhttps://opensource.com/users/marcobravohttps://opensource.com/users/mhayden
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: https://opensource.com/article/19/6/continuous-kernel-integration-linux
[3]: https://cki-project.org/
[4]: https://github.com/getpatchwork/patchwork
[5]: https://ccache.dev/
[6]: https://linux-test-project.github.io
[7]: https://cki-project.org/posts/hackfest-agenda/
[8]: https://www.linuxplumbersconf.org/

View File

@ -0,0 +1,225 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to compile a Linux kernel in the 21st century)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-21st-century)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/greg-p)
How to compile a Linux kernel in the 21st century
======
You don't have to compile the Linux kernel but you can with this quick
tutorial.
![and old computer and a new computer, representing migration to new software or hardware][1]
In computing, a kernel is the low-level software that handles communication with hardware and general system coordination. Aside from some initial firmware built into your computer's motherboard, when you start your computer, the kernel is what provides awareness that it has a hard drive and a screen and a keyboard and a network card. It's also the kernel's job to ensure equal time (more or less) is given to each component so that your graphics and audio and filesystem and network all run smoothly, even though they're running concurrently.
The quest for hardware support, however, is ongoing, because the more hardware that gets released, the more stuff a kernel must adopt into its code to make the hardware work as expected. It's difficult to get accurate numbers, but the Linux kernel is certainly among the top kernels for hardware compatibility. Linux operates innumerable computers and mobile phones, embedded system on a chip (SoC) boards for hobbyist and industrial uses, RAID cards, sewing machines, and much more.
Back in the 20th century (and even in the early years of the 21st), it was not unreasonable for a Linux user to expect that when they purchased a very new piece of hardware, they would need to download the very latest kernel source code, compile it, and install it so that they could get support for the device. Lately, though, you'd be hard-pressed to find a Linux user who compiles their own kernel except for fun or profit by way of highly specialized custom hardware. It generally isn't required these days to compile the Linux kernel yourself.
Here are the reasons why, plus a quick tutorial on how to compile a kernel when you need to.
### Update your existing kernel
Whether you've got a brand new laptop featuring a fancy new graphics card or WiFi chipset or you've just brought home a new printer, your operating system (called either GNU+Linux or just Linux, which is also the name of the kernel) needs a driver to open communication channels to that new component (graphics card, WiFi chip, printer, or whatever). It can be deceptive, sometimes, when you plug in a new device and your computer _appears_ to acknowledge it. But don't let that fool you. Sometimes that _is_ all you need, but other times your OS is just using generic protocols to probe a device that's attached.
For instance, your computer may be able to identify your new network printer, but sometimes that's only because the network card in the printer is programmed to identify itself to a network so it can gain a DHCP address. It doesn't necessarily mean that your computer knows what instructions to send to the printer to produce a page of printed text. In fact, you might argue that the computer doesn't even really "know" that the device is a printer; it may only display that there's a device on the network at a specific address and the device identifies itself with the series of characters _p-r-i-n-t-e-r_. The conventions of human language are meaningless to a computer; what it needs is a driver.
Kernel developers, hardware manufacturers, support technicians, and hobbyists all know that new hardware is constantly being released. Many of them contribute drivers, submitted straight to the kernel development team for inclusion in Linux. For example, Nvidia graphics card drivers are often written into the [Nouveau][2] kernel module and, because Nvidia cards are common, the code is usually included in any kernel distributed for general use (such as the kernel you get when you download [Fedora][3] or [Ubuntu][4]). Where Nvidia is less common, for instance in embedded systems, the Nouveau module is usually excluded. Similar modules exist for many other devices: printers benefit from [Foomatic][5] and [CUPS][6], wireless cards have [b43, ath9k, wl][7] modules, and so on.
Distributions tend to include as much as they reasonably can in their Linux kernel builds because they want you to be able to attach a device and start using it immediately, with no driver installation required. For the most part, that's what happens, especially now that many device vendors are now funding Linux driver development for the hardware they sell and submitting those drivers directly to the kernel team for general distribution.
Sometimes, however, you're running a kernel you installed six months ago with an exciting new device that just hit the stores a week ago. In that case, your kernel may not have a driver for that device. The good news is that very often, a driver for that device may exist in a very recent edition of the kernel, meaning that all you have to do is update what you're running.
Generally, this is done through a package manager. For instance, on RHEL, CentOS, and Fedora:
```
$ sudo dnf update kernel
```
On Debian and Ubuntu, first get your current kernel version:
```
$ uname -r
4.4.186
```
Search for newer versions:
```
$ sudo apt update
$ sudo apt search linux-image
```
Install the latest version you find. In this example, the latest available is 5.2.4:
```
$ sudo apt install linux-image-5.2.4
```
After a kernel upgrade, you must [reboot][8] (unless you're using kpatch or kgraft). Then, if the device driver you need is in the latest kernel, your hardware will work as expected.
### Install a kernel module
Sometimes a distribution doesn't expect that its users often use a device (or at least not enough that the device driver needs to be in the Linux kernel). Linux takes a modular approach to drivers, so distributions can ship separate driver packages that can be loaded by the kernel even though the driver isn't compiled into the kernel itself. This is useful, although it can get complicated when a driver isn't included in a kernel but is needed during boot, or when the kernel gets updated out from under the modular driver. The first problem is solved with an **initrd** (initial RAM disk) and is out of scope for this article, and the second is solved by a system called **kmod**.
The kmod system ensures that when a kernel is updated, all modular drivers installed alongside it are also updated. If you install a driver manually, you miss out on the automation that kmod provides, so you should opt for a kmod package whenever it is available. For instance, while Nvidia drivers are built into the kernel as the Nouveau driver, the official Nvidia drivers are distributed only by Nvidia. You can install Nvidia-branded drivers manually by going to the website, downloading the **.run** file, and running the shell script it provides, but you must repeat that same process after you install a new kernel, because nothing tells your package manager that you manually installed a kernel driver. Because Nvidia drives your graphics, updating the Nvidia driver manually usually means you have to perform the update from a terminal, because you have no graphics without a functional graphics driver.
![Nvidia configuration application][9]
However, if you install the Nvidia drivers as a kmod package, updating your kernel also updates your Nvidia driver. On Fedora and related:
```
$ sudo dnf install kmod-nvidia
```
On Debian and related:
```
$ sudo apt update
$ sudo apt install nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-xconfig nvidia-settings nvidia-vdpau-driver vdpau-va-driver
```
This is only an example, but if you're installing Nvidia drivers in real life, you must also blacklist the Nouveau driver. See your distribution's documentation for the best steps.
### Download and install a driver
Not everything is included in the kernel, and not everything _else_ is available as a kernel module. In some cases, you have to download a special driver written and bundled by the hardware vendor, and other times, you have the driver but not the frontend to configure driver options.
Two common examples are HP printers and [Wacom][10] illustration tablets. If you get an HP printer, you probably have generic drivers that can communicate with your printer. You might even be able to print. But the generic driver may not be able to provide specialized options specific to your model, such as double-sided printing, collation, paper tray choices, and so on. [HPLIP][11] (the HP Linux Imaging and Printing system) provides options to manage jobs, adjust printing options, select paper trays where applicable, and so on.
HPLIP is usually bundled in package managers; just search for "hplip."
![HPLIP in action][12]
Similarly, drivers for Wacom tablets, the leading illustration tablet for digital artists, are usually included in your kernel, but options to fine-tune settings, such as pressure sensitivity and button functionality, are only accessible through the graphical control panel included by default with GNOME but installable as the extra package **kde-config-tablet** on KDE.
There are likely some edge cases that don't have drivers in the kernel but offer kmod versions of driver modules as an RPM or DEB file that you can download and install through your package manager.
### Patching and compiling your own kernel
Even in the futuristic utopia that is the 21st century, there are vendors that don't understand open source enough to provide installable drivers. Sometimes, such companies provide source code for a driver but expect you to download the code, patch a kernel, compile, and install manually.
This kind of distribution model has the same disadvantages as installing packaged drivers outside of the kmod system: an update to your kernel breaks the driver because it must be re-integrated into your kernel manually each time the kernel is swapped out for a new one.
This has become rare, happily, because the Linux kernel team has done an excellent job of pleading loudly for companies to communicate with them, and because companies are finally accepting that open source isn't going away any time soon. But there are still novelty or hyper-specialized devices out there that provide only kernel patches.
Officially, there are distribution-specific preferences for how you should compile a kernel to keep your package manager involved in upgrading such a vital part of your system. There are too many package managers to cover each; as an example, here is what happens behind the scenes when you use tools like **rpmdev** on Fedora or **build-essential** and **devscripts** on Debian.
First, as usual, find out which kernel version you're running:
```
$ uname -r
```
In most cases, it's safe to upgrade your kernel if you haven't already. After all, it's possible that your problem will be solved in the latest release. If you tried that and it didn't work, then you should download the source code of the kernel you are running. Most distributions provide a special command for that, but to do it manually, you can find the source code on [kernel.org][13].
You also must download whatever patch you need for your kernel. Sometimes, these patches are specific to the kernel release, so choose carefully.
It's traditional, or at least it was back when people regularly compiled their own kernels, to place the source code and patches in **/usr/src/linux**.
Unarchive the kernel source and the patch files as needed:
```
$ cd /usr/src/linux
$ bzip2 --decompress linux-5.2.4.tar.bz2
$ tar -xf linux-5.2.4.tar
$ cd linux-5.2.4
$ bzip2 -d ../patch*bz2
```
The patch file may have instructions on how to do the patch, but often they're designed to be executed from the top level of your tree:
```
$ patch -p1 < patch*example.patch
```
Once the kernel code is patched, you can use your old configuration to prepare the patched kernel config:
```
$ make oldconfig
```
The **make oldconfig** command serves two purposes: it inherits your current kernel's configuration, and it allows you to configure new options introduced by the patch.
You may need to run the **make menuconfig** command, which launches an ncurses-based, menu-driven list of possible options for your new kernel. The menu can be overwhelming, but since it starts with your old config as a foundation, you can look through the menu and disable modules for hardware that you know you do not have and do not anticipate needing. Alternately, if you know that you have some piece of hardware and see it is not included in your current configuration, you may choose to build it, either as a module or directly into the kernel. In theory, this isn't necessary because presumably, your current kernel was treating you well but for the missing patch, and probably the patch you applied has activated all the necessary options required by whatever device prompted you to patch your kernel in the first place.
Next, compile the kernel and its modules:
```
$ make bzImage
$ make modules
```
This leaves you with a file named **vmlinuz**, which is a compressed version of your bootable kernel. Save your old version and place the new one in your **/boot** directory:
```
$ sudo mv /boot/vmlinuz /boot/vmlinuz.nopatch
$ sudo sh -c 'cat arch/x86_64/boot/bzImage > /boot/vmlinuz'
$ sudo mv /boot/System.map /boot/System.map.stock
$ sudo cp System.map /boot/System.map
```
So far, you've patched and built a kernel and its modules, you've installed the kernel, but you haven't installed any modules. That's the final build step:
```
$ sudo make modules_install
```
The new kernel is in place, and its modules are installed.
The final step is to update your bootloader so that the part of your computer that loads before the kernel knows where to find Linux. The GRUB bootloader makes this process relatively simple:
```
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```
### Real-world compiling
Of course, nobody runs those manual commands now. Instead, refer to your distribution for instructions on modifying a kernel using the developer toolset that your distribution's maintainers use. This toolset will probably create a new installable package with all the patches incorporated, alert the package manager of the upgrade, and update your bootloader for you.
### Kernels
Operating systems and kernels are mysterious things, but it doesn't take much to understand what components they're built upon. The next time you get a piece of tech that appears to not work on Linux, take a deep breath, investigate driver availability, and go with the path of least resistance. Linux is easier than ever—and that includes the kernel.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-21st-century
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
[2]: https://nouveau.freedesktop.org/wiki/
[3]: http://fedoraproject.org
[4]: http://ubuntu.com
[5]: https://wiki.linuxfoundation.org/openprinting/database/foomatic
[6]: https://www.cups.org/
[7]: https://wireless.wiki.kernel.org/en/users/drivers
[8]: https://opensource.com/article/19/7/reboot-linux
[9]: https://opensource.com/sites/default/files/uploads/nvidia.jpg (Nvidia configuration application)
[10]: https://linuxwacom.github.io
[11]: https://developers.hp.com/hp-linux-imaging-and-printing
[12]: https://opensource.com/sites/default/files/uploads/hplip.jpg (HPLIP in action)
[13]: https://www.kernel.org/

View File

@ -0,0 +1,60 @@
How showing vulnerability can strengthen your leadership
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm)
The traditional image of a leader is someone strong, bold, and decisive. I've certainly met leaders who fit that description. But more often than not, leaders may look more vulnerable than the stereotype suggests, quietly carrying questions like: Am I making the right decisions? Am I really the right person for this job? Am I doing the most important things I could be doing?
The way to deal with these questions is to say them out loud. Keeping them bottled up only feeds them. An open leader is more inclined to expose their own vulnerability, so that comfort can come from people who have been through the same thing.
To illustrate the point, let me tell a story.
### A troubling thought
If you work in education, you'll notice that people lean toward creating [an inclusive environment][1], one that encourages diversity to flourish. To make a long story short, I had long believed that I was a "diversity hire" made in the name of inclusivity, that is, that I was hired for my gender rather than my ability, and the thought kept nagging at me. Self-doubt followed: Am I really the best person for this role, or is it just because I'm a woman? For years I had believed that companies hired me because my ability was the best. Now it seemed that, to those employers, my gender mattered more than my competence.
I consoled myself: it doesn't really matter why I was hired. I know I'm the best person for this job, and I'll prove it through my actions. I worked hard, met expectations, made mistakes, and learned a great deal; I did everything a boss could want from an employee.
But the shadow of the "diversity hire" question never lifted. I couldn't shake it, and I avoided anything related to it like the plague, until I finally realized that refusing to talk about it meant the only thing left to do was face it. If I kept dodging the question, sooner or later it would affect my work, which was the last thing I wanted.
### Giving voice to what troubles you
Talking about diversity and inclusion head-on is a little awkward, and there were a few questions to weigh before opening up:
  * Can we trust our colleagues enough to show vulnerability in front of them?
  * Is it appropriate for a team's leader to show vulnerability in front of colleagues?
  * What if it goes wrong? Will it hurt my work?
So I attended a small lunchtime Q&A session with an executive who was responsible for many areas of the organization and was known for being straightforward and honest. A female colleague asked him, "Was I hired because of diversity?" He stopped what he was doing and spent a long time talking it over with a room full of women. I won't retell everything he said; these are the words that struck me most: if you know you can do the job and the interview went well, don't question the outcome of the hiring process. Everyone who privately suspects they were a diversity hire is carrying their own doubts; you don't have to repeat their mistake.
That was it.
I wish I could honestly say I let the question go after that, but I didn't. It kept coming back: what if I really was the one let in through a lowered bar? What if I really was the diversity hire? I came to accept that I couldn't help turning these questions over and over.
A few weeks later I had a one-on-one with that executive. Toward the end of the conversation, I mentioned that, as a woman, I appreciated how candidly he had spoken about diversity and inclusion. Knowing that a leader is willing to have the conversation makes the topic much easier to raise. I also asked him my original question: "Was I hired because of diversity?" His answer was brisk: "We talked about this." After the conversation, I realized that my urgent need to find someone to discuss these courage-demanding questions with was really just a need for someone to care, to listen, and to offer a kind word.
But precisely because I had the courage to show vulnerability, to bring my problem to that executive, my capacity to carry my secret burden grew. I felt light as a feather, and I started organizing conversations of my own, mostly about implicit bias and the problems it creates, ways to become more inclusive, and what diversity looks like. Through these experiences I found that everyone understands diversity differently; had I stayed walled in with my secret, I would never have had the chance to host or take part in those wonderful conversations.
I had the courage to talk about what made me feel vulnerable. I hope you do, too.
When we can talk about the secrets that weigh on our leadership, we come, in every sense, a little closer to being open leaders. So: has allowing yourself to be vulnerable helped make you a better leader?
### About the author
Angela Robertson is an executive at Microsoft. She and her team are passionate about community support and are involved in open source work. Before joining Microsoft, Angela worked at Red Hat.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/how-allowing-myself-be-vulnerable-made-me-better-leader
author: [Angela Robertson][a]
translator: [Valoniakim](https://github.com/Valoniakim)
proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/arobertson98
[1]:https://opensource.com/open-organization/17/9/building-for-inclusivity

View File

@@ -0,0 +1,162 @@
[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux)
[#]: author: (redhat https://www.redhat.com)
Command Line Heroes: Season 1: OS Wars (Part 2: Rise of Linux)
======
Saron Yitbarek: Is this thing on? Cue the epic Star Wars-style opening crawl. Here we go.
Voice-over: [00:00:30] Episode Two: Rise of Linux®. The Microsoft empire controls 90% of desktop users. Total standardization of operating systems seems a done deal. But the arrival of the internet shifts the focus of the war from the desktop to the enterprise, where every business scrambles to build its own servers. Meanwhile, an unlikely hero emerges from the ranks of the open source rebels. The stubborn, bespectacled Linus Torvalds releases his Linux system for free. Microsoft stumbles, and regroups.
Saron Yitbarek: [00:01:00] Oh, we nerds love that stuff. Where were we last time? Apple and Microsoft were at each other's throats in a war for dominance of the desktop. At the end of Episode One, we saw Microsoft walk away with most of the market share. Soon enough, the rise of the internet, and the army of developers that came with it, shook the whole market like an earthquake. The internet moved the battlefield from individual PC users at home and in the office to large business customers running hundreds of servers.
[00:01:30] That meant a massive migration of resources. Suddenly, every business involved was forced to pay not only for server space and website construction, but also for software integration to handle things like resource tracking and database monitoring. You needed a lot of developers to help you. At least, that's how everyone did it back then.
In part two of the OS wars, we'll see that huge shift in priorities, and how open source rebels like Linus Torvalds and Richard Stallman managed to put fear in the heart of Microsoft, and of the entire software industry.
[00:02:00] I'm Saron Yitbarek, and you're listening to Command Line Heroes, an original podcast from Red Hat. In every episode, we bring you stories of the people who transform technology from the command line up.
[00:02:30] Okay. Imagine you're Microsoft in 1991. You're feeling pretty good, right? Confident. Assured of global dominance, and it feels nice. You've mastered the art of partnering with other businesses, but you've kept most of the developers, programmers, and sysadmins, the real foot soldiers, out of the alliance. Then along comes a Finnish geek named Linus Torvalds. He and his team of open source programmers are starting to release Linux, an operating system kernel they wrote together.
[00:03:00] Frankly, if you're Microsoft, you don't pay much attention to Linux, or even to the open source movement in general. But eventually, Linux grows so big that Microsoft can't help noticing. The first version of Linux appeared in 1991, with about 10,000 lines of code. A decade later, it was 3 million lines. In case you're wondering, today it's 20 million.
[00:03:30] Let's stay in the early '90s for a moment. Linux wasn't yet the behemoth we know now. It was just this strangely viral operating system spreading across the planet, and geeks and hackers everywhere were falling in love with it. I was too young back then, but I still wish I'd been part of it. In those days, discovering Linux was like being inducted into a secret society. Programmers shared Linux CD sets with friends the way other people shared mixtapes of underground music.
Developer Tristram Oaten [00:03:40] tells the story of his first encounter with Linux, at age 16.
Tristram Oaten: [00:04:00] My family went on a scuba diving holiday to Hurghada, on the Red Sea. Beautiful place, highly recommended. On the first day, I drank the tap water. Perhaps my mother told me not to. I was violently ill all week and never left the hotel room. All I had with me was a laptop with a fresh install of Slackware Linux, something I'd heard about and was trying out. There were no extra applications on that laptop, just the eight CDs it came on. Out of necessity, all I did that week was get to know this alien system. I read the manuals, played around in the terminal. I remember not even knowing the difference between a single dot (the current directory) and a double dot (the previous directory).
[00:04:30] I had no clue. I made plenty of mistakes, but slowly, in that forced isolation, I broke through and began to understand what the command line was all about. By the end of the holiday I hadn't seen the pyramids, the Nile, or any of the Egyptian sights, but I had unlocked one of the wonders of the modern world. I'd unlocked Linux, and the rest, as they say, is history.
Saron Yitbarek: You can hear different versions of this story from a lot of people. Getting access to the Linux command line was a revolutionary experience.
David Cantrell: It gave me the source code. My feeling at the time was, "That's amazing."
Saron Yitbarek: We're at Flock to Fedora, a 2017 Linux developer conference.
David Cantrell: ...very attractive. I felt I had more control over the system, and it drew me in more and more. I think I've been hooked since 1995, the first time I compiled a Linux kernel.
Saron Yitbarek: Developers David Cantrell and Joe Brockmeier.
Joe Brockmeier: I was rummaging through the bargain software bins and finally found a four-CD set of Slackware Linux. It looked incredibly exciting and fun, so I took it home, installed it on a second computer, and started playing with it. Two things got me excited: one, I wasn't running Windows, and two, Linux was open source.
Saron Yitbarek: [00:06:00] In a sense, access to the command line had always been there. Decades before open source really took off, there was always a desire, at least among developers, for complete control. Let's go back to the era before the OS wars, before Apple and Microsoft fought over their GUIs. There were command line heroes then, too. Professor Paul Jones, director of the online library ibiblio.org, was a developer in those ancient days.
Paul Jones: [00:07:00] The internet, by its nature, was less client-server back then and more peer-to-peer. Really, we're talking about some VAX to some VAX, some scientific workstation to some scientific workstation. That's not to say there were no client-and-server relationships or applications, but it does mean the original design was about thinking peer-to-peer, the opposite of everything IBM had been doing. All IBM would give you was dumb terminals, terminals that only let you manage a user interface but never let you do whatever you wanted the way a real terminal would.
Saron Yitbarek: While the graphical user interface was spreading among ordinary users, there was always an opposing pull among engineers and developers. Long before Linux, in the 1970s and '80s, that pull lived in Emacs and GNU. After Stallman's Free Software Foundation came along, there were always some people who wanted the command line. But the way Linux was delivered in the 1990s was one of a kind.
[00:07:30] The early enthusiasts of Linux and other open source software were pioneers. I'm standing on their shoulders. We all are.
You're listening to Command Line Heroes, an original podcast from Red Hat. This is part two of the OS wars: Rise of Linux.
Steven Vaughan-Nichols: By 1998, things had changed.
Saron Yitbarek: Steven Vaughan-Nichols is a contributing editor at zdnet.com, and he's been writing about the business side of technology for decades. He'll tell us how Linux slowly grew more and more popular, until the number of volunteer contributors far outstripped the number of Microsoft developers working on Windows. Linux never really went after Microsoft's desktop customers, though, which may be why Microsoft ignored Linux and its developers at first. Where Linux really shone was the server room. When businesses went online, each one needed a unique programming solution tailored to its needs.
[00:08:30] Windows NT arrived in 1993, already competing with other server operating systems, but plenty of developers were thinking, "Why would I buy an AIX box or a big Windows box when I can build a cheap Linux-based system with Apache?" The point is, Linux code had begun seeping into almost everything online.
Steven Vaughan-Nichols: [00:09:00] To Microsoft's surprise, it started to realize that Linux was actually getting some commercial adoption, not on the desktop, but on business servers. So they launched a campaign we've come to call FUD: fear, uncertainty, and doubt. They said, "Oh, this Linux thing, it's really not that good. It's not that reliable. You can't trust it at all."
Saron Yitbarek: [00:09:30] That soft-propaganda assault went on for a while. Microsoft wasn't the only company nervous about Linux, either. It was really the whole industry pushing back against this strange newcomer. Anyone with a stake in UNIX, for instance, was liable to see Linux as a usurper. In one famous case, SCO, which had shipped a version of UNIX, filed a series of lawsuits over more than ten years trying to stop the spread of Linux. SCO ultimately failed, and went bankrupt. Meanwhile, Microsoft kept watching for its opening. It had to make a move. It just wasn't yet clear what that move should be.
Steven Vaughan-Nichols: [00:10:30] What really worried Microsoft came the following year, in 2000, when IBM announced it would invest a billion dollars in Linux in 2001. Now, IBM was getting out of the PC business. They weren't all the way out yet, but they were headed in that direction, and they saw Linux as the future of servers and mainframes. Spoiler alert: IBM was right. Linux would come to rule the server world.
Saron Yitbarek: This was no longer just a bunch of hackers enjoying Jedi-like control of the command line. Money was pouring in, and it gave Linux a huge boost. John "Mad Dog" Hall, executive director of Linux International, has a story that explains why. We reached him by phone.
John Hall: [00:11:30] A friend of mine named Dirk Holden was a systems administrator at Deutsche Bank in Germany, and he had also worked on the early X Window System graphics projects for personal computers. One day I visited him at the bank and said, "Dirk, you've got 3,000 servers here running Linux. Why not Microsoft NT?" He looked at me and said, "Yes, I have 3,000 servers. If they ran Microsoft Windows NT, I would need 2,999 system administrators." And he went on, "With Linux, I only need four." It was the perfect answer.
Saron Yitbarek: [00:12:00] The things programmers were fascinated by turned out to be deeply attractive to big companies as well. But because of all the FUD, some businesses stayed cautious. They heard "open source" and thought: open source? That doesn't look reliable. It's messy, it's full of bugs. But as that bank manager pointed out, money has a funny way of convincing people to get over their hang-ups. Even small businesses that just needed a website joined the Linux camp. Compared with some of the expensive proprietary options, a cheap Linux system was unbeatable on cost. If you were a shop hiring professionals to build your website, you definitely wanted them using Linux.
[00:12:30] Fast-forward a few years. Linux is running everybody's websites. Linux has conquered the server world, and then the smartphone arrives. Apple and its iPhone take a sizable share of the market, of course, and Microsoft wants in too, but surprisingly, Linux is there as well, ready and itching to flex its muscles.
Writer and journalist James Allworth.
James Allworth: [00:13:00] There was certainly room for a second contender. That could have been Microsoft, but instead it was Android, which is fundamentally based on Linux. Android, as everyone knows, was acquired by Google and now runs on most of the world's smartphones; Google built Android on top of Linux. Linux let them start from an extremely sophisticated operating system at zero cost. They pulled it off, and in the end they shut Microsoft out of the next generation of devices, at least from an operating system perspective.
Saron Yitbarek: [00:13:30] The ground was caving in, and to a great extent Microsoft was at risk of being buried. John Gossman is chief architect on Microsoft's Azure team. He remembers the confusion that gripped the company back then.
John Gossman: [00:14:00] Like many companies, Microsoft worried a great deal about IP contamination. They believed that if you let developers use open source, they would most likely just copy and paste some code into some product, some viral license would kick in, and unknown risks would follow... I think it was about company culture. A lot of companies, Microsoft included, were confused about the difference between what open source development means and what the business model is. There was this view that open source meant all your software was free and people would never pay.
Saron Yitbarek: [00:14:30] Anyone invested in the old proprietary software model was going to feel threatened by what was happening here. And when you threaten a company as big as Microsoft, yes, they're going to react. It made sense for them to push all that FUD (fear, uncertainty, and doubt). At the time, that was basically how business was done: companies competed head to head. If they'd been any other company, they might have nursed the grudge and clung to the old thinking for good, but in 2013, everything changed.
[00:15:00] Microsoft's cloud computing service, Azure, went live, and astonishingly, it offered Linux virtual machines from day one. Steve Ballmer, the CEO who had called Linux a cancer, was gone, replaced by a new and visionary CEO, Satya Nadella.
John Gossman: Satya had a different view. He's from another generation, younger than Paul and Bill and Steve, and he saw open source differently.
Saron Yitbarek: John Gossman, again, from Microsoft's Azure team.
John Gossman: [00:16:00] About four years ago, out of practical necessity, we added Linux support to Azure. If you visit any enterprise customer, you'll find they aren't trying to decide between Windows and Linux, or between .NET and Java™. They made those decisions long ago; those debates happened about 15 years back. Every company I've seen now runs a mix of Linux and Java, Windows and .NET, SQL Server, Oracle and MySQL, proprietary-source products and open source products.
If you're operating a cloud and letting these companies run their businesses on it, you can't simply tell them, "You may use this software, but you may not use that software."
Saron Yitbarek: [00:16:30] That's exactly the philosophy Satya Nadella adopted. In the fall of 2014, he stood on stage with an important message to deliver: Microsoft loves Linux. He went on to say that 20% of Azure's workloads were already Linux, and that Microsoft would always provide first-class support for Linux distributions. Not a trace of the old grudge against open source.
To drive the point home, a giant sign hung behind them reading "Microsoft hearts Linux." Wow. For some of us, the turnabout was a bit of a shock, but really, it shouldn't have been. Here's Steven Levy, tech journalist and author.
Steven Levy: [00:17:30] When you're playing football and the turf gets slippery, maybe you change into a different pair of shoes. That's what they did. They couldn't deny reality, and there were smart people there, so they had to recognize that this is how the world works, no matter what they'd said earlier. Even if their old statements were embarrassing, it would have been crazy to let everything they'd once said about how terrible open source was get in the way of making a sensible decision now.
Saron Yitbarek: [00:18:00] Microsoft swallowed its pride. You may remember how Apple, after years of going it alone, finally turned around and built a partnership with Microsoft. Now it was Microsoft's turn for a 180. After years of fighting the open source approach, they were reinventing themselves. Change, or die. Steven Vaughan-Nichols.
Steven Vaughan-Nichols: [00:18:30] Even a company of Microsoft's size can't compete with the thousands of open source developers working on Linux and all the other big projects. For a long time they refused to accept that. Former Microsoft CEO Steve Ballmer despised Linux; because of its GPL license, he called Linux a cancer. But once Ballmer was shown the door, the new Microsoft leadership said, "This is like commanding the tide not to come in. The tide keeps rolling in anyway. We should work with Linux, not against it."
Saron Yitbarek: [00:19:00] Truly, one of the greatest victories in the history of internet technology is that Microsoft was able to make that turn when it finally decided to. Of course, when Microsoft showed up at the open source table, the old, hardcore Linux faithful were plenty skeptical. They weren't sure they could accept these guys. But as Vaughan-Nichols points out, today's Microsoft simply isn't your parents' Microsoft.
Steven Vaughan-Nichols: [00:19:30] The Microsoft of 2017 is neither Steve Ballmer's Microsoft nor Bill Gates's Microsoft. It's a completely different company with a completely different approach, and besides, once open source is out, it can't be taken back. Open source has swallowed the entire technology world. People who have never heard of Linux may not know it, but every time they visit Facebook, they're running Linux. Every time you run a Google search, you're running Linux.
[00:20:00] Every time you use an Android phone, you're running Linux. It really is everywhere. Microsoft can't stop it, and I think the idea that Microsoft could somehow take it over is naive.
Saron Yitbarek: [00:20:30] Open source supporters may have worried all along that Microsoft would be a wolf slipping into the flock, but the truth is that the very nature of open source software protects it from being fully controlled. No company can own Linux and steer it in some particular direction. Greg Kroah-Hartman is a fellow at the Linux Foundation.
Greg Kroah-Hartman: Every company and every individual contributes to Linux for selfish reasons. They do it because they want to solve a problem they're facing: maybe some hardware doesn't work, or they want to add a new feature to do something else, or they want to use it in their product. That's great, because they contribute the code back, and from then on everyone benefits and everyone gets to use that code. It's precisely because of that selfishness that all the companies, all the people, benefit.
Saron Yitbarek: [00:21:30] Microsoft had realized that in the coming cloud wars, fighting Linux would be like fighting the air. Linux and open source aren't the enemy; they're the atmosphere. Today, Microsoft is a platinum member of the Linux Foundation. They became the number-one contributor to open source projects on GitHub. In September 2017, they even joined the Open Source Initiative. Microsoft now releases a great deal of code under open licenses. Microsoft's John Gossman describes what happened when they open sourced .NET. At first, they didn't expect to get anything back.
John Gossman: [00:22:00] We weren't counting on contributions from the community, and yet, three years on, over 50% of the contributions to the .NET framework libraries come from outside Microsoft. That includes a lot of code. Samsung contributed ARM support for .NET. Intel and ARM and several other chip vendors have contributed processor-specific code generation for the .NET framework, along with a staggering number of fixes, performance improvements, and so on, from individual contributors and from the community.
Saron Yitbarek: Until a few years ago, the Microsoft we have today, this open Microsoft, was unthinkable.
[00:23:00] I'm Saron Yitbarek, and this is Command Line Heroes. Alright, we've seen the fierce fight for the love of millions of desktop users. We've seen open source software rise quietly behind the backs of the proprietary giants and seize huge market share. We've seen wave after wave of command line heroes shape the programming world into what you see now. Today, big business is absorbing open source software, and through it all, everyone is benefiting from everyone else.
[00:23:30] In the wild west of tech, it has always been this way. Apple was inspired by Xerox, Microsoft was inspired by Apple, Linux was inspired by UNIX. Evolve, borrow, keep growing. To put it in David-and-Goliath terms, open source software is no longer David, but, you know what? It's not Goliath either. Open source has moved beyond that story. It has become the battlefield on which others fight. As open source becomes inevitable, new wars, the ones fought in the cloud, the ones fought on the open source battlefield, are on the rise.
Here's author Steven Levy.
Steven Levy: [00:24:00] Basically, right now we have four or five companies, Microsoft among them, working in all sorts of ways to make themselves the platform we live and work on, in artificial intelligence, for example. You can see the war among the intelligent assistants, and guess what? Apple has one, called Siri. Microsoft has Cortana. Google has Google Assistant. Samsung has one too. Amazon has one as well: Alexa. We see these battles everywhere. You could argue that the hottest AI platform will end up controlling everything in our lives, and that is exactly what these five companies are fighting over.
Saron Yitbarek: These days it's hard to find another rebel that could sneak up on Facebook, Google, or Amazon the way Linux blindsided Microsoft. Because, as writer James Allworth points out, being a true rebel only gets harder and harder.
James Allworth: [00:25:30] Scale has always been an advantage, but the nature of scale advantages has, how should I put it, I think they used to be linear in nature, and now they're exponential. Once you start pulling ahead in some way, it becomes harder and harder for a new player to catch up. I think this is broadly true in the internet era, whether it's because of scale itself or because of the importance and advantage that data confers on an organization's ability to compete. Once you're ahead, you attract more customers, which gives you more data, which lets you do even better, and after that, why would a customer ever pick the number-two company when it has fallen so far behind? I don't think the logic will be any different in the era of the cloud.
Saron Yitbarek: [00:26:00] This story began with extraordinary heroes like Steve Jobs and Bill Gates, but the march of technology has taken on a crowdsourced, organic feel. It's said that our open source hero Linus Torvalds didn't even have a real plan when he first invented the Linux kernel. He was undoubtedly a brilliant young developer, but he was also like a single drop of water at the front of the tide. The change was inevitable. It has been estimated that building a Linux distribution the old, proprietary way would cost a company more than 10 billion dollars. That says something about the power of open source.
[00:26:30] In the end, it's not something the proprietary model can compete with. Successful companies have to stay open. That's the biggest, the ultimate, lesson. And one more thing to remember: when we're connected to each other, our ability to grow and build on what already exists is limitless. However big these companies are, we don't have to sit and wait for them to give us something better. Think of the new developers learning to code for the sheer joy of creating, the people who build what they need with their own hands.
Wherever the great programmers of the future come from, as long as they can get at the code, they'll be able to build the next big thing.
[00:27:30] That's our two-part story of the OS wars that shaped our digital lives. The fight for dominance moved from the desktop to the server room and, finally, into the cloud. Old enemies became unlikely allies, and a crowdsourced future left everything open. Listen, I know there are plenty of heroes we didn't get to on this tour through history, so write to us. Share your story: Redhat.com/commandlineheroes. I can't wait to hear it.
For the rest of the season, we'll be learning about what today's heroes are creating, and the battles they fight to turn their creations into reality. Let's come back for more legendary stories from the hard-fought front lines of programming. We release a new episode every two weeks. In a couple of weeks, we'll bring you Episode Three: the Agile Revolution.
[00:28:00] Command Line Heroes is an original podcast from Red Hat. To get new episodes of Command Line Heroes delivered automatically, for free, subscribe to the show. Just search for "Command Line Heroes" in Apple Podcasts, Spotify, Google Play, or wherever you listen, then hit subscribe. You'll be the first to know when a new episode drops.
I'm Saron Yitbarek. Thanks for listening. Keep on coding.
--------------------------------------------------------------------------------
via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux
author: [redhat][a]
selected by: [lujun9972][b]
translator: [译者ID](https://github.com/译者ID)
proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.redhat.com
[b]: https://github.com/lujun9972

View File

@@ -0,0 +1,145 @@
4 Ways to Customize the Xfce Desktop and Give It a Modern Look
======
**Brief: Xfce is a great lightweight desktop environment with one drawback: it looks a bit old-fashioned. But you don't have to stick with the default look. Let's see the various ways you can customize Xfce to give it a modern, beautiful appearance.**
![Customize Xfce desktop envirnment][1]
首先Xfce 是最[受欢迎的桌面环境][2]之一。作为一个轻量级桌面环境,你可以在非常低的资源上运行 Xfce ,并且,它仍然很好工作。这是为什么很多[轻量级 Linux 发行版][3]默认使用 Xfce 的原因之一。
一些人甚至喜欢在高端设备上使用它,说明它的简单性、易用性和非资源匮乏性是主要原因。
[Xfce][4] 是自身很小,并只提供你需要的东西。令人烦恼的事是觉得它的外观和感觉很老了。然而,你可以简单地自定义 Xfce 来看起来现代化和漂亮,而不达到 Unity/GNOME 会话占用系统资源的极限。
### 4 种方式来自定义 Xfce 桌面
让我们看看一些方法,我们可以通过这些方法改善你的 Xfce 桌面环境的外观和感觉。
默认 Xfce 桌面环境看起来有些像这样:
![Xfce default screen][5]
As you can see, the default Xfce desktop is a bit dull. We'll use themes, icon packs, and a new dock to make it look fresh and striking.
#### 1. Change the theme in Xfce
The first thing we'll do is find a theme on [xfce-look.org][6]. My favorite Xfce theme is [XFCE-D-PRO][7].
You can download the theme from [here][8] and extract it somewhere.
Copy the extracted theme files to the **.themes** folder in your home directory. If the folder doesn't exist by default, create it. Likewise, icons need a **.icons** folder in your home directory.
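As a concrete sketch of those steps from the shell (the archive name matches the XFCE-D-PRO 1.6 download linked above; the ~/Downloads location is an assumption about where you saved it):
```
# create the folders if they don't already exist
mkdir -p ~/.themes ~/.icons

# unpack the downloaded theme into ~/.themes
tar -xf ~/Downloads/XFCE-D-PRO-1.6.tar.xz -C ~/.themes
```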
Open **Settings > Appearance > Style** to select the theme, then log out and log back in to see the change. The default Adwaita-dark is also an excellent choice.
![Appearance Xfce][9]
You can use any of a number of [good GTK themes][10] on Xfce.
#### 2. Change icons in Xfce
Xfce-look.org also offers icon themes you can download. Extract one and place the icons in the **.icons** directory in your home directory. Once you've added an icon theme to the .icons directory, go to **Settings > Appearance > Icons** to select it.
![Moka icon theme][11]
I installed the [Moka icon set][12], and it looks stunning.
![Moka theme][13]
You can also check out our list of [awesome icon themes][14].
##### **Optional: installing themes via Synaptic**
If you'd rather not hunt for and copy files by hand, install the Synaptic package manager on your system. You can find the best themes and icon sets on the web, and then search for and install them with Synaptic.
```
sudo apt-get install synaptic
```
**Searching for and installing themes/icons via Synaptic**
Open Synaptic and click **Search**. Enter the name of the theme you want, and it will show a list of matching packages. Mark the changes, along with any additional required dependencies, and click **Apply**. This downloads and installs the theme.
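If you already know the package name, installing from the terminal works just as well. A sketch assuming the Arc theme shown below, which Debian and Ubuntu package as **arc-theme**:
```
sudo apt-get install arc-theme
```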
![Arc Theme][15]
Once installed, you can open the **Appearance** settings to select the theme you want.
In my opinion, this isn't the best way to install themes in Xfce.
#### 3. Change the desktop wallpaper in Xfce
Once again, the default Xfce wallpaper isn't bad at all, but you can change it to something that matches your icons and theme.
To change the wallpaper in Xfce, right-click on the desktop and click **Desktop Settings**. Under the **Background** section, choose any of the default wallpapers or one from your own collection.
![Changing desktop wallpapers][16]
#### 4. Change the dock in Xfce
The default dock is fine and does its job. But, once again, it looks a bit dull.
![Docky][17]
However, if you want your dock to look better, with a few more customization options, you can install another dock.
Plank is one of the simplest, lightest, and most configurable.
To install Plank, use the command below:
`sudo apt-get install plank`
If Plank isn't available in the default repositories, you can install it from this PPA:
```
sudo add-apt-repository ppa:ricotz/docky
sudo apt-get update
sudo apt-get install plank
```
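Not sure whether you need the PPA? You can first check what your configured repositories offer (apt-cache is part of standard apt; the only assumption here is a Debian-family system):
```
apt-cache policy plank
```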
Before using Plank, you should remove the default dock by right-clicking on it and deleting it under panel settings.
Once that's done, go to **Accessories > Plank** to launch the Plank dock.
![Plank][18]
Plank picks its icons from the icon theme you're using, so if you change your icon theme, you'll see the change reflected in the dock as well.
### Wrapping up
XFCE is a lightweight, fast, and highly customizable desktop environment. It serves you well if you're short on system resources, and you can easily customize it to look better. Here's how my screen looks after applying these steps:
![XFCE desktop][19]
That's the result of just half an hour of tinkering. You can make it look even better with different theme/icon combinations. Feel free to share your customized XFCE desktop in the comments, along with the theme and icon combination you're using.
--------------------------------------------------------------------------------
via: https://itsfoss.com/customize-xfce/
author: [Ambarish Kumar][a]
selected by: [lujun9972](https://github.com/lujun9972)
translator: [robsean](https://github.com/robsean)
proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://itsfoss.com/author/ambarish/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/xfce-customization.jpeg
[2]:https://itsfoss.com/best-linux-desktop-environments/
[3]:https://itsfoss.com/lightweight-linux-beginners/
[4]:https://xfce.org/
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/1-1-800x410.jpg
[6]:http://xfce-look.org
[7]:https://www.xfce-look.org/p/1207818/XFCE-D-PRO
[8]:https://www.xfce-look.org/p/1207818/startdownload?file_id=1523730502&file_name=XFCE-D-PRO-1.6.tar.xz&file_type=application/x-xz&file_size=105328&url=https%3A%2F%2Fdl.opendesktop.org%2Fapi%2Ffiles%2Fdownloadfile%2Fid%2F1523730502%2Fs%2F6019b2b57a1452471eac6403ae1522da%2Ft%2F1529360682%2Fu%2F%2FXFCE-D-PRO-1.6.tar.xz
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/4.jpg
[10]:https://itsfoss.com/best-gtk-themes/
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/6.jpg
[12]:https://snwh.org/moka
[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/11-800x547.jpg
[14]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/5-800x531.jpg
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/7-800x546.jpg
[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/8.jpg
[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/9.jpg
[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/10-800x447.jpg