From 523ed70853c845ed4944ebb553dff4d0c3990b8e Mon Sep 17 00:00:00 2001
From: Xingyu Wang
Date: Sat, 1 Feb 2020 12:15:05 +0800
Subject: [PATCH] Remove articles that are too old
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 .../20200117 Fedora CoreOS out of preview.md | 108 ---
 ...ft release open source machine learning.md | 80 --
 ...the developer, and more industry trends.md | 61 --
 ...ript Fatigue- Realities of our industry.md | 221 -----
 .../20171030 Why I love technical debt.md | 69 --
 ... How to Monetize an Open Source Project.md | 86 --
 ...air writing helps improve documentation.md | 87 --
 ... and How to Set an Open Source Strategy.md | 120 ---
 ...71116 Why is collaboration so difficult.md | 94 --
 ...lved our transparency and silo problems.md | 95 --
 ...and GitHub to improve its documentation.md | 116 ---
 ... sourcing movements can share knowledge.md | 121 ---
 ... the cost of structured data is reduced.md | 181 ----
 ...ering- A new paradigm for cybersecurity.md | 87 --
 ...eat resume that actually gets you hired.md | 395 ---------
 ...lopment process that puts quality first.md | 99 ---
 ...nframes Aren-t Going Away Any Time Soon.md | 73 --
 ...ywhere Is Dead, Long Live Anarchy Linux.md | 127 ---
 ... even if you don-t identify as a writer.md | 149 ----
 ...ser community makes for better software.md | 47 -
 ...an anonymity and accountability coexist.md | 79 --
 ...0216 Q4OS Makes Linux Easy for Everyone.md | 140 ---
 ...en naming software development projects.md | 91 --
 ...80221 3 warning flags of DevOps metrics.md | 42 -
 ...0180222 3 reasons to say -no- in DevOps.md | 105 ---
 ... Give Life to a Mobile Linux Experience.md | 123 ---
 ...ortant issue in a DevOps transformation.md | 91 --
 ...301 How to hire the right DevOps talent.md | 48 -
 ... as team on today-s open source project.md | 53 --
 ...303 4 meetup ideas- Make your data open.md | 75 --
 ...How to apply systems thinking in DevOps.md | 89 --
 ...ommunity will help your project succeed.md | 111 ---
 ...Growing an Open Source Project Too Fast.md | 40 -
 ...comers- A guide for advanced developers.md | 119 ---
 ...en Source Projects With These Platforms.md | 96 --
 ...for better agile retrospective meetings.md | 66 --
 ...180323 7 steps to DevOps hiring success.md | 56 --
 ... Android Auto emulator for Raspberry Pi.md | 81 --
 ...one should avoid with hybrid multicloud.md | 87 --
 ...0180404 Is the term DevSecOps necessary.md | 51 --
 ...ing -ownership- across the organization.md | 125 ---
 .../talk/20180410 Microservices Explained.md | 61 --
 ...ent, from coordination to collaboration.md | 71 --
 ...back up your people, not just your data.md | 79 --
 ... develop the FOSS leaders of the future.md | 93 --
 ...mpatible with part-time community teams.md | 73 --
 ...pen source project-s workflow on GitHub.md | 109 ---
 ...Could Be Costing to More Than You Think.md | 39 -
 ...s a Server in Every Serverless Platform.md | 87 --
 ...80511 Looking at the Lispy side of Perl.md | 357 --------
 ...7 Whatever Happened to the Semantic Web.md | 106 ---
 ...nciples of resilience for women in tech.md | 93 --
 ... AI Is Coming to Edge Computing Devices.md | 66 --
 ... list for open organization enthusiasts.md | 133 ---
 ...d avoid with hybrid multi-cloud, part 2.md | 68 --
 ...g your project and community on Twitter.md | 157 ----
 ...rate to the world of Linux from Windows.md | 154 ----
 ...ones can teach us about open innovation.md | 49 -
 ...Ren-Py for creating interactive fiction.md | 70 --
 ...ce Certification Matters More Than Ever.md | 49 -
 ... Linux and Windows Without Dual Booting.md | 141 ---
 ...eveloper 9 experiences you ll encounter.md | 141 ---
 ... a multi-microphone hearing aid project.md | 69 --
 ...Confessions of a recovering Perl hacker.md | 46 -
 ... Success with Open Source Certification.md | 63 --
 .../talk/20180719 Finding Jobs in Software.md | 90 --
 ...e Certification- Preparing for the Exam.md | 64 --
 ...ur workloads to the cloud is a bad idea.md | 71 --
 ...jargon- The good, the bad, and the ugly.md | 108 ---
 ...180802 Design thinking as a way of life.md | 95 --
 ...rammer in an underrepresented community.md | 94 --
 ...lding more trustful teams in four steps.md | 70 --
 ...ur team to a microservices architecture.md | 180 ----
 .../20180809 How do tools affect culture.md | 56 --
 ...eimplement Inheritance and Polymorphism.md | 235 -----
 ...t the Evolution of the Desktop Computer.md | 130 ---
 ...Ru makes a college education affordable.md | 60 --
 ...atient data safe with open source tools.md | 51 --
 ...source projects for the new school year.md | 59 --
 ...80906 DevOps- The consequences of blame.md | 67 --
 ...he Rise and Demise of RSS (Old Version).md | 278 ------
 ...80917 How gaming turned me into a coder.md | 103 ---
 ...Building a Secure Ecosystem for Node.js.md | 51 --
 ...ubleshooting Node.js Issues with llnode.md | 75 --
 ...1003 13 tools to measure DevOps success.md | 84 --
 ...sier to Get a Payrise by Switching Jobs.md | 99 ---
 ...es for giving open source code feedback.md | 47 -
 ...sational interface design and usability.md | 105 ---
 ... your organization-s security expertise.md | 147 ---
 ...reasons not to write in-house ops tools.md | 64 --
 ...pen source classifiers in AI algorithms.md | 111 ---
 ... BeOS or not to BeOS, that is the Haiku.md | 151 ----
 ...out leveling up a heroic developer team.md | 213 -----
 ...tips for facilitators of agile meetings.md | 60 --
 ...open source hardware increases security.md | 84 --
 ...ntinuous testing wrong - Opensource.com.md | 184 ----
 ...rce in education creates new developers.md | 65 --
 ...derstanding a -nix Shell by Writing One.md | 412 ---------
 ...seen these personalities in open source.md | 93 --
 .../20181114 Analyzing the DNA of DevOps.md | 158 ----
 ...open source- 9 tips for getting started.md | 76 --
 ... Closer Look at Voice-Assisted Speakers.md | 125 ---
 ...t the open source community means to me.md | 94 --
 ...9 top tech-recruiting mistakes to avoid.md | 108 ---
 ...back is important to the DevOps culture.md | 68 --
 ... emerging tipping points in open source.md | 93 --
 ... reasons to give Linux for the holidays.md | 78 --
 ...in Linux Kernel Code Replaced with -Hug.md | 81 --
 ...nately, Garbage Collection isn-t Enough.md | 44 -
 ...ware delivery with value stream mapping.md | 94 --
 ...on the Desktop- Are We Nearly There Yet.md | 344 -------
 .../talk/20181209 Open source DIY ethics.md | 62 --
 ... tips to help non-techies move to Linux.md | 111 ---
 ...-t Succeeded on Desktop- Linus Torvalds.md | 65 --
 ...ipten, LDC and bindbc-sdl (translation).md | 276 ------
 ...ch skill in 2019- What you need to know.md | 145 ---
 ...ons for artificial intelligence in 2019.md | 91 --
 ... Don-t Use ZFS on Linux- Linus Torvalds.md | 82 --
 ...g- gene signatures and connectivity map.md | 133 ---
 ...hannels are bad and you should feel bad.md | 443 ----------
 sources/tech/20170115 Magic GOPATH.md | 119 ---
 ...eboard problems in pure Lambda Calculus.md | 836 ------------------
 ...20171006 7 deadly sins of documentation.md | 85 --
 ...nes and Android Architecture Components.md | 201 -----
 ...ute Once with Xen Linux TPM 2.0 and TXT.md | 94 --
 ...o Mint and Quicken for personal finance.md | 96 --
 ...1114 Finding Files with mlocate- Part 2.md | 174 ----
 ...ux Programs for Drawing and Image Editing.md | 130 ---
 ...1121 Finding Files with mlocate- Part 3.md | 142 ---
 ...eractive Workflows for Cpp with Jupyter.md | 301 -------
 ...usiness Software Alternatives For Linux.md | 117 ---
 ...power of community with organized chaos.md | 110 ---
 ...erve Scientific and Medical Communities.md | 170 ----
 ... millions of Linux users with Snapcraft.md | 321 -------
 ...xtensions You Should Be Using Right Now.md | 307 -------
 ...ings Flexibility and Choice to openSUSE.md | 114 ---
 ...n must include people with disabilities.md | 67 --
 sources/tech/20171224 My first Rust macro.md | 145 ---
 .../20180108 Debbugs Versioning- Merging.md | 80 --
 ...erTux- A Linux Take on Super Mario Game.md | 77 --
 ...et a compelling reason to turn to Linux.md | 70 --
 ...ures resolving symbol addresses is hard.md | 163 ----
 ...180114 Playing Quake 4 on Linux in 2018.md | 80 --
 ...To Create A Bootable Zorin OS USB Drive.md | 315 -------
 ...Top 6 open source desktop email clients.md | 115 ---
 ... Perl module a minimalist web framework.md | 106 ---
 ...urity features installing apps and more.md | 245 -----
 ... for using CUPS for printing with Linux.md | 101 ---
 ...here MQ programming in Python with Zato.md | 262 ------
 ... and manage MacOS LaunchAgents using Go.md | 314 -------
 ...security risks in open source libraries.md | 249 ------
 .../tech/20180130 Trying Other Go Versions.md | 112 ---
 ...chem group subversion repository to Git.md | 223 -----
 ...y a React App on a DigitalOcean Droplet.md | 199 -----
 .../tech/20180202 CompositeAcceleration.md | 211 -----
 ...h the openbox windows manager in Fedora.md | 216 -----
 ...0205 Writing eBPF tracing tools in Rust.md | 258 ------
 ...art writing macros in LibreOffice Basic.md | 332 -------
 ...e to create interactive adventure games.md | 299 -------
 ...20180211 Latching Mutations with GitOps.md | 60 --
 ...t Is sosreport- How To Create sosreport.md | 195 ----
 ...A Comparison of Three Linux -App Stores.md | 128 ---
 ...n source card and board games for Linux.md | 103 ---
 ...hite male asshole, by a former offender.md | 153 ----
 ...o create an open source stack using EFK.md | 388 --------
 .../tech/20180327 Anna A KVS for any scale.md | 139 ---
 ...n to the Flask Python web app framework.md | 451 ----------
 ... Importer Tool Rewritten in C plus plus.md | 70 --
 ...ipt to your Java enterprise with Vert.x.md | 362 --------
 ...80411 5 Best Feed Reader Apps for Linux.md | 192 ----
 ...custom Linux settings with DistroTweaks.md | 108 ---
 ... Getting started with Jenkins Pipelines.md | 352 --------
 ...0180413 Redcore Linux Makes Gentoo Easy.md | 89 --
 ...iting Advanced Web Applications with Go.md | 695 ---------------
 ...y way to add free books to your eReader.md | 179 ----
 ...x filesystem forensics - Opensource.com.md | 342 -------
 ...aging virtual environments with Vagrant.md | 488 ----------
 ... An easy way to generate RPG characters.md | 136 ---
 ...istributed tracing system work together.md | 156 ----
 ... Modularity in Fedora 28 Server Edition.md | 76 --
 ...507 Multinomial Logistic Classification.md | 215 -----
 ...ux Revives Your Older Computer [Review].md | 114 ---
 ...ghtBSD Could Be Your Gateway to FreeBSD.md | 180 ----
 ...to the Pyramid web framework for Python.md | 617 -------------
 ...ust, flexible virtual tabletop for RPGs.md | 216 -----
 ...t The Historical Uptime Of Linux System.md | 330 -------
 ...id into a Linux development environment.md | 81 --
 ...w to Enable Click to Minimize On Ubuntu.md | 102 ---
 ... BSD Distribution for the Desktop Users.md | 147 ---
 ...ustralian TV Channels to a Raspberry Pi.md | 209 -----
 ... Go runtime implements maps efficiently.md | 355 --------
 ... Build an Amazon Echo with Raspberry Pi.md | 374 --------
 ...1 3 open source music players for Linux.md | 128 ---
 ...Get Started with Snap Packages in Linux.md | 159 ----
 ...How to Install and Use Flatpak on Linux.md | 167 ----
 ...ping tools to extract data from the web.md | 207 -----
 ...ing an older relative online with Linux.md | 76 --
 ...n books for Linux and open source types.md | 113 ---
 ...e tools to make literature reviews easy.md | 73 --
 ...Ledger for YNAB-like envelope budgeting.md | 143 ---
 ...h tips for everyday at the command line.md | 593 -------------
 ...t apps with Pronghorn, a Java framework.md | 120 ---
 ...180621 Troubleshooting a Buildah script.md | 179 ----
 ...corn Archimedes Games on a Raspberry Pi.md | 539 -----------
 ...629 Discover hidden gems in LibreOffice.md | 97 --
 ...ging Linux applications becoming a snap.md | 148 ----
 ...Browse Stack Overflow From The Terminal.md | 188 ----
 ...gs to do After Installing Linux Mint 19.md | 223 -----
 ...702 5 open source alternatives to Skype.md | 101 ---
 ...v4 launch an optimism born of necessity.md | 91 --
 ...Scheme for the Software Defined Vehicle.md | 88 --
 ...6 Using Ansible to set up a workstation.md | 168 ----
 ... simple and elegant free podcast player.md | 119 ---
 ...The aftermath of the Gentoo GitHub hack.md | 72 --
 ...ource racing and flying games for Linux.md | 102 ---
 ... Snapshot And Restore Utility For Linux.md | 237 -----
 ...mand Line With OpenSubtitlesDownload.py.md | 221 -----
 ...ner image- Meeting the legal challenges.md | 64 --
 ...tandard Notes for encrypted note-taking.md | 299 -------
 ...lusively Created for Microsoft Exchange.md | 114 ---
 ...0180801 Migrating Perl 5 code to Perl 6.md | 77 --
 ...2 Walkthrough On How To Use GNOME Boxes.md | 117 ---
 ...ora Server to create a router - gateway.md | 285 ------
 ...NU Make to load 1.4GB of data every day.md | 126 ---
 ...ecryption Effect Seen On Sneakers Movie.md | 110 ---
 ...806 Use Gstreamer and Python to rip CDs.md | 312 -------
 ...fix, an open source mail transfer agent.md | 334 -------
 ...Quality sound, open source music player.md | 105 ---
 ...E- 6 reasons to love this Linux desktop.md | 71 --
 ...garden with Edraw Max - FOSS adventures.md | 74 --
 .../20180816 Garbage collection in Perl 6.md | 121 ---
 ...ryaLinux- A Distribution and a Platform.md | 224 -----
 ... a new open source web development tool.md | 282 ------
 ...r behaviour on my competitor-s websites.md | 117 ---
 ...owchart and diagramming tools for Linux.md | 186 ----
 ... books to your eReader- Formatting tips.md | 183 ----
 ...sktop Client With VODs And Chat Support.md | 126 ---
 ...20180829 4 open source monitoring tools.md | 143 ---
 sources/tech/20180829 Containers in Perl 6.md | 174 ----
 ...0830 A quick guide to DNF for yum users.md | 131 ---
 ... your website across all mobile devices.md | 85 --
 ...ow subroutine signatures work in Perl 6.md | 335 -------
 ...reat Desktop for the Open Source Purist.md | 114 ---
 ...diobook Player For DRM-Free Audio Files.md | 72 --
 ...st your own cloud with Raspberry Pi NAS.md | 128 ---
 ...r Own Streaming Media Server In Minutes.md | 171 ----
 ...ibuted tracing in a microservices world.md | 113 ---
 ...untu Linux With Kazam -Beginner-s Guide.md | 185 ----
 ...oint is a Delight for Stealth Game Fans.md | 104 ---
 ...s To Find Out Process ID (PID) In Linux.md | 208 -----
 ... the Audiophile Linux distro for a spin.md | 161 ----
 ...29 Use Cozy to Play Audiobooks in Linux.md | 138 ---
 .../tech/20181003 Manage NTP with Chrony.md | 291 ------
 ... 4 Must-Have Tools for Monitoring Linux.md | 102 ---
 ... to access educational material offline.md | 107 ---
 ...erna, a web-based information organizer.md | 128 ---
 ...tion to Ansible Operators in Kubernetes.md | 81 --
 ...ckage installation for the Raspberry Pi.md | 87 --
 ...ting upstream releases with release-bot.md | 327 -------
 ...source alternatives to Microsoft Access.md | 94 --
 ...tive, JavaScript timeline building tool.md | 82 --
 ...irmware Version from Linux Command Line.md | 131 ---
 ... data streams on the Linux command line.md | 302 -------
 ... started with OKD on your Linux desktop.md | 407 ---------
 ...How to manage storage on Linux with LVM.md | 237 -----
 ...es Installed From Particular Repository.md | 342 -------
 ...s on running new software in production.md | 151 ----
 ...Behind the scenes with Linux containers.md | 205 -----
 ...o After Installing elementary OS 5 Juno.md | 260 ------
 ...how C-- destructors are useful in Envoy.md | 130 ---
 ...20181122 Getting started with Jenkins X.md | 148 ----
 ... scientific research Linux distribution.md | 79 --
 ...tom documentation workflows with Sphinx.md | 126 ---
 ...How to test your network with PerfSONAR.md | 148 ----
 ...nd Tutorial With Examples For Beginners.md | 192 ----
 ...earch - Quick Search GUI Tool for Linux.md | 108 ---
 ... How to view XML files in a web browser.md | 109 ---
 ... Screen Recorders for the Linux Desktop.md | 177 ----
 ...ent and delivery of a hybrid mobile app.md | 102 ---
 ...you document a tech project with comics.md | 100 ---
 ... Commands And Programs From Commandline.md | 265 ------
 ...g Flood Element for performance testing.md | 180 ----
 ...h Reliability Infrastructure Migrations.md | 78 --
 ...using KeePassX to secure your passwords.md | 78 --
 ...less Way of Using Google Drive on Linux.md | 137 ---
 ...Large files with Git- LFS and git-annex.md | 145 ---
 ...o Heaven With These 23 GNOME Extensions.md | 288 ------
 ...226 -Review- Polo File Manager in Linux.md | 139 ---
 ... model of concurrent garbage collection.md | 62 --
 ...1229 Some nonparametric statistics math.md | 178 ----
 290 files changed, 44787 deletions(-)
 delete mode 100644 sources/news/20200117 Fedora CoreOS out of preview.md
 delete mode 100644 sources/news/20200119 Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning.md
 delete mode 100644 sources/news/20200125 What 2020 brings for the developer, and more industry trends.md
 delete mode 100644 sources/talk/20170717 The Ultimate Guide to JavaScript Fatigue- Realities of our industry.md
 delete mode 100644 sources/talk/20171030 Why I love technical debt.md
 delete mode 100644 sources/talk/20171107 How to Monetize an Open Source Project.md
 delete mode 100644 sources/talk/20171114 Why pair writing helps improve documentation.md
 delete mode 100644 sources/talk/20171115 Why and How to Set an Open Source Strategy.md
 delete mode 100644 sources/talk/20171116 Why is collaboration so difficult.md
 delete mode 100644 sources/talk/20171221 Changing how we use Slack solved our transparency and silo problems.md
 delete mode 100644 sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
 delete mode 100644 sources/talk/20180111 The open organization and inner sourcing movements can share knowledge.md
 delete mode 100644 sources/talk/20180112 in which the cost of structured data is reduced.md
 delete mode 100644 sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md
 delete mode 100644 sources/talk/20180131 How to write a really great resume that actually gets you hired.md
 delete mode 100644 sources/talk/20180206 UQDS- A software-development process that puts quality first.md
 delete mode 100644 sources/talk/20180207 Why Mainframes Aren-t Going Away Any Time Soon.md
 delete mode 100644 sources/talk/20180209 Arch Anywhere Is Dead, Long Live Anarchy Linux.md
 delete mode 100644 sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md
 delete mode 100644 sources/talk/20180209 Why an involved user community makes for better software.md
 delete mode 100644 sources/talk/20180214 Can anonymity and accountability coexist.md
 delete mode 100644 sources/talk/20180216 Q4OS Makes Linux Easy for Everyone.md
 delete mode 100644 sources/talk/20180220 4 considerations when naming software development projects.md
 delete mode 100644 sources/talk/20180221 3 warning flags of DevOps metrics.md
 delete mode 100644 sources/talk/20180222 3 reasons to say -no- in DevOps.md
 delete mode 100644 sources/talk/20180223 Plasma Mobile Could Give Life to a Mobile Linux Experience.md
 delete mode 100644 sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md
 delete mode 100644 sources/talk/20180301 How to hire the right DevOps talent.md
 delete mode 100644 sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md
 delete mode 100644 sources/talk/20180303 4 meetup ideas- Make your data open.md
 delete mode 100644 sources/talk/20180314 How to apply systems thinking in DevOps.md
 delete mode 100644 sources/talk/20180315 6 ways a thriving community will help your project succeed.md
 delete mode 100644 sources/talk/20180315 Lessons Learned from Growing an Open Source Project Too Fast.md
 delete mode 100644 sources/talk/20180316 How to avoid humiliating newcomers- A guide for advanced developers.md
 delete mode 100644 sources/talk/20180320 Easily Fund Open Source Projects With These Platforms.md
 delete mode 100644 sources/talk/20180321 8 tips for better agile retrospective meetings.md
 delete mode 100644 sources/talk/20180323 7 steps to DevOps hiring success.md
 delete mode 100644 sources/talk/20180330 Meet OpenAuto, an Android Auto emulator for Raspberry Pi.md
 delete mode 100644 sources/talk/20180403 3 pitfalls everyone should avoid with hybrid multicloud.md
 delete mode 100644 sources/talk/20180404 Is the term DevSecOps necessary.md
 delete mode 100644 sources/talk/20180405 Rethinking -ownership- across the organization.md
 delete mode 100644 sources/talk/20180410 Microservices Explained.md
 delete mode 100644 sources/talk/20180412 Management, from coordination to collaboration.md
 delete mode 100644 sources/talk/20180416 For project safety back up your people, not just your data.md
 delete mode 100644 sources/talk/20180417 How to develop the FOSS leaders of the future.md
 delete mode 100644 sources/talk/20180418 Is DevOps compatible with part-time community teams.md
 delete mode 100644 sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md
 delete mode 100644 sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md
 delete mode 100644 sources/talk/20180424 There-s a Server in Every Serverless Platform.md
 delete mode 100644 sources/talk/20180511 Looking at the Lispy side of Perl.md
 delete mode 100644 sources/talk/20180527 Whatever Happened to the Semantic Web.md
 delete mode 100644 sources/talk/20180604 10 principles of resilience for women in tech.md
 delete mode 100644 sources/talk/20180613 AI Is Coming to Edge Computing Devices.md
 delete mode 100644 sources/talk/20180619 A summer reading list for open organization enthusiasts.md
 delete mode 100644 sources/talk/20180620 3 pitfalls everyone should avoid with hybrid multi-cloud, part 2.md
 delete mode 100644 sources/talk/20180622 7 tips for promoting your project and community on Twitter.md
 delete mode 100644 sources/talk/20180701 How to migrate to the world of Linux from Windows.md
 delete mode 100644 sources/talk/20180703 What Game of Thrones can teach us about open innovation.md
 delete mode 100644 sources/talk/20180704 Comparing Twine and Ren-Py for creating interactive fiction.md
 delete mode 100644 sources/talk/20180705 5 Reasons Open Source Certification Matters More Than Ever.md
 delete mode 100644 sources/talk/20180706 Robolinux Lets You Easily Run Linux and Windows Without Dual Booting.md
 delete mode 100644 sources/talk/20180711 Becoming a senior developer 9 experiences you ll encounter.md
 delete mode 100644 sources/talk/20180711 Open hardware meets open science in a multi-microphone hearing aid project.md
 delete mode 100644 sources/talk/20180716 Confessions of a recovering Perl hacker.md
 delete mode 100644 sources/talk/20180717 Tips for Success with Open Source Certification.md
 delete mode 100644 sources/talk/20180719 Finding Jobs in Software.md
 delete mode 100644 sources/talk/20180724 Open Source Certification- Preparing for the Exam.md
 delete mode 100644 sources/talk/20180724 Why moving all your workloads to the cloud is a bad idea.md
 delete mode 100644 sources/talk/20180726 Tech jargon- The good, the bad, and the ugly.md
 delete mode 100644 sources/talk/20180802 Design thinking as a way of life.md
 delete mode 100644 sources/talk/20180807 Becoming a successful programmer in an underrepresented community.md
 delete mode 100644 sources/talk/20180807 Building more trustful teams in four steps.md
 delete mode 100644 sources/talk/20180808 3 tips for moving your team to a microservices architecture.md
 delete mode 100644 sources/talk/20180809 How do tools affect culture.md
 delete mode 100644 sources/talk/20180813 Using D Features to Reimplement Inheritance and Polymorphism.md
 delete mode 100644 sources/talk/20180817 5 Things Influenza Taught Me About the Evolution of the Desktop Computer.md
 delete mode 100644 sources/talk/20180817 OERu makes a college education affordable.md
 delete mode 100644 sources/talk/20180820 Keeping patient data safe with open source tools.md
 delete mode 100644 sources/talk/20180831 3 innovative open source projects for the new school year.md
 delete mode 100644 sources/talk/20180906 DevOps- The consequences of blame.md
 delete mode 100644 sources/talk/20180916 The Rise and Demise of RSS (Old Version).md
 delete mode 100644 sources/talk/20180917 How gaming turned me into a coder.md
 delete mode 100644 sources/talk/20180920 Building a Secure Ecosystem for Node.js.md
 delete mode 100644 sources/talk/20180925 Troubleshooting Node.js Issues with llnode.md
 delete mode 100644 sources/talk/20181003 13 tools to measure DevOps success.md
 delete mode 100644 sources/talk/20181007 Why it-s Easier to Get a Payrise by Switching Jobs.md
 delete mode 100644 sources/talk/20181009 4 best practices for giving open source code feedback.md
 delete mode 100644 sources/talk/20181010 Talk over text- Conversational interface design and usability.md
 delete mode 100644 sources/talk/20181011 How to level up your organization-s security expertise.md
 delete mode 100644 sources/talk/20181017 We already have nice things, and other reasons not to write in-house ops tools.md
 delete mode 100644 sources/talk/20181018 The case for open source classifiers in AI algorithms.md
 delete mode 100644 sources/talk/20181019 To BeOS or not to BeOS, that is the Haiku.md
 delete mode 100644 sources/talk/20181023 What MMORPGs can teach us about leveling up a heroic developer team.md
 delete mode 100644 sources/talk/20181024 5 tips for facilitators of agile meetings.md
 delete mode 100644 sources/talk/20181031 How open source hardware increases security.md
 delete mode 100644 sources/talk/20181107 5 signs you are doing continuous testing wrong - Opensource.com.md
 delete mode 100644 sources/talk/20181107 How open source in education creates new developers.md
 delete mode 100644 sources/talk/20181107 Understanding a -nix Shell by Writing One.md
 delete mode 100644 sources/talk/20181113 Have you seen these personalities in open source.md
 delete mode 100644 sources/talk/20181114 Analyzing the DNA of DevOps.md
 delete mode 100644 sources/talk/20181114 Is your startup built on open source- 9 tips for getting started.md
 delete mode 100644 sources/talk/20181121 A Closer Look at Voice-Assisted Speakers.md
 delete mode 100644 sources/talk/20181127 What the open source community means to me.md
 delete mode 100644 sources/talk/20181129 9 top tech-recruiting mistakes to avoid.md
 delete mode 100644 sources/talk/20181129 Why giving back is important to the DevOps culture.md
 delete mode 100644 sources/talk/20181130 3 emerging tipping points in open source.md
 delete mode 100644 sources/talk/20181205 5 reasons to give Linux for the holidays.md
 delete mode 100644 sources/talk/20181205 F-Words in Linux Kernel Code Replaced with -Hug.md
 delete mode 100644 sources/talk/20181205 Unfortunately, Garbage Collection isn-t Enough.md
 delete mode 100644 sources/talk/20181206 6 steps to optimize software delivery with value stream mapping.md
 delete mode 100644 sources/talk/20181209 Linux on the Desktop- Are We Nearly There Yet.md
 delete mode 100644 sources/talk/20181209 Open source DIY ethics.md
 delete mode 100644 sources/talk/20181217 8 tips to help non-techies move to Linux.md
 delete mode 100644 sources/talk/20181219 Fragmentation is Why Linux Hasn-t Succeeded on Desktop- Linus Torvalds.md
 delete mode 100644 sources/talk/20181220 D in the Browser with Emscripten, LDC and bindbc-sdl (translation).md
 delete mode 100644 sources/talk/20181231 Plans to learn a new tech skill in 2019- What you need to know.md
 delete mode 100644 sources/talk/20190205 7 predictions for artificial intelligence in 2019.md
 delete mode 100644 sources/talk/20200111 Don-t Use ZFS on Linux- Linus Torvalds.md
 delete mode 100644 sources/tech/20151127 Research log- gene signatures and connectivity map.md
 delete mode 100644 sources/tech/20160302 Go channels are bad and you should feel bad.md
 delete mode 100644 sources/tech/20170115 Magic GOPATH.md
 delete mode 100644 sources/tech/20170320 Whiteboard problems in pure Lambda Calculus.md
 delete mode 100644 sources/tech/20171006 7 deadly sins of documentation.md
 delete mode 100644 sources/tech/20171006 Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components.md
 delete mode 100644 sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
 delete mode 100644 sources/tech/20171030 5 open source alternatives to Mint and Quicken for personal finance.md
 delete mode 100644 sources/tech/20171114 Finding Files with mlocate- Part 2.md
 delete mode 100644 sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md
 delete mode 100644 sources/tech/20171121 Finding Files with mlocate- Part 3.md
 delete mode 100644 sources/tech/20171129 Interactive Workflows for Cpp with Jupyter.md
 delete mode 100644 sources/tech/20171130 Excellent Business Software Alternatives For Linux.md
 delete mode 100644 sources/tech/20171130 Tap the power of community with organized chaos.md
 delete mode 100644 sources/tech/20171201 Linux Distros That Serve Scientific and Medical Communities.md
 delete mode 100644 sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md
 delete mode 100644 sources/tech/20171203 Top 20 GNOME Extensions You Should Be Using Right Now.md
 delete mode 100644 sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md
 delete mode 100644 sources/tech/20171222 Why the diversity and inclusion conversation must include people with disabilities.md
 delete mode 100644 sources/tech/20171224 My first Rust macro.md
 delete mode 100644 sources/tech/20180108 Debbugs Versioning- Merging.md
 delete mode 100644 sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md
 delete mode 100644 sources/tech/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md
 delete mode 100644 sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md
 delete mode 100644 sources/tech/20180114 Playing Quake 4 on Linux in 2018.md
 delete mode 100644 sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md
 delete mode 100644 sources/tech/20180119 Top 6 open source desktop email clients.md
 delete mode 100644 sources/tech/20180126 An introduction to the Web Simple Perl module a minimalist web framework.md
 delete mode 100644 sources/tech/20180129 CopperheadOS Security features installing apps and more.md
 delete mode 100644 sources/tech/20180129 Tips and tricks for using CUPS for printing with Linux.md
 delete mode 100644 sources/tech/20180129 WebSphere MQ programming in Python with Zato.md
 delete mode 100644 sources/tech/20180130 Create and manage MacOS LaunchAgents using Go.md
 delete mode 100644 sources/tech/20180130 Mitigating known security risks in open source libraries.md
 delete mode 100644 sources/tech/20180130 Trying Other Go Versions.md
 delete mode 100644 sources/tech/20180131 Migrating the debichem group subversion repository to Git.md
 delete mode 100644 sources/tech/20180201 I Built This - Now What How to deploy a React App on a DigitalOcean Droplet.md
 delete mode 100644 sources/tech/20180202 CompositeAcceleration.md
 delete mode 100644 sources/tech/20180205 Getting Started with the openbox windows manager in Fedora.md
 delete mode 100644 sources/tech/20180205 Writing eBPF tracing tools in Rust.md
 delete mode 100644 sources/tech/20180208 How to start writing macros in LibreOffice Basic.md
 delete mode 100644 sources/tech/20180209 How to use Twine and SugarCube to create interactive adventure games.md
 delete mode 100644 sources/tech/20180211 Latching Mutations with GitOps.md
 delete mode 100644 sources/tech/20180307 What Is sosreport- How To Create sosreport.md
 delete mode 100644 sources/tech/20180309 A Comparison of Three Linux -App Stores.md
 delete mode 100644 sources/tech/20180314 5 open source card and board games for Linux.md
 delete mode 100644 sources/tech/20180319 How to not be a white male asshole, by a former offender.md
 delete mode 100644 sources/tech/20180326 How to create an open source stack using EFK.md
 delete mode 100644 sources/tech/20180327 Anna A KVS for any scale.md
 delete mode 100644 sources/tech/20180402 An introduction to the Flask Python web app framework.md
 delete mode 100644 sources/tech/20180403 Open Source Accounting Program GnuCash 3.0 Released With a New CSV Importer Tool Rewritten in C plus plus.md
 delete mode 100644 sources/tech/20180404 Bring some JavaScript to your Java enterprise with Vert.x.md
 delete mode 100644 sources/tech/20180411 5 Best Feed Reader Apps for Linux.md
 delete mode 100644 sources/tech/20180411 Replicate your custom Linux settings with DistroTweaks.md
 delete mode 100644 sources/tech/20180412 Getting started with Jenkins Pipelines.md
 delete mode 100644 sources/tech/20180413 Redcore Linux Makes Gentoo Easy.md
 delete mode 100644 sources/tech/20180419 Writing Advanced Web Applications with Go.md
 delete mode 100644 sources/tech/20180420 A handy way to add free books to your eReader.md
 delete mode 100644 sources/tech/20180423 Breach detection with Linux filesystem forensics - Opensource.com.md
 delete mode 100644 sources/tech/20180423 Managing virtual environments with Vagrant.md
 delete mode 100644 sources/tech/20180430 PCGen- An easy way to generate RPG characters.md
 delete mode 100644 sources/tech/20180503 How the four components of a distributed tracing system work together.md
 delete mode 100644 sources/tech/20180507 Modularity in Fedora 28 Server Edition.md
 delete mode 100644 sources/tech/20180507 Multinomial Logistic Classification.md
 delete mode 100644 sources/tech/20180509 4MLinux Revives Your Older Computer [Review].md
 delete mode 100644 sources/tech/20180511 MidnightBSD Could Be Your Gateway to FreeBSD.md
 delete mode 100644 sources/tech/20180514 An introduction to the Pyramid web framework for Python.md
 delete mode 100644 sources/tech/20180514 MapTool- A robust, flexible virtual tabletop for RPGs.md
 delete mode 100644 sources/tech/20180514 Tuptime - A Tool To Report The Historical Uptime Of Linux System.md
 delete mode 100644 sources/tech/20180515 Termux turns Android into a Linux development environment.md
 delete mode 100644 sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md
 delete mode 100644 sources/tech/20180524 TrueOS- A Simple BSD Distribution for the Desktop Users.md
 delete mode 100644 sources/tech/20180527 Streaming Australian TV Channels to a Raspberry Pi.md
 delete mode 100644 sources/tech/20180529 How the Go runtime implements maps efficiently.md
 delete mode 100644 sources/tech/20180531 How to Build an Amazon Echo with Raspberry Pi.md
 delete mode 100644 sources/tech/20180601 3 open source music players for Linux.md
 delete mode 100644 sources/tech/20180601 Get Started with Snap Packages in Linux.md
 delete mode 100644 sources/tech/20180608 How to Install and Use Flatpak on Linux.md
 delete mode 100644 sources/tech/20180608 How to use screen scraping tools to extract data from the web.md
 delete mode 100644 sources/tech/20180609 4 tips for getting an older relative online with Linux.md
 delete mode 100644 sources/tech/20180611 12 fiction books for Linux and open source types.md
 delete mode 100644 sources/tech/20180612 7 open source tools to make literature reviews easy.md
 delete mode 100644 sources/tech/20180612 Using Ledger for YNAB-like envelope budgeting.md
 delete mode 100644 sources/tech/20180614 Bash tips for everyday at the command line.md
 delete mode 100644 sources/tech/20180618 Write fast apps with Pronghorn, a Java framework.md
 delete mode 100644 sources/tech/20180621 Troubleshooting a Buildah script.md
 delete mode 100644 sources/tech/20180626 Playing Badass Acorn Archimedes Games on a Raspberry Pi.md
 delete mode 100644 sources/tech/20180629 Discover hidden gems in LibreOffice.md
 delete mode 100644 sources/tech/20180629 Is implementing and managing Linux applications becoming a snap.md
 delete mode 100644 sources/tech/20180629 SoCLI - Easy Way To Search And Browse Stack Overflow From The Terminal.md
 delete mode 100644 sources/tech/20180701 12 Things to do After Installing Linux Mint 19.md
 delete mode 100644 sources/tech/20180702 5 open source alternatives to Skype.md
 delete mode 100644 sources/tech/20180702 Diggs v4 launch an optimism born of necessity.md
 delete mode 100644 sources/tech/20180703 AGL Outlines Virtualization Scheme for the Software Defined Vehicle.md
 delete mode 100644 sources/tech/20180706 Using Ansible to set up a workstation.md
 delete mode 100644 sources/tech/20180708 simple and elegant free podcast player.md
 delete mode 100644 sources/tech/20180710 The aftermath of the Gentoo GitHub hack.md
 delete mode 100644 sources/tech/20180711 5 open source racing and flying games for Linux.md
 delete mode 100644 sources/tech/20180723 System Snapshot And Restore Utility For Linux.md
 delete mode 100644 sources/tech/20180727 Download Subtitles Via Right Click From File Manager Or Command Line With OpenSubtitlesDownload.py.md
 delete mode 100644 sources/tech/20180731 What-s in a container image- Meeting the legal challenges.md
 delete mode 100644 sources/tech/20180801 Getting started with Standard Notes for encrypted note-taking.md
 delete mode 100644 sources/tech/20180801 Hiri is a Linux Email Client Exclusively Created for Microsoft
Exchange.md delete mode 100644 sources/tech/20180801 Migrating Perl 5 code to Perl 6.md delete mode 100644 sources/tech/20180802 Walkthrough On How To Use GNOME Boxes.md delete mode 100644 sources/tech/20180803 How to use Fedora Server to create a router - gateway.md delete mode 100644 sources/tech/20180806 How ProPublica Illinois uses GNU Make to load 1.4GB of data every day.md delete mode 100644 sources/tech/20180806 Recreate Famous Data Decryption Effect Seen On Sneakers Movie.md delete mode 100644 sources/tech/20180806 Use Gstreamer and Python to rip CDs.md delete mode 100644 sources/tech/20180809 Getting started with Postfix, an open source mail transfer agent.md delete mode 100644 sources/tech/20180810 Strawberry- Quality sound, open source music player.md delete mode 100644 sources/tech/20180815 Happy birthday, GNOME- 6 reasons to love this Linux desktop.md delete mode 100644 sources/tech/20180816 Designing your garden with Edraw Max - FOSS adventures.md delete mode 100644 sources/tech/20180816 Garbage collection in Perl 6.md delete mode 100644 sources/tech/20180817 AryaLinux- A Distribution and a Platform.md delete mode 100644 sources/tech/20180817 Cloudgizer- An introduction to a new open source web development tool.md delete mode 100644 sources/tech/20180821 How I recorded user behaviour on my competitor-s websites.md delete mode 100644 sources/tech/20180822 9 flowchart and diagramming tools for Linux.md delete mode 100644 sources/tech/20180824 Add free books to your eReader- Formatting tips.md delete mode 100644 sources/tech/20180828 Orion Is A QML - C-- Twitch Desktop Client With VODs And Chat Support.md delete mode 100644 sources/tech/20180829 4 open source monitoring tools.md delete mode 100644 sources/tech/20180829 Containers in Perl 6.md delete mode 100644 sources/tech/20180830 A quick guide to DNF for yum users.md delete mode 100644 sources/tech/20180830 How to scale your website across all mobile devices.md delete mode 100644 sources/tech/20180912 
How subroutine signatures work in Perl 6.md delete mode 100644 sources/tech/20180914 Freespire Linux- A Great Desktop for the Open Source Purist.md delete mode 100644 sources/tech/20180918 Cozy Is A Nice Linux Audiobook Player For DRM-Free Audio Files.md delete mode 100644 sources/tech/20180919 Host your own cloud with Raspberry Pi NAS.md delete mode 100644 sources/tech/20180919 Streama - Setup Your Own Streaming Media Server In Minutes.md delete mode 100644 sources/tech/20180920 Distributed tracing in a microservices world.md delete mode 100644 sources/tech/20180920 Record Screen in Ubuntu Linux With Kazam -Beginner-s Guide.md delete mode 100644 sources/tech/20180923 Gunpoint is a Delight for Stealth Game Fans.md delete mode 100644 sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md delete mode 100644 sources/tech/20180925 Taking the Audiophile Linux distro for a spin.md delete mode 100644 sources/tech/20180929 Use Cozy to Play Audiobooks in Linux.md delete mode 100644 sources/tech/20181003 Manage NTP with Chrony.md delete mode 100644 sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md delete mode 100644 sources/tech/20181005 How to use Kolibri to access educational material offline.md delete mode 100644 sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md delete mode 100644 sources/tech/20181015 An introduction to Ansible Operators in Kubernetes.md delete mode 100644 sources/tech/20181016 piwheels- Speedy Python package installation for the Raspberry Pi.md delete mode 100644 sources/tech/20181017 Automating upstream releases with release-bot.md delete mode 100644 sources/tech/20181018 4 open source alternatives to Microsoft Access.md delete mode 100644 sources/tech/20181018 TimelineJS- An interactive, JavaScript timeline building tool.md delete mode 100644 sources/tech/20181023 How to Check HP iLO Firmware Version from Linux Command Line.md delete mode 100644 sources/tech/20181031 Working with 
data streams on the Linux command line.md delete mode 100644 sources/tech/20181101 Getting started with OKD on your Linux desktop.md delete mode 100644 sources/tech/20181105 How to manage storage on Linux with LVM.md delete mode 100644 sources/tech/20181106 How To Check The List Of Packages Installed From Particular Repository.md delete mode 100644 sources/tech/20181111 Some notes on running new software in production.md delete mode 100644 sources/tech/20181112 Behind the scenes with Linux containers.md delete mode 100644 sources/tech/20181115 11 Things To Do After Installing elementary OS 5 Juno.md delete mode 100644 sources/tech/20181118 An example of how C-- destructors are useful in Envoy.md delete mode 100644 sources/tech/20181122 Getting started with Jenkins X.md delete mode 100644 sources/tech/20181127 Bio-Linux- A stable, portable scientific research Linux distribution.md delete mode 100644 sources/tech/20181128 Building custom documentation workflows with Sphinx.md delete mode 100644 sources/tech/20181128 How to test your network with PerfSONAR.md delete mode 100644 sources/tech/20181129 The Top Command Tutorial With Examples For Beginners.md delete mode 100644 sources/tech/20181203 ANGRYsearch - Quick Search GUI Tool for Linux.md delete mode 100644 sources/tech/20181206 How to view XML files in a web browser.md delete mode 100644 sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md delete mode 100644 sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md delete mode 100644 sources/tech/20181209 How do you document a tech project with comics.md delete mode 100644 sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md delete mode 100644 sources/tech/20181214 Tips for using Flood Element for performance testing.md delete mode 100644 sources/tech/20181215 New talk- High Reliability Infrastructure Migrations.md delete mode 100644 sources/tech/20181217 6 tips and tricks for 
using KeePassX to secure your passwords.md delete mode 100644 sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md delete mode 100644 sources/tech/20181221 Large files with Git- LFS and git-annex.md delete mode 100644 sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md delete mode 100644 sources/tech/20181226 -Review- Polo File Manager in Linux.md delete mode 100644 sources/tech/20181228 The office coffee model of concurrent garbage collection.md delete mode 100644 sources/tech/20181229 Some nonparametric statistics math.md diff --git a/sources/news/20200117 Fedora CoreOS out of preview.md b/sources/news/20200117 Fedora CoreOS out of preview.md deleted file mode 100644 index d7a1393cde..0000000000 --- a/sources/news/20200117 Fedora CoreOS out of preview.md +++ /dev/null @@ -1,108 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Fedora CoreOS out of preview) -[#]: via: (https://fedoramagazine.org/fedora-coreos-out-of-preview/) -[#]: author: (bgilbert https://fedoramagazine.org/author/bgilbert/) - -Fedora CoreOS out of preview -====== - -![The Fedora CoreOS logo on a gray background.][1] - -The Fedora CoreOS team is pleased to announce that Fedora CoreOS is now [available for general use][2]. - -Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It’s the successor to both [Fedora Atomic Host][3] and [CoreOS Container Linux][4] and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.  For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][5]. 
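As a concrete sketch of that provisioning model (a hypothetical example, not part of the original announcement): a minimal Fedora CoreOS Config (FCC) that authorizes an SSH key for the default `core` user, following the FCC v1.0.0 format current at release time, might look like this:

```yaml
# Hypothetical minimal FCC; the variant/version header and the passwd.users
# schema follow the FCC v1.0.0 spec. Substitute your own public key.
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3Nza... user@example.com
```

An FCC like this is transpiled to an Ignition config with the Fedora CoreOS Config Transpiler before boot (the exact invocation varies by fcct version, e.g. `fcct --pretty --strict < example.fcc > example.ign`), and the resulting Ignition file is supplied to the machine at first boot.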
- -Some highlights of the current Fedora CoreOS release: - - * [Automatic updates][6], with staged deployments and phased rollouts - * Built from Fedora 31, featuring: - * Linux 5.4 - * systemd 243 - * Ignition 2.1 - * OCI and Docker Container support via Podman 1.7 and Moby 18.09 - * cgroups v1 enabled by default for broader compatibility; cgroups v2 available via configuration - - - -Fedora CoreOS is available on a variety of platforms: - - * Bare metal, QEMU, OpenStack, and VMware - * Images available in all public AWS regions - * Downloadable cloud images for Alibaba, AWS, Azure, and GCP - * Can run live from RAM via ISO and PXE (netboot) images - - - -Fedora CoreOS is under active development.  Planned future enhancements include: - - * Addition of the _next_ release stream for extended testing of upcoming Fedora releases. - * Support for additional cloud and virtualization platforms, and processor architectures other than _x86_64_. - * Closer integration with Kubernetes distributions, including [OKD][7]. - * [Aggregate statistics collection][8]. - * Additional [documentation][9]. - - - -### Where do I get it? - -To try out the new release, head over to the [download page][10] to get OS images or cloud image IDs.  Then use the [quick start guide][11] to get a machine running quickly. - -### How do I get involved? - -It’s easy!  You can report bugs and missing features to the [issue tracker][12]. You can also discuss Fedora CoreOS in [Fedora Discourse][13], the [development mailing list][14], in _#fedora-coreos_ on Freenode, or at our [weekly IRC meetings][15]. - -### Are there stability guarantees? - -In general, the Fedora Project does not make any guarantees around stability.  While Fedora CoreOS strives for a high level of stability, this can be challenging to achieve in the rapidly evolving Linux and container ecosystems.  
We’ve found that the incremental, exploratory, forward-looking development required for Fedora CoreOS — which is also a cornerstone of the Fedora Project as a whole — is difficult to reconcile with the iron-clad stability guarantee that ideally exists when automatically updating systems. - -We’ll continue to do our best not to break existing systems over time, and to give users the tools to manage the impact of any regressions.  Nevertheless, automatic updates may produce regressions or breaking changes for some use cases. You should make your own decisions about where and how to run Fedora CoreOS based on your risk tolerance, operational needs, and experience with the OS.  We will continue to announce any major planned or unplanned breakage to the [coreos-status mailing list][16], along with recommended mitigations. - -### How do I migrate from CoreOS Container Linux? - -Container Linux machines cannot be migrated in place to Fedora CoreOS.  We recommend [writing a new Fedora CoreOS Config][11] to provision Fedora CoreOS machines.  Fedora CoreOS Configs are similar to Container Linux Configs, and must be passed through the Fedora CoreOS Config Transpiler to produce an Ignition config for provisioning a Fedora CoreOS machine. - -Whether you’re currently provisioning your Container Linux machines using a Container Linux Config, handwritten Ignition config, or cloud-config, you’ll need to adjust your configs for differences between Container Linux and Fedora CoreOS.  For example, on Fedora CoreOS network configuration is performed with [NetworkManager key files][17] instead of _systemd-networkd_, and time synchronization is performed by _chrony_ rather than _systemd-timesyncd_.  Initial migration documentation will be [available soon][9] and a skeleton list of differences between the two OSes is available in [this issue][18]. - -CoreOS Container Linux will be maintained for a few more months, and then will be declared end-of-life.  
We’ll announce the exact end-of-life date later this month. - -### How do I migrate from Fedora Atomic Host? - -Fedora Atomic Host has already reached end-of-life, and you should migrate to Fedora CoreOS as soon as possible.  We do not recommend in-place migration of Atomic Host machines to Fedora CoreOS. Instead, we recommend [writing a Fedora CoreOS Config][11] and using it to provision new Fedora CoreOS machines.  As with CoreOS Container Linux, you’ll need to adjust your existing cloud-configs for differences between Fedora Atomic Host and Fedora CoreOS. - -Welcome to Fedora CoreOS.  Deploy it, launch your apps, and let us know what you think! - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/fedora-coreos-out-of-preview/ - -作者:[bgilbert][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/bgilbert/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/introducing-fedora-coreos-816x345.png -[2]: https://getfedora.org/coreos/ -[3]: https://www.projectatomic.io/ -[4]: https://coreos.com/os/docs/latest/ -[5]: https://fedoramagazine.org/introducing-fedora-coreos/ -[6]: https://docs.fedoraproject.org/en-US/fedora-coreos/auto-updates/ -[7]: https://www.okd.io/ -[8]: https://github.com/coreos/fedora-coreos-pinger/ -[9]: https://docs.fedoraproject.org/en-US/fedora-coreos/ -[10]: https://getfedora.org/coreos/download/ -[11]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/ -[12]: https://github.com/coreos/fedora-coreos-tracker/issues -[13]: https://discussion.fedoraproject.org/c/server/coreos -[14]: https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/ -[15]: https://github.com/coreos/fedora-coreos-tracker#meetings -[16]: 
https://lists.fedoraproject.org/archives/list/coreos-status@lists.fedoraproject.org/ -[17]: https://developer.gnome.org/NetworkManager/stable/nm-settings-keyfile.html -[18]: https://github.com/coreos/fedora-coreos-tracker/issues/159 diff --git a/sources/news/20200119 Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning.md b/sources/news/20200119 Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning.md deleted file mode 100644 index 90b5c18537..0000000000 --- a/sources/news/20200119 Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning.md +++ /dev/null @@ -1,80 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning) -[#]: via: (https://opensource.com/article/20/1/news-january-19) -[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) - -Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning -====== -Catch up on the biggest open source headlines from the past two weeks. -![Weekly news roundup with TV][1] - -In this edition of our open source news roundup, we take a look at machine learning tools from Uber and Lyft, open source software to fight cancer, saving students money with open textbooks, and more! - -### Uber and Lyft release machine learning tools - -It's hard to find a growing company these days that doesn't take advantage of machine learning to streamline its business and make sense of the data it amasses. Ridesharing companies, which gather massive amounts of data, have enthusiastically embraced the promise of machine learning. Two of the biggest players in the ridesharing sector have made some of their machine learning code open source.
- -Uber recently [released the source code][2] for its Manifold tool for debugging machine learning models. According to Uber software engineer Lezhi Li, Manifold will "benefit the machine learning (ML) community by providing interpretability and debuggability for ML workflows." If you're interested, you can browse Manifold's source code [on GitHub][3]. - -Lyft has also upped its open source stakes by releasing Flyte. Flyte, whose source code is [available on GitHub][4], manages machine learning pipelines and "is an essential backbone to (Lyft's) operations." Lyft has been using it to train AI models and process data "across pricing, logistics, mapping, and autonomous projects." - -### Software to detect cancer cells - -In a study recently published in _Nature Biotechnology_, a team of medical researchers from around the world announced [new open source software][5] that "could make it easier to create personalised cancer treatment plans." - -The software assesses "the proportion of cancerous cells in a tumour sample" and can help clinicians "judge the accuracy of computer predictions and establish benchmarks" across tumor samples. Maxime Tarabichi, one of the lead authors of [the study][6], said that the software "provides a foundation which will hopefully become a much-needed, unbiased, gold-standard benchmarking tool for assessing models that aim to characterise a tumour’s genetic diversity." - -### University of Regina saves students over $1 million with open textbooks - -If rising tuition costs weren't enough to send university students spiralling into debt, the high prices of textbooks can deepen the crater in their bank accounts. To help ease that financial pain, many universities turn to open textbooks. One of those schools is the University of Regina. By offering open textbooks, the university [expects to save a huge amount for students][7] over the next five years. 
- -The expected savings are in the region of $1.5 million (CAD), or around $1.1 million USD (at the time of writing). The textbooks, according to a report by radio station CKOM, are "provided free for (students) and they can be printed off or used as e-books." Students aren't getting inferior-quality textbooks, though. Nilgun Onder of the University of Regina said that the "textbooks and other open education resources the university published are all peer-reviewed resources. In other words, they are reliable and credible." - -### Tesla adopts Coreboot - -Much of the software driving (no pun intended) the electric vehicles made by Tesla Motors is open source. So it's not surprising to learn that the company has [adopted Coreboot][8] "as part of their electric vehicle computer systems." - -Coreboot was developed as a replacement for proprietary BIOS and is used to boot hardware and the Linux kernel. The code, which is in [Tesla's GitHub repository][9], "is from Tesla Motors and Samsung," according to Phoronix. Samsung, in case you're wondering, makes the chip on which Tesla's self-driving software runs. - -#### In other news - - * [Arduino launches new modular platform for IoT development][10] - * [SUSE and Karunya Institute of Technology and Sciences collaborate to enhance cloud and open source learning][11] - * [How open-source code could help us survive natural disasters][12] - * [The hottest thing in robotics is an open source project you've never heard of][13] - - - -_Thanks, as always, to Opensource.com staff members and moderators for their help this week. 
Make sure to check out [our event calendar][14], to see what's happening next week in open source._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/1/news-january-19 - -作者:[Scott Nesbitt][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/scottnesbitt -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV) -[2]: https://venturebeat.com/2020/01/07/uber-open-sources-manifold-a-visual-tool-for-debugging-ai-models/ -[3]: https://github.com/uber/manifold -[4]: https://github.com/lyft/flyte -[5]: https://www.cbronline.com/industry/healthcare/open-source-cancer-cells/ -[6]: https://www.nature.com/articles/s41587-019-0364-z -[7]: https://www.ckom.com/2020/01/07/open-source-program-to-save-u-of-r-students-1-5m/ -[8]: https://www.phoronix.com/scan.php?page=news_item&px=Tesla-Uses-Coreboot -[9]: https://github.com/teslamotors/coreboot -[10]: https://techcrunch.com/2020/01/07/arduino-launches-a-new-modular-platform-for-iot-development/ -[11]: https://www.crn.in/news/suse-and-karunya-institute-of-technology-and-sciences-collaborate-to-enhance-cloud-and-open-source-learning/ -[12]: https://qz.com/1784867/open-source-data-could-help-save-lives-during-natural-disasters/ -[13]: https://www.techrepublic.com/article/the-hottest-thing-in-robotics-is-an-open-source-project-youve-never-heard-of/ -[14]: https://opensource.com/resources/conferences-and-events-monthly diff --git a/sources/news/20200125 What 2020 brings for the developer, and more industry trends.md b/sources/news/20200125 What 2020 brings for the developer, and more industry trends.md deleted file mode 100644 index 
e22735d21c..0000000000 --- a/sources/news/20200125 What 2020 brings for the developer, and more industry trends.md +++ /dev/null @@ -1,61 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (What 2020 brings for the developer, and more industry trends) -[#]: via: (https://opensource.com/article/20/1/hybrid-developer-future-industry-trends) -[#]: author: (Tim Hildred https://opensource.com/users/thildred) - -What 2020 brings for the developer, and more industry trends -====== -A weekly look at open source community and industry trends. -![Person standing in front of a giant computer screen with numbers, data][1] - -As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update. - -## [How developers will work in 2020][2] - -> Developers have been spending an enormous amount of time on everything *except* making software that solves problems. ‘DevOps’ has transmogrified from ‘developers releasing software’ into ‘developers building ever more complex infrastructure atop Kubernetes’ and ‘developers reinventing their software as distributed stateless functions.’ In 2020, ‘serverless’ will mature. Handle state. Handle data storage without requiring devs to learn yet-another-proprietary-database-service. Learning new stuff is fun, but shipping is even better, and we’ll finally see systems and services that support that. - -**The impact:** A lot of forces are converging to give developers superpowers. 
There are ever more open source building blocks in place; thousands of geniuses are collaborating to make developer workflows more fun and efficient, and artificial intelligences are being brought to bear solving the types of problems a developer might face. On the one hand, there is clear leverage to giving developers superpowers: if they can make magic with software they'll be able to make even bigger magic with all this help. On the other hand, imagine if teachers had the same level of investment and support. Makes ya wonder don't it? - -## [2020 forecast: Cloud-y with a chance of hybrid][3] - -> Behind this growth is an array of new themes and strategies that are pushing cloud further up business agendas the world over. With ‘emerging’ technologies, such as AI and machine learning, containers and functions, and even more flexibility available with hybrid cloud solutions being provided by the major providers, it’s no wonder cloud is set to take centre stage. - -**The impact:** Hybrid cloud finally has the same level of flesh that public cloud and on-premises have. Over the course of 2019 especially, the competing visions offered for what it meant to be hybrid formed a composite that drove home why someone would want it. At the same time, more and more of the technology pieces that make hybrid viable are in place and maturing. 2019 was the year that people truly "got" hybrid. 2020 will be the year that people start to take advantage of it. - -## [The no-code delusion][4] - -> Increasingly popular in the last couple of years, I think 2020 is going to be the year of “no code”: the movement that says you can write business logic and even entire applications without having the training of a software developer. I empathise with people doing this, and I think some of the “no code” tools are great. But I also think it's wrong at heart. - -**The impact:** I've heard many devs say it over many years: "software development is hard." 
It would be a mistake to interpret that as "all software development is equally hard." What I've always found hard about learning to code is trying to think in a way that a computer will understand. With or without code, making computers do complex things will always require a different kind of thinking. - -## [All things Java][5] - -> The open, multi-vendor model has been a major strength—it’s very hard for any single vendor to pioneer a market for a sustained period of time—and taking different perspectives from diverse industries has been a key strength of the [evolution of Java][6]. Choosing to open source Java in 2006 was also a decision that only worked to strengthen the Java ecosystem, as it allowed Sun Microsystems and later Oracle to share the responsibility of maintaining and evolving Java with many other organizations and individuals. - -**The impact:** The things that move quickly in technology are the things that can be thrown away. When you know you're going to keep something for a long time, you're likely to make different choices about what to prioritize when building it. Disposable and long-lived both have their places, and the Java community made enough good decisions over the years that the language itself can have a foot in both camps. 
- -_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/1/hybrid-developer-future-industry-trends - -作者:[Tim Hildred][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/thildred -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data) -[2]: https://thenextweb.com/readme/2020/01/15/how-developers-will-work-in-2020/ -[3]: https://www.itproportal.com/features/2020-forecast-cloud-y-with-a-chance-of-hybrid/ -[4]: https://www.alexhudson.com/2020/01/13/the-no-code-delusion/ -[5]: https://appdevelopermagazine.com/all-things-java/ -[6]: https://appdevelopermagazine.com/top-10-developer-technologies-in-2019/ diff --git a/sources/talk/20170717 The Ultimate Guide to JavaScript Fatigue- Realities of our industry.md b/sources/talk/20170717 The Ultimate Guide to JavaScript Fatigue- Realities of our industry.md deleted file mode 100644 index 923d4618a9..0000000000 --- a/sources/talk/20170717 The Ultimate Guide to JavaScript Fatigue- Realities of our industry.md +++ /dev/null @@ -1,221 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (The Ultimate Guide to JavaScript Fatigue: Realities of our industry) -[#]: via: (https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html) -[#]: author: (Lucas Fernandes Da Costa https://lucasfcosta.com) - -The Ultimate Guide to JavaScript Fatigue: 
Realities of our industry -====== - -**Complaining about JS Fatigue is just like complaining about the fact that humanity has created too many tools to solve the problems we have**, from email to airplanes and spaceships. - -Last week I gave a talk on this very subject at the NebraskaJS 2017 Conference and I got so much positive feedback that I thought the talk should also become a blog post, in order to reach more people and help them deal with JS Fatigue and understand the realities of our industry. **My goal with this post is to change the way you think about software engineering in general and help you in any areas you might work on**. - -One of the things that inspired me to write this blog post, and that totally changed my life, is [this great post by Patrick McKenzie, called “Don’t Call Yourself a Programmer and other Career Advice”][1]. **I highly recommend you read that**. Most of this blog post is advice based on what Patrick wrote in that post, applied to the JavaScript ecosystem, plus a few more thoughts I’ve developed over the last few years working in the tech industry. - -This first section is gonna be a bit philosophical, but I swear it will be worth reading. - -### Realities of Our Industry 101 - -Just like Patrick did in [his post][1], let’s start with the most basic and essential truth about our industry: - -Software solves business problems - -This is it. **Software does not exist to please us as programmers** and let us write beautiful code. Nor does it exist to create jobs for people in the tech industry. **Actually, it exists to kill as many jobs as possible, including ours**, and this is why basic income will become much more important in the next few years, but that’s a whole other subject.
- -I’m sorry to say it, but the reason things are that way is that there are only two things that matter in software engineering (and in any other industry): - -**Cost versus Revenue** - -**The more you decrease cost and increase revenue, the more valuable you are**, and one of the most common ways of decreasing cost and increasing revenue is replacing human beings with machines, which are more effective and usually cost less in the long run. - -You are not paid to write code - -**Technology is not a goal.** Nobody cares about which programming language you are using, nobody cares about which frameworks your team has chosen, nobody cares about how elegant your data structures are and nobody cares about how good your code is. **The only thing anybody cares about is how much your software costs and how much revenue it generates**. - -Writing beautiful code does not matter to your clients. We write beautiful code because it makes us more productive in the long run, and this decreases cost and increases revenue. - -The whole reason why we try not to write bugs is not that we value correctness, but that **our clients** value correctness. If you have ever seen a bug become a feature, you know what I’m talking about. That bug exists, but it should not be fixed. That happens because our goal is not to fix bugs; our goal is to generate revenue. If our bugs make clients happy then they increase revenue, and therefore we are accomplishing our goals. - -Reusable space rockets, self-driving cars, robots, artificial intelligence: these things do not exist just because someone thought it would be cool to create them. They exist because there are business interests behind them. And I’m not saying the people behind them just want money; I’m sure they also think that stuff is cool. But the truth is that if they were not economically viable, or did not have the potential to become so, they would not exist.
- -Perhaps I should not even call this section “Realities of Our Industry 101”; maybe I should just call it “Realities of Capitalism 101”. - -And given that our only goal is to increase revenue and decrease cost, I think we as programmers should be paying more attention to requirements and design, thinking for ourselves, and participating more actively in business decisions, which is why it is extremely important to know the problem domain we are working in. How many times have you found yourself trying to think through edge cases that had not been considered by your managers or business people? - -In 1975, Boehm published research in which he found that about 64% of all errors in the software he was studying were caused by design, while only 36% were coding errors. Another study, [“Higher Order Software—A Methodology for Defining Software”][2], states that **in the NASA Apollo project, about 73% of all errors were design errors**. - -The whole reason why design and requirements exist is that they define what problems we’re going to solve, and solving problems is what generates revenue. - -> Without requirements or design, programming is the art of adding bugs to an empty text file. -> -> * Louis Srygley -> - - -This same principle also applies to the tools we’ve got available in the JavaScript ecosystem. Babel, webpack, React, Redux, Mocha, Chai, TypeScript: all of them exist to solve a problem, and we gotta understand which problem they are trying to solve. We need to think carefully about when each of them is needed; otherwise, we will end up having JS Fatigue because: - -JS Fatigue happens when people use tools they don't need to solve problems they don't have. - -As Donald Knuth once said: “Premature optimization is the root of all evil”.
Remember that software only exists to solve business problems, and most software out there is just boring; it does not have any high-scalability or high-performance constraints. Focus on solving business problems, focus on decreasing cost and generating revenue, because this is all that matters. Optimize when you need to; otherwise you will probably be adding unnecessary complexity to your software, which increases cost without generating enough revenue to justify it. - -This is why I think we should apply [Test Driven Development][3] principles to everything we do in our job. And by saying this I’m not just talking about testing. **I’m talking about waiting for problems to appear before solving them. This is what TDD is all about**. As Kent Beck himself says: “TDD reduces fear”, because it guides your steps and allows you to take small steps towards solving your problems. One problem at a time. By doing the same thing when it comes to deciding when to adopt new technologies, we will also reduce fear. - -Solving one problem at a time also decreases [Analysis Paralysis][4], which is basically what happens when you open Netflix and spend three hours agonizing over the optimal choice instead of actually watching something. By solving one problem at a time we reduce the scope of our decisions; by reducing the scope of our decisions we have fewer choices to make; and by having fewer choices to make we decrease Analysis Paralysis. - -Have you ever thought about how much easier it was to decide what you were going to watch when there were only a few TV channels available? Or how much easier it was to decide which game you were going to play when you had only a few cartridges at home? - -### But what about JavaScript? - -At the time I’m writing this post, NPM has 489,989 packages, and tomorrow approximately 515 new ones are going to be published. - -And the packages we use and complain about have a history behind them that we must comprehend in order to understand why we need them.
**They are all trying to solve problems.** - -Babel, Dart, CoffeeScript and other transpilers come from our need to write code other than plain JavaScript and still make it runnable in our browsers. Babel even lets us write next-generation JavaScript and make sure it will work on older browsers, which has always been a great problem given the inconsistencies and differing levels of compliance with the ECMA Specification between browsers. Even though the ECMA spec is becoming more and more solid these days, we still need Babel. And if you want to read more about Babel’s history, I highly recommend [this excellent post by Henry Zhu][5]. - -Module bundlers such as Webpack and Browserify also have their reason to exist. If you remember, not so long ago we used to suffer a lot with piles of `script` tags and making them work together. They polluted the global namespace, and it was reasonably hard to make them work together when one depended on another. To solve this, [`Require.js`][6] was created, but it still had its problems: it was not that straightforward, and its syntax made it prone to other problems, as you can see [in this blog post][7]. Then Node.js came with `CommonJS` imports, which were synchronous, simple and clean, but we still needed a way to make that work in our browsers, and this is why we needed Webpack and Browserify. - -And Webpack itself actually solves more problems than that, by allowing us to deal with CSS, images and many other resources as if they were JavaScript dependencies. - -Front-end frameworks are a bit more complicated, but the reason they exist is to reduce the cognitive load when we write code, so that we don’t need to worry about manipulating the DOM ourselves or dealing with messy browser APIs (another problem jQuery came to solve), which is not only error-prone but also unproductive. - -This is what we have been doing this whole time in computer science.
We use low-level abstractions and build even more abstractions on top of them. The more we worry about describing what our software should do rather than hand-wiring how it works, the more productive we are. - -But all those tools have something in common: **they exist because the web platform moves too fast**. Nowadays we’re using web technology everywhere: in web browsers, in desktop applications, in phone applications, or even in watch applications. - -This evolution also creates problems we need to solve. PWAs, for example, do not exist only because they’re cool and we programmers have fun writing them. Remember the first section of this post: **PWAs exist because they create business value**. - -And usually standards are not created fast enough, so we need to create our own solutions to these things, which is why it is great to have such a vibrant and creative community with us. We’re solving problems all the time and **we are allowing natural selection to do its job**. - -The tools that suit us best thrive, get more contributors and develop more quickly; sometimes other tools end up incorporating the good ideas from the ones that thrive and become even more popular than them. This is how we evolve. - -By having more tools we also have more choices. If you remember the UNIX philosophy, it states that we should aim at creating programs that do one thing and do it well. - -We can clearly see this happening in the JS testing environment, for example, where we have Mocha for running tests and Chai for doing assertions, while in Java, JUnit tries to do all these things. This means that if we have a problem with one of them, or if we find another one that suits us better, we can simply replace that small part and still keep the advantages of the others. - -The UNIX philosophy also states that we should write programs that work together. And this is exactly what we are doing! Take a look at Babel, Webpack and React, for example.
They work very well together but we still do not need one to use the other. In the testing environment, for example, if we’re using Mocha and Chai all of a sudden we can just install Karma and run those same tests in multiple environments. - -### How to Deal With It - -My first advice for anyone suffering from JS Fatigue would definitely be to stay aware that **you don’t need to know everything**. Trying to learn it all at once, even when we don’t have to do so, only increases the feeling of fatigue. Go deep in areas that you love and for which you feel an inner motivation to study and adopt a lazy approach when it comes to the other ones. I’m not saying that you should be lazy, I’m just saying that you can learn those only when needed. Whenever you face a problem that requires you to use a certain technology to solve it, go learn. - -Another important thing to say is that **you should start from the beginning**. Make sure you have learned enough about JavaScript itself before using any JavaScript frameworks. This is the only way you will be able to understand them and bend them to your will, otherwise, whenever you face an error you have never seen before you won’t know which steps to take in order to solve it. Learning core web technologies such as CSS, HTML5, JavaScript and also computer science fundamentals or even how the HTTP protocol works will help you master any other technologies a lot more quickly. - -But please, don’t get too attached to that. Sometimes you gotta risk yourself and start doing things on your own. As Sacha Greif has written in [this blog post][8], spending too much time learning the fundamentals is just like trying to learn how to swim by studying fluid dynamics. Sometimes you just gotta jump into the pool and try to swim by yourself. - -And please, don’t get too attached to a single technology. All of the things we have available nowadays have already been invented in the past. 
Of course, they have different features and a brand new name, but, in their essence, they are all the same. - -If you look at NPM, it is nothing new; we already had Maven Central and RubyGems quite a long time ago. - -In order to transpile your code, Babel applies the very same principles and theory as some of the oldest and most well-known compilers, such as GCC. - -Even JSX is not a new idea: E4X (ECMAScript for XML) already existed more than 10 years ago. - -Now you might ask: “what about Gulp, Grunt and NPM Scripts?” Well, I’m sorry, but `make` has been solving all of those problems since 1976. And actually, a reasonable number of JavaScript projects still use it, such as Chai.js, for example. But we do not do that because we are hipsters who like vintage stuff. We use `make` because it solves our problems, and this is what you should aim at doing, as we’ve discussed before. - -If you really want to understand a certain technology and be able to solve any problems you might face, please, dig deep. One of the most decisive factors in success is curiosity, so **dig deep into the technologies you like**. Try to understand them from the bottom up, and whenever you think something is just “magic”, debunk that myth by exploring the codebase yourself. - -In my opinion, there is no better quote than this one by Richard Feynman when it comes to really learning something: - -> What I cannot create, I do not understand - -And just below this phrase, [on the same blackboard, Richard also wrote][9]: - -> Know how to solve every problem that has been solved - -Isn’t this just amazing? - -When Richard said that, he was talking about being able to take any theoretical result and re-derive it, but I think the exact same principle can be applied to software engineering. The tools that solve our problems have already been invented, they already exist, so we should be able to get to them all by ourselves.
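In that spirit, here is a from-scratch sketch of the core of a Redux-style store. It is a toy version for illustration, not the real library's implementation:

```javascript
// A minimal Redux-style store: the whole core idea in about twenty lines.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' }); // derive the initial state
  const listeners = [];

  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the reducer is the only way state changes
      listeners.forEach((listener) => listener());
      return action;
    },
    subscribe(listener) {
      listeners.push(listener);
      return () => listeners.splice(listeners.indexOf(listener), 1); // unsubscribe
    },
  };
}

// Usage: a counter reducer, the canonical first example.
const counter = (state = 0, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;

const store = createStore(counter);
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState()); // 1
```

Almost everything else in Redux is refinement and tooling around this dispatch loop, which is why re-deriving it yourself is such an effective way to de-mystify the library.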
- -This is the very reason I love [some of the videos available on Egghead.io][10], in which Dan Abramov explains how to implement certain features that exist in Redux from scratch, or [blog posts that teach you how to build your own JSX renderer][11]. - -So why not try implementing these things yourself, or go to GitHub and read their codebases in order to understand how they work? I’m sure you will find a lot of useful knowledge out there. Comments and tutorials might lie or be incorrect sometimes; the code cannot. - -Another thing we have talked about a lot in this post is that **you should not get ahead of yourself**. Follow a TDD approach and solve one problem at a time. You are paid to increase revenue and decrease cost, and you do this by solving problems; this is the reason why software exists. - -And since we love comparing our role to those of civil engineering, let’s do a quick comparison between software development and civil engineering, just as [Sam Newman does in his brilliant book, “Building Microservices”][12]. - -We love calling ourselves “engineers” or “architects”, but is that term really correct? We have been developing software for what we know as computers for less than a hundred years, while the Colosseum, for example, has stood for about two thousand years. - -When was the last time you saw a bridge fall, and when was the last time your telephone or your browser crashed? - -In order to explain this, I’ll use an example I love. - -This is the beautiful and awesome city of Barcelona: - -![The City of Barcelona][13] - -When we look at it this way and from this distance, it just looks like any other city in the world, but when we look at it from above, this is how Barcelona looks: - -![Barcelona from above][14] - -As you can see, every block is the same size and all of them are very organized. If you’ve ever been to Barcelona, you will also know how easy it is to move through the city and how well it works.
- -But the people who planned Barcelona could not predict what it was going to look like two or three hundred years later. In cities, people come in and people move through them all the time, so what the planners had to do was let the city grow organically and adapt as time goes by. They had to be prepared for changes. - -This very same thing happens to our software. It evolves quickly, refactors are often needed, and requirements change more frequently than we would like them to. - -So, instead of acting like a Software Engineer, act as a Town Planner. Let your software grow organically and adapt as needed. Solve problems as they come, but make sure everything still has its place. - -Doing this with software is even easier than doing it in cities, due to the fact that **software is flexible; civil engineering is not**. **In the software world, our build time is compile time**. In Barcelona we cannot simply destroy buildings to make space for new ones; in software we can do that a lot more easily. We can break things all the time, we can make experiments, because we can rebuild as many times as we want, it usually takes seconds, and we spend a lot more time thinking than building. Our job is purely intellectual. - -So **act like a town planner: let your software grow and adapt as needed**. - -By doing this you will also have better abstractions and know when it’s the right time to adopt them. - -As Sam Koblenski says: - -> Abstractions only work well in the right context, and the right context develops as the system develops. - -Nowadays something I see very often is people looking for boilerplates when they’re trying to learn a new technology, but, in my opinion, **you should avoid boilerplates when you’re starting out**.
Of course boilerplates and generators are useful if you are already experienced, but they take a lot of control out of your hands; you won’t learn how to set up a project, and you won’t understand exactly where each piece of the software you are using fits. - -When you feel like you are struggling more than necessary to get something simple done, it might be the right time to look for an easier way to do it. In our role **you should strive to be lazy**; you should work to not work. By doing that you have more free time to do other things, and this decreases cost and increases revenue, so that’s another way of accomplishing your goal. You should not only work harder, you should work smarter. - -Probably someone has already had the same problem you’re having right now, but if nobody has, it might be your time to shine: build your own solution and help other people. - -But sometimes you will not be able to realize you could be more effective in your tasks until you see someone doing them better. This is why it is so important to **talk to people**. - -By talking to people we share experiences that help each other’s careers, we discover new tools to improve our workflow and, even more important than that, we learn how others solve their problems. This is why I like reading blog posts in which companies explain how they solve their problems. - -Especially in our area, we like to think that Google and StackOverflow can answer all our questions, but we still need to know which questions to ask. I’m sure you have already had a problem you could not find a solution for because you didn’t know exactly what was happening, and therefore didn’t know what the right question to ask was. - -But if I needed to sum up this whole post in a single piece of advice, it would be: - -Solve problems. - -Software is not a magic box, software is not poetry (unfortunately). It exists to solve problems and improve people’s lives. Software exists to push the world forward.
- -**Now it’s your time to go out there and solve problems**. - - --------------------------------------------------------------------------------- - -via: https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html - -作者:[Lucas Fernandes Da Costa][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://lucasfcosta.com -[b]: https://github.com/lujun9972 -[1]: http://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-programmer/ -[2]: http://ieeexplore.ieee.org/document/1702333/ -[3]: https://en.wikipedia.org/wiki/Test_Driven_Development -[4]: https://en.wikipedia.org/wiki/Analysis_paralysis -[5]: https://babeljs.io/blog/2016/12/07/the-state-of-babel -[6]: http://requirejs.org -[7]: https://benmccormick.org/2015/05/28/moving-past-requirejs/ -[8]: https://medium.freecodecamp.org/a-study-plan-to-cure-javascript-fatigue-8ad3a54f2eb1 -[9]: https://www.quora.com/What-did-Richard-Feynman-mean-when-he-said-What-I-cannot-create-I-do-not-understand -[10]: https://egghead.io/lessons/javascript-redux-implementing-store-from-scratch -[11]: https://jasonformat.com/wtf-is-jsx/ -[12]: https://www.barnesandnoble.com/p/building-microservices-sam-newman/1119741399/2677517060476?st=PLA&sid=BNB_DRS_Marketplace+Shopping+Books_00000000&2sid=Google_&sourceId=PLGoP4760&k_clickid=3x4760 -[13]: /assets/barcelona-city.jpeg -[14]: /assets/barcelona-above.jpeg -[15]: https://twitter.com/thewizardlucas diff --git a/sources/talk/20171030 Why I love technical debt.md b/sources/talk/20171030 Why I love technical debt.md deleted file mode 100644 index da071d370a..0000000000 --- a/sources/talk/20171030 Why I love technical debt.md +++ /dev/null @@ -1,69 +0,0 @@ -Why I love technical debt -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory1.png?itok=nbSRovsj) -This is 
not necessarily the title you'd expect for an article, I guess,* but I'm a fan of [technical debt][1]. There are two reasons for this: a Bad Reason and a Good Reason. I'll be upfront about the Bad Reason first, then explain why even that isn't really a reason to love it. I'll then tackle the Good Reason, and you'll nod along in agreement. - -### The Bad Reason I love technical debt - -We'll get this out of the way, then, shall we? The Bad Reason is that, well, there's just lots of it, it's interesting, it keeps me in a job, and it always provides a reason, as a security architect, for me to get involved in** projects that might give me something new to look at. I suppose those aren't all bad things. It can also be a bit depressing, because there's always so much of it, it's not always interesting, and sometimes I need to get involved even when I might have better things to do. - -And what's worse is that it almost always seems to be security-related, and it's always there. That's the bad part. - -Security, we all know, is the piece that so often gets left out, or tacked on at the end, or done in half the time it deserves, or done by people who have half an idea, but don't quite fully grasp it. I should be clear at this point: I'm not saying that this last reason is those people's fault. That people know they need security is fantastic. If we (the security folks) or we (the organization) haven't done a good enough job in making sufficient security resources--whether people, training, or visibility--available to those people who need it, the fact that they're trying is great and something we can work on. Let's call that a positive. Or at least a reason for hope.*** - -### The Good Reason I love technical debt - -Let's get on to the other reason: the legitimate reason. I love technical debt when it's named. - -What does that mean? - -We all get that technical debt is a bad thing. 
It's what happens when you make decisions for pragmatic reasons that are likely to come back and bite you later in a project's lifecycle. Here are a few classic examples that relate to security: - - * Not getting around to applying authentication or authorization controls on APIs that might, at some point, be public. - * Lumping capabilities together so it's difficult to separate out appropriate roles later on. - * Hard-coding roles in ways that don't allow for customisation by people who may use your application in different ways from those you initially considered. - * Hard-coding cipher suites for cryptographic protocols, rather than putting them in a config file where they can be changed or selected later. - - - -There are lots more, of course, but those are just a few that jump out at me and that I've seen over the years. Technical debt means making decisions that will mean more work later on to fix them. And that can't be good, can it? - -There are two words in the preceding paragraphs that should make us happy: they are "decisions" and "pragmatic." Because, in order for something to be named technical debt, I'd argue, it has to have been subject to conscious decision-making, and trade-offs must have been made--hopefully for rational reasons. Those reasons may be many and various--lack of qualified resources; project deadlines; lack of sufficient requirement definition--but if they've been made consciously, then the technical debt can be named, and if technical debt can be named, it can be documented. - -And if it's documented, we're halfway there. As a security guy, I know that I can't force everything that goes out of the door to meet all the requirements I'd like--but the same goes for the high availability gal, the UX team, the performance folks, etc. - -What we need--what we all need--is for documentation to exist about why decisions were made, because when we return to the problem we'll know it was thought about. 
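To see what naming the debt can look like in practice, here is a hypothetical sketch of the cipher-suite example above; the config shape, function name, and suite names are invented for illustration, not taken from any real product:

```javascript
// TECH-DEBT (named and documented): only one cipher suite is supported in this
// release. It lives in config rather than being hard-coded, so adding suites
// later is a config change plus a test, not a refactor.
const config = { ciphers: ['TLS_AES_128_GCM_SHA256'] };

// The set of suites this release has actually been tested with.
const SUPPORTED = new Set(['TLS_AES_128_GCM_SHA256']);

function selectCiphers({ ciphers }) {
  const unknown = ciphers.filter((c) => !SUPPORTED.has(c));
  if (unknown.length > 0) {
    throw new Error(`unsupported cipher suites: ${unknown.join(', ')}`);
  }
  return ciphers;
}

selectCiphers(config); // ok today; future releases only extend SUPPORTED
```

The comment names the trade-off, the config file marks where the flexibility will eventually live, and the validation keeps today's single option honest.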
And, what's more, the recording of that information might even make it into product documentation. "This API is designed to be used in a protected environment and should not be exposed on the public Internet" is a great piece of documentation. It may not be what a customer is looking for, but at least they know how to deploy the product, and, crucially, it's an opportunity for them to come back to the product manager and say, "We'd really like to deploy that particular API in this way. Could you please add this as a feature request?" Product managers like that. Very much.**** - -The best thing, though, is not just that named technical debt is visible technical debt, but that if you encourage your developers to document the decisions in code,***** then there's a decent chance that they'll record some ideas about how this should be done in the future. If you're really lucky, they might even add some hooks in the code to make it easier (an "auth" parameter on the API, which is unused in the current version but will make API compatibility so much simpler in new releases; or a cipher entry in the config file that currently only accepts one option but is at least checked by the code). - -I've been a bit disingenuous, I know, by defining technical debt as named technical debt. But honestly, if it's not named, then you can't know what it is, and until you know what it is, you can't fix it.******* My advice is this: when you're doing a release close-down (or in your weekly standup--EVERY weekly standup), have an agenda item to record technical debt. Name it, document it, be proud, sleep at night. - -* Well, apart from the obvious clickbait reason--for which I'm (a little) sorry. - -** I nearly wrote "poke my nose into." - -*** Work with me here. - -**** If you're a software engineer/coder/hacker, here's a piece of advice: Learn to talk to product managers like real people, and treat them nicely.
They (the better ones, at least) are invaluable allies when you need to prioritize features or have tricky trade-offs to make. - -***** Do this. Just do it. Documentation that isn't at least mirrored in code isn't real documentation.****** - -****** Don't believe me? Talk to developers. "Who reads product documentation?" "Oh, the spec? I skimmed it. A few releases back. I think." "I looked in the header file; couldn't see it there." - -******* Or decide not to fix it, which may also be an entirely appropriate decision. - -This article originally appeared on [Alice, Eve, and Bob - a security blog][2] and is republished with permission. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/10/why-i-love-technical-debt - -作者:[Mike Bursell][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/mikecamel -[1]:https://en.wikipedia.org/wiki/Technical_debt -[2]:https://aliceevebob.wordpress.com/2017/08/29/why-i-love-technical-debt/ diff --git a/sources/talk/20171107 How to Monetize an Open Source Project.md b/sources/talk/20171107 How to Monetize an Open Source Project.md deleted file mode 100644 index ab51006101..0000000000 --- a/sources/talk/20171107 How to Monetize an Open Source Project.md +++ /dev/null @@ -1,86 +0,0 @@ -How to Monetize an Open Source Project -====== - -![](http://www.itprotoday.com/sites/itprotoday.com/files/styles/article_featured_standard/public/ThinkstockPhotos-629994230_0.jpg?itok=5dZ68OTn) -The problem for any small group of developers putting the finishing touches on a commercial open source application is figuring out how to monetize the software in order to keep the bills paid and food on the table. 
Often these small pre-startups will start by deciding which of the recognized open source business models they're going to adapt, whether that be following Red Hat's lead and offering professional services, going the SaaS route, releasing as open core or something else. - -Steven Grandchamp, general manager for MariaDB's North America operations and CEO for Denver-based startup [Drud Tech][1], thinks that might be putting the cart before the horse. With an open source project, the best first move is to get people downloading and using your product for free. - -**Related:** [Demand for Open Source Skills Continues to Grow][2] - -"The number one tangent to monetization in any open source product is adoption, because the key to monetizing an open source product is you flip what I would call the sales funnel upside down," he told ITPro at the recent All Things Open conference in Raleigh, North Carolina. - -In many ways, he said, selling open source solutions is the opposite of marketing traditional proprietary products, where adoption doesn't happen until after a contract is signed. - -**Related:** [Is Raleigh the East Coast's Silicon Valley?][3] - -"In a proprietary software company, you advertise, you market, you make claims about what the product can do, and then you have sales people talk to customers. Maybe you have a free trial or whatever. Maybe you have a small version. Maybe it's time bombed or something like that, but you don't really get to realize the benefit of the product until there's a contract and money changes hands." - -Selling open source solutions is different because of the challenge of selling software that's freely available as a GitHub download. - -"The whole idea is to put the product out there, let people use it, experiment with it, and jump on the chat channels," he said, pointing out that his company Drud has a public chat channel that's open to anybody using their product. 
"A subset of that group is going to raise their hand and go, 'Hey, we need more help. We'd like a tighter relationship with the company. We'd like to know where your road map's going. We'd like to know about customization. We'd like to know if maybe this thing might be on your road map.'" - -Grandchamp knows more than a little about making software pay, from both the proprietary and open source sides of the fence. In the 1980s he served as VP of research and development at Formation Technologies, and became SVP of R&D at John H. Harland after it acquired Formation in the mid-90s. He joined MariaDB in 2016, after serving eight years as CEO at OpenLogic, which was providing commercial support for more than 600 open-source projects at the time it was acquired by Rogue Wave Software. Along the way, there was a two year stint at Microsoft's Redmond campus. - -OpenLogic was where he discovered open source, and his experiences there are key to his approach for monetizing open source projects. - -"When I got to OpenLogic, I was told that we had 300 customers that were each paying $99 a year for access to our tool," he explained. "But the problem was that nobody was renewing the tool. So I called every single customer that I could find and said 'did you like the tool?'" - -It turned out that nearly everyone he talked to was extremely happy with the company's software, which ironically was the reason they weren't renewing. The company's tool solved their problem so well there was no need to renew. - -"What could we have offered that would have made you renew the tool?" he asked. "They said, 'If you had supported all of the open source products that your tool assembled for me, then I would have that ongoing relationship with you.'" - -Grandchamp immediately grasped the situation, and when the CTO said such support would be impossible, Grandchamp didn't mince words: "Then we don't have a company." - -"We figured out a way to support it," he said. 
- "We created something called the Open Logic Expert Community. We developed relationships with committers and contributors to a couple of hundred open source packages, and we acted as sort of the hub of the SLA for our customers. We had some people on staff, too, who knew the big projects." - -After that successful launch, Grandchamp and his team began hearing from customers that they were confused over exactly what open source code they were using in their projects. That led to the development of what he says was the first software-as-a-service compliance portal for open source, which could scan an application's code and produce a list of all of the open source code included in the project. When customers then expressed confusion over compliance issues, the SaaS service was expanded to flag potential licensing conflicts. - -Although the product lines were completely different, the same approach was used to monetize MariaDB, then called SkySQL, after MySQL co-founders Michael "Monty" Widenius, David Axmark, and Allan Larsson created the project by forking MySQL, which Oracle had acquired from Sun Microsystems in 2010. - -Again, users were approached and asked what things they would be willing to purchase. - -"They wanted different functionality in the database, and you didn't really understand this if you didn't talk to your customers," Grandchamp explained. "Monty and his team, while they were being acquired at Sun and Oracle, were working on all kinds of new functionality, around cloud deployments, around different ways to do clustering, they were working on lots of different things. That work, Oracle and MySQL didn't really pick up." - -Rolling in the new features customers wanted had to be handled gingerly, because it was important to the folks at MariaDB to not break compatibility with MySQL. This necessitated a strategy around when the code bases would come together and when they would separate.
"That road map, knowledge, influence and technical information was worth paying for." - -As with OpenLogic, MariaDB customers expressed a willingness to spend money on a variety of fronts. For example, a big driver in the early days was a project called Remote DBA, which helped customers make up for a shortage of qualified database administrators. The project could help with design issues, as well as monitor existing systems to take the workload off of a customer's DBA team. The service also offered access to MariaDB's own DBAs, many of whom had a history with the database going back to the early days of MySQL. - -"That was a subscription offering that people were definitely willing to pay for," he said. - -The company also learned, again by asking and listening to customers, that there were various types of support subscriptions that customers were willing to purchase, including subscriptions around capability and functionality, and a managed service component of Remote DBA. - -These days Grandchamp is putting much of his focus on his latest project, Drud, a startup that offers a suite of integrated, automated, open source development tools for developing and managing multiple websites, which can be running on any combination of content management systems and deployment platforms. It is monetized partially through modules that add features like a centralized dashboard and an "intelligence engine." - -As you might imagine, he got it off the ground by talking to customers and giving them what they indicated they'd be willing to purchase. - -"Our number one customer target is the agency market," he said. "The enterprise market is a big target, but I believe it's our second target, not our first. And the reason it's number two is they don't make decisions very fast. There are technology refresh cycles that have to come up, there are lots of politics involved and lots of different vendors. 
It's lucrative once you're in, but in a startup you've got to figure out how to pay your bills. I want to pay my bills today. I don't want to pay them in three years." - -Drud's focus on the agency market illustrates another consideration: the importance of understanding something about your customers' business. When talking with agencies, many said they were tired of proprietary vendors that didn't understand their business offering them generic software that really didn't match their needs. In Drud's case, that understanding is built into the company DNA. The software was developed by an agency to fill its own needs. - -"We are a platform designed by an agency for an agency," Grandchamp said. "Right there is a relationship that they're willing to pay for. We know their business." - -Grandchamp noted that startups also need to be able to distinguish users from customers. Most of the people downloading and using commercial open source software aren't the people who have authorization to make purchasing decisions. These users, however, can point to the people who control the purse strings. - -"It's our job to build a way to communicate with those users, provide them value so that they'll give us value," he explained. "It has to be an equal exchange. I give you value of a tool that works, some advice, really good documentation, access to experts who can sort of guide you along. Along the way I'm asking you for pieces of information. Who do you work for? How are the technology decisions happening in your company? Are there other people in your company that we should refer the product to? We have to create the dialog." - -In the end, Grandchamp said, in the open source world the people who go out to find business probably shouldn't see themselves as salespeople, but rather, as problem solvers. - -"I believe that you're not really going to need salespeople in this model. I think you're going to need customer success people.
I think you're going to need people who can enable your customers to be successful in a business relationship that's more highly transactional." - -"People don't like to be sold," he added, "especially in open source. The last person they want to see is the sales person, but they like to ply and try and consume and give you input and give you feedback. They love that." - --------------------------------------------------------------------------------- - -via: http://www.itprotoday.com/software-development/how-monetize-open-source-project - -作者:[Christine Hall][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itprotoday.com/author/christine-hall -[1]:https://www.drud.com/ -[2]:http://www.itprotoday.com/open-source/demand-open-source-skills-continues-grow -[3]:http://www.itprotoday.com/software-development/raleigh-east-coasts-silicon-valley diff --git a/sources/talk/20171114 Why pair writing helps improve documentation.md b/sources/talk/20171114 Why pair writing helps improve documentation.md deleted file mode 100644 index ff3bbb5888..0000000000 --- a/sources/talk/20171114 Why pair writing helps improve documentation.md +++ /dev/null @@ -1,87 +0,0 @@ -Why pair writing helps improve documentation -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd) - -Professional writers, at least in the Red Hat documentation team, nearly always work on docs alone. But have you tried writing as part of a pair? In this article, I'll explain a few benefits of pair writing. -### What is pair writing? - -Pair writing is when two writers work in real time, on the same piece of text, in the same room. This approach improves document quality, speeds up writing, and allows writers to learn from each other. The idea of pair writing is borrowed from [pair programming][1]. 
- -When pair writing, you and your colleague work on the text together, making suggestions and asking questions as needed. Meanwhile, you're observing each other's work. For example, while one is writing, the other writer observes details such as structure or context. Often discussion around the document turns into sharing experiences and opinions, and brainstorming about writing in general. - -At all times, the writing is done by only one person. Thus, you need only one computer, unless you want one writer to do online research while the other person does the writing. The text workflow is the same as if you are working alone: a text editor, the documentation source files, git, and so on. - -### Pair writing in practice - -My colleague Aneta Steflova and I have done more than 50 hours of pair writing working on the Red Hat Enterprise Linux System Administration docs and on the Red Hat Identity Management docs. I've found that, compared to writing alone, pair writing: - - * is as productive or more productive; - * improves document quality; - * helps writers share technical expertise; and - * is more fun. - - - -### Speed - -Two writers writing one text? Sounds half as productive, right? Wrong. (Usually.) - -Pair writing can help you work faster because two people have solutions to a bigger set of problems, which means getting blocked less often during the process. For example, one time we wrote urgent API docs for identity management. I know at least the basics of web APIs, the REST protocol, and so on, which helped us speed through those parts of the documentation. Working alone, Aneta would have needed to interrupt the writing process frequently to study these topics. - -### Quality - -Poor wording or sentence structure, inconsistencies in material, and so on have a harder time surviving under the scrutiny of four eyes. 
For example, one of our pair writing documents was reviewed by an extremely critical developer, who was known for catching technical inaccuracies and bad structure. After this particular review, he said, "Perfect. Thanks a lot." - -### Sharing expertise - -Each of us lives in our own writing bubble, and we normally don't know how others approach writing. Pair writing can help you improve your own writing process. For example, Aneta showed me how to better handle assignments in which the developer has provided starting text (as opposed to the writer writing from scratch using their own knowledge of the subject), which I didn't have experience with. Also, she structures the docs thoroughly, which I began doing as well. - -As another example, I'm good enough at Vim that XML editing (e.g., tags manipulation) is enjoyable instead of torturous. Aneta saw how I was using Vim, asked about it, suffered through the learning curve, and now takes advantage of the Vim features that help me. - -Pair writing is especially good for helping and mentoring new writers, and it's a great way to get to know professionally (and have fun with) colleagues. - -### When pair writing shines - -In addition to the benefits I've already listed, pair writing is especially good for: - - * **Working with [Bugzilla][2]** : Bugzillas can be cumbersome and cause problems, especially for administration-clumsy people (like me). - * **Reviewing existing documents** : When documentation needs to be expanded or fixed, it is necessary to first examine the existing document. - * **Learning new technology** : A fellow writer can be a better teacher than an engineer. - * **Writing emails/requests for information to developers with well-chosen questions** : The difficulty of this task rises in proportion to the difficulty of the technology you are documenting. - - - -Also, with pair writing, feedback is in real time, as-needed, and two-way.
- -On the downside, pair writing can move at a faster pace, giving a writer less time to mull over a topic or wording. On the other hand, generally peer review is not necessary after pair writing. - -### Words of caution - -To get the most out of pair writing: - - * Go into the project well prepared, otherwise you can waste your colleague's time. - * Talkative types need to stay focused on the task, otherwise they end up talking rather than writing. - * Be prepared for direct feedback. Pair writing is not for feedback-allergic writers. - * Beware of session hijackers. Dominant personalities can turn pair writing into writing solo with a spectator. (However, it _can_ be good if one person takes over at times, as long as the less-experienced partner learns from the hijacker, or the more-experienced writer is providing feedback to the hijacker.) - - - -### Conclusion - -Pair writing is a meeting, but one in which you actually get work done. It's an activity that lets writers focus on the one indispensable thing in our vocation--writing.
- -_This post was written with the help of pair writing with Aneta Steflova._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/11/try-pair-writing - -作者:[Maxim Svistunov][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/maxim-svistunov -[1]:https://developer.atlassian.com/blog/2015/05/try-pair-programming/ -[2]:https://www.bugzilla.org/ diff --git a/sources/talk/20171115 Why and How to Set an Open Source Strategy.md b/sources/talk/20171115 Why and How to Set an Open Source Strategy.md deleted file mode 100644 index 79ec071b4d..0000000000 --- a/sources/talk/20171115 Why and How to Set an Open Source Strategy.md +++ /dev/null @@ -1,120 +0,0 @@ -Why and How to Set an Open Source Strategy -============================================================ - -![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/open-source-strategy-1024x576.jpg) - -This article explains how to walk through, measure, and define strategies collaboratively in an open source community. - - _“If you don’t know where you are going, you’ll end up someplace else.”_ — Yogi Berra - -Open source projects are generally started as a way to scratch one’s itch — and frankly that’s one of their greatest attributes. Getting code down provides a tangible method to express an idea, showcase a need, and solve a problem. It avoids overthinking and getting a project stuck in analysis-paralysis, letting the project pragmatically solve the problem at hand. - -Next, a project starts to scale up and gets many varied users and contributions, with plenty of opinions along the way. That leads to the next big challenge — how does a project start to build a strategic vision?
In this article, I’ll describe how to walk through, measure, and define strategies collaboratively, in a community. - -Strategy may seem like a buzzword of the corporate world rather than something that an open source community would embrace, so I suggest stripping away the negative actions that are sometimes associated with this word (e.g., staff reductions, discontinuations, office closures). Strategy done right isn’t a tool to justify unfortunate actions but to help show focus and where each community member can contribute. - -A good application of strategy answers the following: - -* Why does the project exist? - -* What does the project look to achieve? - -* What is the ideal end state for the project? - -The key to success is answering these questions as simply as possible, with consensus from your community. Let’s look at some ways to do this. - -### Setting a mission and vision - - _“Efforts and courage are not enough without purpose and direction.”_ — John F. Kennedy - -All strategic planning starts off with setting a course for where the project wants to go. The two tools used here are  _Mission_  and  _Vision_ . They are complementary terms, describing both the reason a project exists (mission) and the ideal end state for a project (vision). - -A great way to start this exercise with the intent of driving consensus is by asking each key community member the following questions: - -* What drove you to join and/or contribute to the project? - -* How do you define success for your participation? - -In a company, you’d usually ask your customers these questions. But in open source projects, the customers are the project participants — and their time investment is what makes the project a success. - -Driving consensus means capturing the answers to these questions and looking for themes across them.
At R Consortium, for example, I created a shared doc for the board to review each member’s answers to the above questions, and followed up with a meeting to review for specific themes that came from those insights. - -Building a mission flows really well from this exercise. The key thing is to keep the wording of your mission short and concise. Open Mainframe Project has done this really well. Here’s their mission: - - _Build community and adoption of Open Source on the mainframe by:_ - -* _Eliminating barriers to Open Source adoption on the mainframe_ - -* _Demonstrating value of the mainframe on technical and business levels_ - -* _Strengthening collaboration points and resources for the community to thrive_ - -At 40 words, it passes the key eye tests of a good mission statement; it’s clear, concise, and demonstrates the useful value the project aims for. - -The next stage is to reflect on the mission statement and ask yourself this question: What is the ideal outcome if the project accomplishes its mission? That can be a tough one to tackle. Open Mainframe Project put together its vision really well: - - _Linux on the Mainframe as the standard for enterprise class systems and applications._ - -You could read that as a [BHAG][1], but it’s really more of a vision, because it describes a future state that is what would be created by the mission being fully accomplished. It also hits the key pieces to an effective vision — it’s only 13 words, inspirational, clear, memorable, and concise. - -Mission and vision add clarity on the who, what, why, and how for your project. But, how do you set a course for getting there? - -### Goals, Objectives, Actions, and Results - - _“I don’t focus on what I’m up against. I focus on my goals and I try to ignore the rest.”_  — Venus Williams - -Looking at a mission and vision can get overwhelming, so breaking them down into smaller chunks can help the project determine how to get started. 
This also helps prioritize actions, either by importance or by opportunity. Most importantly, this step gives you guidance on what things to focus on for a period of time, and which to put off. - -There are lots of methods of time-bound planning, but the method I think works the best for projects is what I’ve dubbed the GOAR method. It’s an acronym that stands for: - -* Goals define what the project is striving for and likely would align with and support the mission. Examples might be “Grow a diverse contributor base” or “Become the leading project for X.” Goals are aspirational and set direction. - -* Objectives show how you measure a goal’s completion, and should be clear and measurable. You might also have multiple objectives to measure the completion of a goal. For example, the goal “Grow a diverse contributor base” might have objectives such as “Have X total contributors monthly” and “Have contributors representing Y different organizations.” - -* Actions are what the project plans to do to complete an objective. This is where you get tactical on exactly what needs to be done. For example, the objective “Have contributors representing Y different organizations” would likely have actions of reaching out to interested organizations using the project, having existing contributors mentor new contributors, and providing incentives for first-time contributors. - -* Results come along the way, showing progress, both positive and negative, from the actions.
- -You can put these into a table like this: - -| Goals | Objectives | Actions | Results | -|:--|:--|:--|:--| -| Grow a diverse contributor base | Have X total contributors monthly | Existing contributors mentor new contributors; provide incentives for first-time contributors | | -| | Have contributors representing Y different organizations | Reach out to interested organizations using the project | | - - -In large organizations, monthly or quarterly goals and objectives often make sense; however, on open source projects, these time frames are unrealistic. Six- or even 12-month tracking allows the project leadership to focus on driving efforts at a high level by nurturing the community along. - -The end result is a rubric that provides clear vision on where the project is going. It also lets community members more easily find ways to contribute. For example, your project may include someone who knows a few organizations using the project — this person could help introduce those developers to the codebase and guide them through their first commit. - -### What happens if the project doesn’t hit the goals? - - _“I have not failed. I’ve just found 10,000 ways that won’t work.”_  — Thomas A. Edison - -Figuring out what is within the capability of an organization — whether a Fortune 500 company or a small open source project — is hard. And, sometimes the expectations or market conditions change along the way. Does that make the strategy planning process a failure? Absolutely not! - -Instead, you can use this experience as a way to better understand your project’s velocity, its impact, and its community, and perhaps as a way to prioritize what is important and what’s not.
- --------------------------------------------------------------------------------- - -via: https://www.linuxfoundation.org/blog/set-open-source-strategy/ - -作者:[ John Mertic][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxfoundation.org/author/jmertic/ -[1]:https://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal -[2]:https://www.linuxfoundation.org/author/jmertic/ -[3]:https://www.linuxfoundation.org/category/blog/ -[4]:https://www.linuxfoundation.org/category/audience/c-level/ -[5]:https://www.linuxfoundation.org/category/audience/developer-influencers/ -[6]:https://www.linuxfoundation.org/category/audience/entrepreneurs/ -[7]:https://www.linuxfoundation.org/category/campaigns/membership/how-to/ -[8]:https://www.linuxfoundation.org/category/campaigns/events-campaigns/linux-foundation/ -[9]:https://www.linuxfoundation.org/category/audience/open-source-developers/ -[10]:https://www.linuxfoundation.org/category/audience/open-source-professionals/ -[11]:https://www.linuxfoundation.org/category/audience/open-source-users/ -[12]:https://www.linuxfoundation.org/category/blog/thought-leadership/ diff --git a/sources/talk/20171116 Why is collaboration so difficult.md b/sources/talk/20171116 Why is collaboration so difficult.md deleted file mode 100644 index 6567b75dca..0000000000 --- a/sources/talk/20171116 Why is collaboration so difficult.md +++ /dev/null @@ -1,94 +0,0 @@ -Why is collaboration so difficult? -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_block_collaboration.png?itok=pKbXpr1e) - -Many contemporary definitions of "collaboration" define it simply as "working together"--and, in part, it is working together. But too often, we tend to use the term "collaboration" interchangeably with cognate terms like "cooperation" and "coordination." 
These terms also refer to some manner of "working together," yet there are subtle but important differences between them all. - -How does collaboration differ from coordination or cooperation? What is so important about collaboration specifically? Does it have or do something that coordination and cooperation don't? The short answer is a resounding "yes!" - -[This unit explores collaboration][1], a problematic term because it has become a simple buzzword for "working together." By the time you've studied the cases and practiced the exercises contained in this section, you will understand that it's so much more than that. - -### Not like the others - -"Coordination" can be defined as the ordering of a variety of people acting in an effective, unified manner toward an end goal or state. - -In traditional organizations and businesses, people contributed according to their role definitions, such as in manufacturing, where each employee was responsible for adding specific components to the widget on an assembly line until the widget was complete. In contexts like these, employees weren't expected to contribute beyond their pre-defined roles (they were probably discouraged from doing so), and they didn't necessarily have a voice in the work or in what was being created. Often, a manager oversaw the unification of effort (hence the role "project coordinator"). Coordination is meant to connote a sense of harmony and unity, as if elements are meant to go together, resulting in efficiency among the ordering of the elements. - -One common assumption is that coordinated efforts are aimed at the same, single goal. So some end result is "successful" when people and parts work together seamlessly; when one of the parts breaks down and fails, then the whole goal fails. Many traditional businesses (for instance, those with command-and-control hierarchies) manage work through coordination. - -Cooperation is another term whose surface meaning is "working together."
Rather than the sense of compliance that is part of "coordination," it carries a sense of agreement and helpfulness on the path toward completing a shared activity or goal. - -"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating. - -People tend to use the term "cooperation" when joining two semi-related entities where one or more entity could decide not to cooperate. The people and pieces that are part of a cooperative effort make the shared activity easier to perform or the shared goal easier to reach. "Cooperation" implies a shared goal or activity we agree to pursue jointly. One example is how police and witnesses cooperate to solve crimes. - -"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating. - -Sometimes collaboration involves two or more groups that do not normally work together; they are disparate groups or not usually connected. For instance, a traitor collaborates with the enemy, or rival businesses collaborate with each other. The subtlety of collaboration is that the two groups may have oppositional initial goals but work together to create a shared goal. Collaboration can be more contentious than coordination or cooperation, but like cooperation, any one of the entities could choose not to collaborate. Despite the contention and conflict, however, there is discourse--whether in the form of multi-way discussion or one-way feedback--because without discourse, there is no way for people to express a point of dissent that is ripe for negotiation. - -The success of any collaboration rests on how well the collaborators negotiate their needs to create the shared objective, and then how well they cooperate and coordinate their resources to execute a plan to reach their goals. 
- -### For example - -One way to think about these things is through a real-life example--like the writing of [this book][1]. - -The editor, [Bryan][2], coordinates the authors' work through the call for proposals, setting dates and deadlines, collecting the writing, and meeting editing dates and deadlines for feedback about our work. He coordinates the authors, the writing, the communications. In this example, I'm not coordinating anything except myself (still a challenge most days!). - -The success of any collaboration rests on how well the collaborators negotiate their needs to create the shared objective, and then how well they cooperate and coordinate their resources to execute a plan to reach their goals. - -I cooperate with Bryan's dates and deadlines, and with the ways he has decided to coordinate the work. I propose the introduction on GitHub; I wait for approval. I comply with instructions, write some stuff, and send it to him by the deadlines. He cooperates by accepting a variety of document formats. I get his edits, incorporate them, send them back to him, and so forth. If I don't cooperate (or something comes up and I can't cooperate), then maybe someone else writes this introduction instead. - -Bryan and I collaborate when either one of us challenges something, including pieces of the work or process that aren't clear, things that we thought we agreed to, or things on which we have differing opinions. These intersections are ripe for negotiation and therefore indicative of collaboration. They are the opening for us to negotiate some creative work. - -Once the collaboration is negotiated and settled, writing and editing the book returns to cooperation/coordination; that is why collaboration relies on the other two terms of joint work. - -One of the most interesting parts of this example (and of work and shared activity in general) is the moment-by-moment pivot from any of these terms to the other.
The writing of this book is not completely collaborative, coordinated, or cooperative. It's a messy mix of all three. - -### Why is collaboration important? - -Collaboration is an important facet of contemporary organizations--specifically those oriented toward knowledge work--because it allows for productive disagreement between actors. That kind of disagreement then helps increase the level of engagement and provide meaning to the group's work. - -In his book, The Age of Discontinuity: Guidelines to our Changing Society, [Peter Drucker discusses][3] the "knowledge worker" and the pivot from work based on experience (e.g. apprenticeships) to work based on knowledge and the application of knowledge. This change in work and workers, he writes: - -> ...will make the management of knowledge workers increasingly crucial to the performance and achievement of the knowledge society. We will have to learn to manage the knowledge worker both for productivity and for satisfaction, both for achievement and for status. We will have to learn to give the knowledge worker a job big enough to challenge him, and to permit performance as a "professional." - -In other words, knowledge workers aren't satisfied with being subordinate--told what to do by managers as if there is one right way to do a task. And, unlike past workers, they expect more from their work lives, including some level of emotional fulfillment or meaning-making from their work. The knowledge worker, according to Drucker, is educated toward continual learning, "paid for applying his knowledge, exercising his judgment, and taking responsible leadership." So it then follows that knowledge workers expect from work the chance to apply and share their knowledge, develop themselves professionally, and continuously augment their knowledge.
- -It's interesting to note that Peter Drucker wrote about those concepts in 1969, nearly 50 years ago--virtually predicting the societal and organizational changes that would reveal themselves, in part, through the development of knowledge sharing tools such as forums, bulletin boards, online communities, and cloud knowledge sharing like Dropbox and Google Drive, as well as the creation of social media tools such as MySpace, Facebook, Twitter, YouTube, and countless others. All of these have some basis in the idea that knowledge is something to liberate and share. - -In this light, one might view the open organization as one successful manifestation of a system of management for knowledge workers. In other words, open organizations are a way to manage knowledge workers by meeting the needs of the organization and knowledge workers (whether employees, customers, or the public) simultaneously. The foundational values this book explores are the scaffolding for the management of knowledge, and they apply to ways we can: - - * make sure there's a lot of varied knowledge around (inclusivity) - * help people come together and participate (community) - * circulate information, knowledge, and decision making (transparency) - * innovate and not become entrenched in old ways of thinking and being (adaptability) - * develop a shared goal and work together to use knowledge (collaboration) - - - -Collaboration is an important process because of the participatory effect it has on knowledge work and how it aids negotiations between people and groups. As we've discovered, collaboration is more than working together with some degree of compliance; in fact, it describes a type of working together that overcomes compliance because people can disagree, question, and express their needs in a negotiation and in collaboration.
And, collaboration is more than "working toward a shared goal"; collaboration is a process which defines the shared goals via negotiation and, when successful, leads to cooperation and coordination to focus activity on the negotiated outcome. - -Collaboration is an important process because of the participatory effect it has on knowledge work and how it aids negotiations between people and groups. - -Collaboration works best when the other four open organization values are present. For instance, when people are transparent, there is no guessing about what is needed, why, by whom, or when. Also, because collaboration involves negotiation, it needs diversity (a product of inclusivity); after all, if we aren't negotiating among differing views, needs, or goals, then what are we negotiating? During a negotiation, the parties are often asked to give something up so that all may gain, so we have to be adaptable and flexible to the different outcomes that negotiation can provide. Lastly, collaboration is often an ongoing process rather than one which is quickly done and over, so it's best to enter collaboration as if you are part of the same community, desiring everyone to benefit from the negotiation. In this way, acts of authentic and purposeful collaboration directly necessitate the emergence of the other four values--transparency, inclusivity, adaptability, and community--as they assemble part of the organization's collective purpose spontaneously. - -### Collaboration in open organizations - -Traditional organizations advance an agreed-upon set of goals that people are welcome to support or not. In these organizations, there is some amount of discourse and negotiation, but often a higher-ranking or more powerful member of the organization intervenes to make a decision, which the membership must accept (and sometimes ignores).
In open organizations, however, the focus is for members to perform their activity and to work out their differences; only if necessary would someone get involved (and even then would try to do it in the most minimal way that supports the shared values of community, transparency, adaptability, collaboration, and inclusivity). This makes the collaborative processes in open organizations "messier" (or "chaotic" to use Jim Whitehurst's term) but more participatory and, hopefully, innovative. - -This article is part of the [Open Organization Workbook project][1]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/17/11/what-is-collaboration - -作者:[Heidi Hess Von Ludewig][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/heidi-hess-von-ludewig -[1]:https://opensource.com/open-organization/17/8/workbook-project-announcement -[2]:http://opensource.com/users/bbehrens -[3]:https://www.elsevier.com/books/the-age-of-discontinuity/drucker/978-0-434-90395-5 diff --git a/sources/talk/20171221 Changing how we use Slack solved our transparency and silo problems.md b/sources/talk/20171221 Changing how we use Slack solved our transparency and silo problems.md deleted file mode 100644 index d68bab55bf..0000000000 --- a/sources/talk/20171221 Changing how we use Slack solved our transparency and silo problems.md +++ /dev/null @@ -1,95 +0,0 @@ -Changing how we use Slack solved our transparency and silo problems -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_abstract_pieces.jpg?itok=tGR1d2MU) - -Collaboration and information silos are a reality in most organizations today. People tend to regard them as huge barriers to innovation and organizational efficiency.
They're also a favorite target for solutions from software tool vendors of all types. - -Tools by themselves, however, are seldom (if ever) the answer to a problem like organizational silos. The reason for this is simple: Silos are made of people, and human dynamics are key drivers for the existence of silos in the first place. - -So what is the answer? - -Successful communities are the key to breaking down silos. Tools play an important role in the process, but if you don't build successful communities around those tools, then you'll face an uphill battle with limited chances for success. Tools enable communities; they do not build them. This takes a thoughtful approach--one that looks at culture first, process second, and tools last. - -Successful communities are the key to breaking down silos. - -However, this is a challenge because this is not the way the process works in most businesses. Too many companies begin their journey to fix silos by thinking about tools first and considering metrics that don't evaluate the right factors for success. Too often, people choose tools for purely cost-based, compliance-based, or effort-based reasons--instead of factoring in the needs and desires of the user base. But subjective measures like "customer/user delight" are a real factor for these internal tools, and can make or break the success of both the tool adoption and the goal of increased collaboration. - -It's critical to understand that the best technical tool (or what the business may consider the most cost-effective) is not always the solution that drives community, transparency, and collaboration forward. There is a reason that "Shadow IT"--users choosing their own tool solution, building community and critical mass around them--exists and is so effective: People who choose their own tools are more likely to stay engaged and bring others with them, breaking down silos organically.
- -This is a story of how Autodesk ended up adopting Slack at enterprise scale to help solve our transparency and silo problems. Interestingly, Slack wasn't (and isn't) an IT-supported application at Autodesk. It's an enterprise solution that was adopted, built, and is still run by a group of passionate volunteers who are committed to a "default to open" paradigm. - -Utilizing Slack makes transparency happen for us. - -### Chat-tastrophe - -First, some perspective: My job at Autodesk is running our [Open@ADSK][1] initiative. I was originally hired to drive our open source strategy, but we quickly expanded my role to include driving open source best practices for internal development (inner source), and transforming how we collaborate internally as an organization. This last piece is where we pick up our story of Slack adoption in the company. - -But before we even begin to talk about our journey with Slack, let's address why lack of transparency and openness was a challenge for us. What is it that makes transparency such a desirable quality in organizations, and what was I facing when I started at Autodesk? - -Every company says they want "better collaboration." In our case, we are a 35-year-old software company that has been immensely successful at selling desktop "shrink-wrapped" software to several industries, including architecture, engineering, construction, manufacturing, and entertainment. But no successful company rests on its laurels, and Autodesk leadership recognized that a move to Cloud-based solutions for our products was key to the future growth of the company, including opening up new markets through product combinations that required Cloud computing and deep product integrations. - -The challenge in making this move was far more than just technical or architectural--it was rooted in the DNA of the company, in everything from how we were organized to how we integrated our products. 
The basic format of integration in our desktop products was file import/export. While this is undoubtedly important, it led to a culture of highly-specialized teams working in an environment that's more siloed than we'd like and not sharing information (or code). Prior to the move to a cloud-based approach, this wasn't as much of a problem--but, in an environment that requires organizations to behave more like open source projects do, transparency, openness, and collaboration go from "nice-to-have" to "business critical." - -Like many companies our size, Autodesk has had many different collaboration solutions through the years, some of them commercial, and many of them home-grown. However, none of them effectively solved the many-to-many real-time collaboration challenge. Some reasons for this were technical, but many of them were cultural. - -I relied on a philosophy I'd formed through challenging experiences in my career: "Culture first, tools last." - -When someone first tasked me with trying to find a solution for this, I relied on a philosophy I'd formed through challenging experiences in my career: "Culture first, tools last." This is still a challenge for engineering folks like myself. We want to jump immediately to tools as the solution to any problem. However, it's critical to evaluate a company's ethos (culture), as well as existing processes to determine what kinds of tools might be a good fit. Unfortunately, I've seen too many cases where leaders have dictated a tool choice from above, based on the factors discussed earlier. I needed a different approach that relied more on fitting a tool into the culture we wanted to become, not the other way around. - -What I found at Autodesk were several small camps of people using tools like HipChat, IRC, Microsoft Lync, and others, to try to meet their needs. However, the most interesting thing I found was 85 separate instances of Slack in the company! - -Eureka!
I'd stumbled onto a viral success (one enabled by Slack's ability to easily spin up "free" instances). I'd also landed squarely in what I like to call "silo-land." - -All of those instances were not talking to each other--so, effectively, we'd created isolated islands of information that, while useful to those in them, couldn't transform the way we operated as an enterprise. Essentially, our existing organizational culture was recreated in digital format in these separate Slack systems. Our organization housed a mix of these small, free instances, as well as multiple paid instances, which also meant we were not taking advantage of a common billing arrangement. - -My first (open source) thought was: "Hey, why aren't we using IRC, or some other open source tool, for this?" I quickly realized that didn't matter, as our open source engineers weren't the only people using Slack. People from all areas of the company--even senior leadership--were adopting Slack in droves, and, in some cases, convincing their management to pay for it! - -My second (engineering) thought was: "Oh, this is simple. We just collapse all 85 of those instances into a single cohesive Slack instance." What soon became obvious was that was the easy part of the solution. Much harder was the work of cajoling, convincing, and moving people to a single, transparent instance. Building in the "guard rails" to enable a closed source tool to provide this transparency was key. These guard rails came in the form of processes, guidelines, and community norms that were the hardest part of this transformation. - -### The real work begins - -As I began to slowly help users migrate to the common instance (paying for it was also a challenge, but a topic for another day), I discovered a dedicated group of power users who were helping each other in the #adsk-slack-help channel on our new common instance of Slack. 
These power users were, in effect, building the roots of our transparency and community through their efforts. - -The open source community manager in me quickly realized these users were the path to successfully scaling Slack at Autodesk. I enlisted five of them to help me, and, together we set about fabricating the community structure for the tool's rollout. - -We did, however, learn an important lesson about transparency and company culture along the way. - -Here I should note the distinction between a community structure/governance model and traditional IT policies: With the exception of security and data privacy/legal policies, volunteer admins and user community members completely define and govern our Slack instance. One of the keys to our success with Slack (currently approximately 9,100 users and roughly 4,300 public channels) was how we engaged and involved our users in building these governance structures. Things like channel naming conventions and our growing list of frequently asked questions were organic and have continued in that same vein. Our community members feel like their voices are heard (even if some disagree), and that they have been a part of the success of our deployment of Slack. - -We did, however, learn an important lesson about transparency and company culture along the way. - -### It's not the tool - -When we first launched our main Slack instance, we left the ability for anyone to make a channel private turned on. After about three months of usage, we saw a clear trend: More people were creating private channels (and messages) than they were public channels (the ratio was about two to one, private versus public). Since our effort to merge 85 Slack instances was intended to increase participation and transparency, we quickly adjusted our policy and turned off this feature for regular users. 
We instead implemented a policy of review by the admin team, with clear criteria (finance, legal, personnel discussions among the reasons) defined for private channels. - -This was probably the only time in this entire process that I regretted something. - -We took an amazing amount of flak for this decision because we were dealing with a corporate culture that was used to working in independent units that had minimal interaction with each other. Our defining moment of clarity (and the tipping point where things started to get better) occurred in an all-hands meeting when one of our senior executives asked me to address a question about Slack. I stood up to answer the question, and said (paraphrased from memory): "It's not about the tool. I could give you all the best, gold-plated collaboration platform in existence, but we aren't going to be successful if we don't change our approach to collaboration and learn to default to open." - -I didn't think anything more about that statement--until that senior executive started using the phrase "default to open" in his slide decks, in his staff meetings, and with everyone he met. That one moment has defined what we have been trying to do with Slack: The tool isn't the sole reason we've been successful; it's the approach that we've taken around building a self-sustaining community that not only wants to use this tool, but craves the ability it gives them to work easily across the enterprise. - -### What we learned - -The tool isn't the sole reason we've been successful; it's the approach that we've taken around building a self-sustaining community that not only wants to use this tool, but craves the ability it gives them to work easily across the enterprise.
- -I say all the time that this could have happened with other, similar tools (HipChat, IRC, etc.), but it works in this case specifically because we chose an approach of supporting a solution that the user community adopted for their needs, not strictly what the company may have chosen if the decision was coming from the top of the organizational chart. We put a lot of work into making it an acceptable solution (from the perspectives of security, legal, finance, etc.) for the company, but, ultimately, our success has come from the fact that we built this rollout (and continue to run the tool) as a community, not as a traditional corporate IT system. - -The most important lesson I learned through all of this is that transparency and community are evolutionary, not revolutionary. You have to understand where your culture is, where you want it to go, and utilize the leverage points that the community is itself adopting to make sustained and significant progress. There is a fine balance point between anarchy and a thriving community, and we've tried to model our approach on the successful practices of today's thriving open source communities. - -Communities are personal. Tools come and go, but keeping your community at the forefront of your push to transparency is the key to success. - -This article is part of the [Open Organization Workbook project][2].
- --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/17/12/chat-platform-default-to-open - -作者:[Guy Martin][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/guyma -[1]:mailto:Open@ADSK -[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement diff --git a/sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md b/sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md deleted file mode 100644 index 9e35e0ede7..0000000000 --- a/sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md +++ /dev/null @@ -1,116 +0,0 @@ -How Mycroft used WordPress and GitHub to improve its documentation -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd) - -Image credits : Photo by Unsplash; modified by Rikki Endsley. CC BY-SA 4.0 - -Imagine you've just joined a new technology company, and one of the first tasks you're assigned is to improve and centralize the organization's developer-facing documentation. There's just one catch: That documentation exists in many different places, across several platforms, and differs markedly in accuracy, currency, and style. - -So how did we tackle this challenge? - -### Understanding the scope - -As with any project, we first needed to understand the scope and bounds of the problem we were trying to solve. What documentation was good? What was working? What wasn't? How much documentation was there? What format was it in? We needed to do a **documentation audit**. Luckily, [Aneta Šteflova][1] had recently [published an article on OpenSource.com][2] about this, and it provided excellent guidance. 
- -![mycroft doc audit][4] - -Mycroft documentation audit, showing source, topic, medium, currency, quality, and audience - -Next, every piece of publicly facing documentation was assessed for the topic it covered, the medium it used, currency, and quality. A pattern quickly emerged that different platforms had major deficiencies, allowing us to take a data-driven approach to decommissioning our existing Jekyll-based sites. The audit also highlighted just how fragmented our documentation sources were--we had developer-facing documentation across no fewer than seven sites. Although search engines were finding this content just fine, the fragmentation made it difficult for developers and users of Mycroft--our primary audiences--to navigate the information they needed. Again, this data helped us make the decision to centralize our documentation onto one platform. - -### Choosing a central platform - -As an organization, we wanted to constrain the number of standalone platforms in use. Over time, maintenance and upkeep of multiple platforms and integration touchpoints becomes cumbersome for any organization, but this is exacerbated for a small startup. - -One of the other business drivers in platform choice was that we had two primary but very different audiences. On one hand, we had highly technical developers who we were expecting would push documentation to its limits--and who would want to contribute to technical documentation using their tools of choice--[Git][5], [GitHub][6], and [Markdown][7]. Our second audience--end users--would primarily consume technical documentation and would want to do so in an inviting, welcoming platform that was visually appealing and provided additional features such as the ability to identify reading time and to provide feedback. The ability to capture feedback was also a key requirement from our side, as without feedback on the quality of the documentation, we would not have a solid basis to undertake continuous quality improvement.
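An audit like the one pictured above can be captured as simple structured records, which is what makes a data-driven decommissioning decision possible. Here's a minimal Python sketch; the field names mirror the audit columns shown above, but the records and threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DocPage:
    source: str    # site or platform hosting the page
    topic: str
    medium: str    # e.g. "Jekyll site", "wiki", "README"
    current: bool  # is the content up to date?
    quality: int   # 1 (poor) to 5 (excellent)
    audience: str  # "developer" or "end user"

# A few hypothetical audit records
audit = [
    DocPage("docs.example.com", "Getting started", "Jekyll site", False, 2, "end user"),
    DocPage("github.com/example", "Skill API", "README", True, 4, "developer"),
    DocPage("wiki.example.com", "Installation", "wiki", False, 1, "end user"),
]

# Data-driven decommissioning: flag sources whose pages are stale or low quality
decommission = sorted({p.source for p in audit if not p.current or p.quality < 3})
print(decommission)  # → ['docs.example.com', 'wiki.example.com']
```

Even a spreadsheet works for this; the point is that each page gets the same fields, so deficiencies per platform become countable rather than anecdotal.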
- -Would we be able to identify one platform that met all of these competing needs? - -We realised that two platforms covered all of our needs: - - * [WordPress][8]: Our existing website is built on WordPress, and we have some reasonably robust WordPress skills in-house. The flexibility of WordPress also fulfilled our requirements for functionality like reading time and the ability to capture user feedback. - * [GitHub][9]: Almost [all of Mycroft.AI's source code is available on GitHub][10], and our development team uses this platform daily. - - - -But how could we marry the two? - - -![](https://opensource.com/sites/default/files/images/life-uploads/wordpress-github-sync.png) - -### Integrating WordPress and GitHub with WordPress GitHub Sync - -Luckily, our COO, [Nate Tomasi][11], spotted a WordPress plugin that promised to integrate the two. - -This was put through its paces on our test website, and it passed with flying colors. It was easy to install, had a straightforward configuration, which just required an OAuth token and webhook with GitHub, and provided two-way integration between WordPress and GitHub. - -It did, however, have a dependency--on Markdown--which proved a little harder to implement. We trialed several Markdown plugins, but each had several quirks that interfered with the rendering of non-Markdown-based content. After several days of frustration, and even an attempt to custom-write a plugin for our needs, we stumbled across [Parsedown Party][12]. There was much partying! With WordPress GitHub Sync and Parsedown Party, we had integrated our two key platforms. - -Now it was time to make our content visually appealing and usable for our user audience. - -### Reading time and feedback - -To implement the reading time and feedback functionality, we built a new [page template for WordPress][13], and leveraged plugins within the page template. 
- -Knowing the estimated reading time of an article in advance has been [proven to increase engagement with content][14] and provides developers and users with the ability to decide whether to read the content now or bookmark it for later. We tested several WordPress plugins for reading time, but settled on [Reading Time WP][15] because it was highly configurable and could be easily embedded into WordPress page templates. Our decision to place Reading Time at the top of the content was designed to give the user the choice of whether to read now or save for later. With Reading Time in place, we then turned our attention to gathering user feedback and ratings for our documentation. - -![](https://opensource.com/sites/default/files/images/life-uploads/screenshot-from-2017-12-08-00-55-31.png) - -There are several rating and feedback plugins available for WordPress. We needed one that could be easily customized for several use cases, and that could aggregate or summarize ratings. After some experimentation, we settled on [Multi Rating Pro][16] because of its wide feature set, especially the ability to create a Review Ratings page in WordPress--i.e., a central page where staff can review ratings without having to be logged in to the WordPress backend. The only gap we ran into here was the ability to set the display order of rating options--but it will likely be added in a future release. - -The WordPress GitHub Integration plugin also gave us the ability to link back to the GitHub repository where the original Markdown content was held, inviting technical developers to contribute to improving our documentation. - -### Updating the existing documentation - -Now that the "container" for our new documentation had been developed, it was time to update the existing content. Because much of our documentation had grown organically over time, there were no style guidelines to shape how keywords and code were styled. 
This was tackled first, so that it could be applied to all content. [You can see our content style guidelines on GitHub.][17] - -As part of the update, we also ran several checks to ensure that the content was technically accurate, augmenting the existing documentation with several images for better readability. - -There were also a couple of additional tools that made creating internal links for documentation pieces easier. First, we installed the [WP Anchor Header][18] plugin. This plugin provided a small but important function: adding `id` attributes to each `<h1>`, `<h2>` (and so on) element. This meant that internal anchors could be automatically generated on the command line from the Markdown content in GitHub using the [markdown-toc][19] library, then simply copied into the WordPress content, where they would automatically link to the `id` attributes generated by WP Anchor Header. - -Next, we imported the updated documentation into WordPress from GitHub, and made sure we had meaningful and easy-to-search slugs, descriptions, and keywords--because what good is excellent documentation if no one can find it?! A final activity was implementing redirects so that people hitting the old documentation would be taken to the new version. - -### What next? - -[Please do take a moment and have a read through our new documentation][20]. We know it isn't perfect--far from it--but we're confident that the mechanisms we've baked into our new documentation infrastructure will make it easier to identify gaps--and resolve them quickly. If you'd like to know more, or have suggestions for our documentation, please reach out to Kathy Reid on [Chat][21] (@kathy-mycroft) or via [email][22]. - -_Reprinted with permission from [Mycroft.ai][23]._ - -### About the author -Kathy Reid - Director of Developer Relations @MycroftAI, President of @linuxaustralia. Kathy Reid has expertise in open source technology management, web development, video conferencing, digital signage, technical communities and documentation. She has worked in a number of technical and leadership roles over the last 20 years, and holds Arts and Science undergraduate degrees...
more about Kathy Reid - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/rocking-docs-mycroft - -作者:[Kathy Reid][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/kathyreid -[1]:https://opensource.com/users/aneta -[2]:https://opensource.com/article/17/10/doc-audits -[3]:/file/382466 -[4]:https://opensource.com/sites/default/files/images/life-uploads/mycroft-documentation-audit.png (mycroft documentation audit) -[5]:https://git-scm.com/ -[6]:https://github.com/MycroftAI -[7]:https://en.wikipedia.org/wiki/Markdown -[8]:https://www.wordpress.org/ -[9]:https://github.com/ -[10]:https://github.com/mycroftai -[11]:http://mycroft.ai/team/ -[12]:https://wordpress.org/plugins/parsedown-party/ -[13]:https://developer.wordpress.org/themes/template-files-section/page-template-files/ -[14]:https://marketingland.com/estimated-reading-times-increase-engagement-79830 -[15]:https://jasonyingling.me/reading-time-wp/ -[16]:https://multiratingpro.com/ -[17]:https://github.com/MycroftAI/docs-rewrite/blob/master/README.md -[18]:https://wordpress.org/plugins/wp-anchor-header/ -[19]:https://github.com/jonschlinkert/markdown-toc -[20]:https://mycroft.ai/documentation -[21]:https://chat.mycroft.ai/ -[22]:mailto:kathy.reid@mycroft.ai -[23]:https://mycroft.ai/blog/improving-mycrofts-documentation/ diff --git a/sources/talk/20180111 The open organization and inner sourcing movements can share knowledge.md b/sources/talk/20180111 The open organization and inner sourcing movements can share knowledge.md deleted file mode 100644 index 272c1b03ae..0000000000 --- a/sources/talk/20180111 The open organization and inner sourcing movements can share knowledge.md +++ /dev/null @@ -1,121 +0,0 @@ -The open organization and inner sourcing movements can share knowledge 
-====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gov_collaborative_risk.png?itok=we8DKHuL) -Image by : opensource.com - -Red Hat is a company with roughly 11,000 employees. The IT department consists of roughly 500 members. Though it makes up just a fraction of the entire organization, the IT department is still sufficiently staffed to have many application service, infrastructure, and operational teams within it. Our purpose is "to enable Red Hatters in all functions to be effective, productive, innovative, and collaborative, so that they feel they can make a difference,"--and, more specifically, to do that by providing technologies and related services in a fashion that is as open as possible. - -Being open like this takes time, attention, and effort. While we always strive to be as open as possible, it can be difficult. For a variety of reasons, we don't always succeed. - -In this story, I'll explain a time when, in the rush to innovate, the Red Hat IT organization lost sight of its open ideals. But I'll also explore how returning to those ideals--and using the collaborative tactics of "inner source"--helped us to recover and greatly improve the way we deliver services. - -### About inner source - -Before I explain how inner source helped our team, let me offer some background on the concept. - -Inner source is the adoption of open source development practices between teams within an organization to promote better and faster delivery without requiring project resources be exposed to the world or openly licensed. It allows an organization to receive many of the benefits of open source development methods within its own walls. - -In this way, inner source aligns well with open organization strategies and principles; it provides a path for open, collaborative development. 
While the open organization defines its principles of openness broadly as transparency, inclusivity, adaptability, collaboration, and community--and covers how to use these open principles for communication, decision making, and many other topics--inner source is about the adoption of specific and tactical practices, processes, and patterns from open source communities to improve delivery. - -For instance, [the Open Organization Maturity Model][1] suggests that in order to be transparent, teams should, at minimum, share all project resources with the project team (though it suggests that it's generally better to share these resources with the entire organization). The common pattern in both inner source and open source development is to host all resources in a publicly available version control system, for source control management, which achieves the open organization goal of high transparency. - -Inner source aligns well with open organization strategies and principles. - -Another example of value alignment appears in the way open source communities accept contributions. In open source communities, source code is transparently available. Community contributions in the form of patches or merge requests are commonly accepted practices (even expected ones). This provides one example of how to meet the open organization's goal of promoting inclusivity and collaboration. - -### The challenge - -Early in 2014, Red Hat IT began its first steps toward making Amazon Web Services (AWS) a standard hosting offering for business critical systems. While teams within Red Hat IT had built several systems and services in AWS by this time, these were bespoke creations, and we desired to make deploying services to IT standards in AWS both simple and standardized. 
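That "simple and standardized" goal — one service definition, stamped out identically across environments — can be sketched in miniature. This is a hypothetical Python illustration of the pattern, not the actual tooling the team built; the template fields and environment names are invented for the example:

```python
# Hypothetical sketch of "infrastructure as code" templating: render one
# shared service definition into per-environment configurations.
SERVICE_TEMPLATE = {
    "name": "{service}-{env}",
    "instance_type": "{size}",
    "monitoring": True,   # an org-wide operational standard, applied everywhere
}

ENVIRONMENTS = {"dev": "t2.small", "stage": "t2.large", "prod": "m4.xlarge"}

def render(service: str) -> dict:
    """Stamp out one copy of the service definition per environment."""
    return {
        env: {k: (v.format(service=service, env=env, size=size)
                  if isinstance(v, str) else v)
              for k, v in SERVICE_TEMPLATE.items()}
        for env, size in ENVIRONMENTS.items()
    }

configs = render("billing")
print(configs["prod"]["name"])   # billing-prod
```

The point of the pattern is that every environment inherits the operational standards (here, `monitoring`) automatically, rather than depending on someone remembering to configure them by hand.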
- -In order to make AWS cloud hosting meet our operational standards (while being scalable), the Cloud Enablement team within Red Hat IT decided that all infrastructure in AWS would be configured through code, rather than manually, and that everyone would use a standard set of tools. The Cloud Enablement team designed and built these standard tools; a separate group, the Platform Operations team, was responsible for provisioning and hosting systems and services in AWS using the tools. - -The Cloud Enablement team built a toolset, obtusely named "Template Util," based on AWS Cloud Formations configurations wrapped in a management layer to enforce certain configuration requirements and make stamping out multiple copies of services across environments easier. While the Template Util toolset technically met all our initial requirements, and we eventually provisioned the infrastructure for more than a dozen services with it, engineers in every team working with the tool found using it to be painful. Michael Johnson, one engineer using the tool, said "It made doing something relatively straightforward really complicated." - -Among the issues Template Util exhibited were: - - * Underlying cloud formations technologies implied constraints on application stack management at odds with how we managed our application systems. - * The tooling was needlessly complex and brittle in places, using multiple layered templating technologies and languages making syntax issues hard to debug. - * The code for the tool--and some of the data users needed to manipulate the tool--were kept in a repository that was difficult for most users to access. - * There was no standard process to contributing or accepting changes. - * The documentation was poor. - - - -As more engineers attempted to use the Template Util toolset, they found even more issues and limitations with the tools. Unhappiness continued to grow. 
To make matters worse, the Cloud Enablement team then shifted priorities to other deliverables without relinquishing ownership of the tool, so bug fixes and improvements to the tools were further delayed. - -The real, core issues here were our inability to build an inclusive community to collaboratively build shared tooling that met everyone's needs. Fear of losing "ownership," fear of changing requirements, and fear of seeing hard work abandoned all contributed to chronic conflict, which in turn led to poorer outcomes. - -### Crisis point - -By September 2015, more than a year after launching our first major service in AWS with the Template Util tool, we hit a crisis point. - -Many engineers refused to use the tools. That forced all of the related service provisioning work on a small set of engineers, further fracturing the community and disrupting service delivery roadmaps as these engineers struggled to deal with unexpected work. We called an emergency meeting and invited all the teams involved to find a solution. - -During the emergency meeting, we found that people generally thought we needed immediate change and should start the tooling effort over, but even the decision to start over wasn't unanimous. Many solutions emerged--sometimes multiple solutions from within a single team--all of which would require significant work to implement. While we couldn't reach a consensus on which solution to use during this meeting, we did reach an agreement to give proponents of different technologies two weeks to work together, across teams, to build their case with a prototype, which the community could then review. - -While we didn't reach a final and definitive decision, this agreement was the first point where we started to return to the open source ideals that guide our mission. By inviting all involved parties, we were able to be transparent and inclusive, and we could begin rebuilding our internal community. 
By making clear that we wanted to improve things and were open to new options, we showed our commitment to adaptability and meritocracy. Most importantly, the plan for building prototypes gave people a clear return path to collaboration. - -When the community reviewed the prototypes, it determined that the clear leader was an Ansible-based toolset that would eventually become known, internally, as Ansicloud. (At the time, no one involved with this work had any idea that Red Hat would acquire Ansible the following month. It should also be noted that other teams within Red Hat have found tools based on Cloud Formation extremely useful, even when our specific Template Util tool did not find success.) - -This prototyping and testing phase didn't fix things overnight, though. While we had consensus on the general direction we needed to head, we still needed to improve the new prototype to the point at which engineers could use it reliably for production services. - -So over the next several months, a handful of engineers worked to further build and extend the Ansicloud toolset. We built three new production services. While we were sharing code, that sharing activity occurred at a low level of maturity. Some engineers had trouble getting access due to older processes. Other engineers headed in slightly different directions, with each engineer having to rediscover some of the core design issues themselves. - -### Returning to openness - -This led to a turning point: Building on top of the previous agreement, we focused on developing a unified vision and providing easier access. To do this, we: - - 1. created a list of specific goals for the project (both "must-haves" and "nice-to-haves"), - 2. created an open issue log for the project to avoid solving the same problem repeatedly, - 3. opened our code base so anyone in Red Hat could read or clone it, and - 4.
made it easy for engineers to get trusted committer access. - - - -Our agreement to collaborate, our finally unified vision, and our improved tool development methods spurred the growth of our community. Ansicloud adoption spread throughout the involved organizations, but this led to a new problem: The tool started changing more quickly than users could adapt to it, and improvements that different groups submitted were beginning to affect other groups in unanticipated ways. - -These issues resulted in our recent turn to inner source practices. While every open source project operates differently, we focused on adopting some best practices that seemed common to many of them. In particular: - - * We identified the business owner of the project and the core-contributor group of developers who would govern the development of the tools and decide what contributions to accept. While we want to keep things open, we can't have people working against each other or breaking each other's functionality. - * We developed a project README clarifying the purpose of the tool and specifying how to use it. We also created a CONTRIBUTING document explaining how to contribute, what sort of contributions would be useful, and what sort of tests a contribution would need to pass to be accepted. - * We began building continuous integration and testing services for the Ansicloud tool itself. This helped us ensure we could quickly and efficiently validate contributions technically, before the project accepted and merged them. - - - -With these basic agreements, documents, and tools available, we were back onto the path of open collaboration and successful inner sourcing. - -### Why it matters - -Why does inner source matter? - -From a developer community point of view, shifting from a traditional siloed development model to the inner source model has produced significant, quantifiable improvements: - - * Contributions to our tooling have grown 72% per week (by number of commits).
- * The percentage of contributions from non-core committers has grown from 27% to 78%; the users of the toolset are driving its development. - * The contributor list has grown by 15%, primarily from new users of the toolset, rather than core committers, increasing our internal community. - - - -And the tools we've delivered through this project have allowed us to see dramatic improvements in our business outcomes. Using the Ansicloud tools, 54 new multi-environment application service deployments were created in 385 days (compared to 20 services in 1,013 days with the Template Util tools). We've gone from one new service deployment in a 50-day period to one every week--a seven-fold increase in the velocity of our delivery. - -What really matters here is that the improvements we saw were not aberrations. Inner source provides common, easily understood patterns that organizations can adopt to effectively promote collaboration (not to mention other open organization principles). By mirroring open source production practices, inner source can also mirror the benefits of open source code, which have been seen time and time again: higher quality code, faster development, and more engaged communities. - -This article is part of the [Open Organization Workbook project][2]. - -### About the author -Tom Benninger - Tom Benninger is a Solutions Architect, Systems Engineer, and continual tinkerer at Red Hat, Inc. Having worked with startups, small businesses, and larger enterprises, he has experience within a broad set of IT disciplines. His current area of focus is improving Application Lifecycle Management in the enterprise. He has a particular interest in how open source, inner source, and collaboration can help support modern application development practices and the adoption of DevOps, CI/CD, Agile,...
- --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/18/1/open-orgs-and-inner-source-it - -作者:[Tom Benninger][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/tomben -[1]:https://opensource.com/open-organization/resources/open-org-maturity-model -[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement diff --git a/sources/talk/20180112 in which the cost of structured data is reduced.md b/sources/talk/20180112 in which the cost of structured data is reduced.md deleted file mode 100644 index 992ad57a39..0000000000 --- a/sources/talk/20180112 in which the cost of structured data is reduced.md +++ /dev/null @@ -1,181 +0,0 @@ -in which the cost of structured data is reduced -====== -Last year I got the wonderful opportunity to attend [RacketCon][1] as it was hosted only 30 minutes away from my home. The two-day conference had a number of great talks on the first day, but what really impressed me was the fact that the entire second day was spent focusing on contribution. The day started out with a few 15- to 20-minute talks about how to contribute to a specific codebase (including that of Racket itself), and after that people just split off into groups focused around specific codebases. Each table had maintainers helping guide other folks towards how to work with the codebase and construct effective patch submissions. - -![lensmen chronicles][2] - -I came away from the conference with a great sense of appreciation for how friendly and welcoming the Racket community is, and how great Racket is as a swiss-army-knife type tool for quick tasks. (Not that it's unsuitable for large projects, but I don't have the opportunity to start any new large projects very frequently.) 
- -The other day I wanted to generate colored maps of the world by categorizing countries interactively, and Racket seemed like it would fit the bill nicely. The job is simple: show an image of the world with one country selected; when a key is pressed, categorize that country, then show the map again with all categorized countries colored, and continue with the next country selected. - -### GUIs and XML - -I have yet to see a language/framework more accessible and straightforward out of the box for drawing1. Here's the entry point which sets up state and then constructs a canvas that handles key input and display: -``` -(define (main path) - (let ([frame (new frame% [label "World color"])] - [categorizations (box '())] - [doc (call-with-input-file path read-xml/document)]) - (new (class canvas% - (define/override (on-char event) - (handle-key this categorizations (send event get-key-code))) - (super-new)) - [parent frame] - [paint-callback (draw doc categorizations)]) - (send frame show #t))) - -``` - -While the class system is not one of my favorite things about Racket (most newer code seems to avoid it in favor of [generic interfaces][3] in the rare case that polymorphism is truly called for), the fact that classes can be constructed in a light-weight, anonymous way makes it much less onerous than it could be. This code sets up all mutable state in a [`box`][4] which you use in the way you'd use a `ref` in ML or Clojure: a mutable wrapper around an immutable data structure. - -The world map I'm using is [an SVG of the Robinson projection][5] from Wikipedia. If you look closely there's a call to bind `doc` that calls [`call-with-input-file`][6] with [`read-xml/document`][7] which loads up the whole map file's SVG; just about as easily as you could ask for. - -The data you get back from `read-xml/document` is in fact a [document][8] struct, which contains an `element` struct containing `attribute` structs and lists of more `element` structs. 
All very sensible, but maybe not what you would expect in other dynamic languages like Clojure or Lua where free-form maps reign supreme. Racket really wants structure to be known up-front when possible, which is one of the things that help it produce helpful error messages when things go wrong. - -Here's how we handle keyboard input; we're displaying a map with one country highlighted, and `key` here tells us what the user pressed to categorize the highlighted country. If that key is in the `categories` hash then we put it into `categorizations`. -``` -(define categories #hash((select . "eeeeff") - (#\1 . "993322") - (#\2 . "229911") - (#\3 . "ABCD31") - (#\4 . "91FF55") - (#\5 . "2439DF"))) - -(define (handle-key canvas categorizations key) - (cond [(equal? #\backspace key) (swap! categorizations cdr)] - [(member key (dict-keys categories)) (swap! categorizations (curry cons key))] - [(equal? #\space key) (display (unbox categorizations))]) - (send canvas refresh)) - -``` - -### Nested updates: the bad parts - -Finally once we have a list of categorizations, we need to apply it to the map document and display. 
We apply a [`fold`][9] reduction over the XML document struct and the list of country categorizations (plus `'select` for the country that's selected to be categorized next) to get back a "modified" document struct where the proper elements have the style attributes applied for the given categorization, then we turn it into an image and hand it to [`draw-pict`][10]: -``` - -(define (update original-doc categorizations) - (for/fold ([doc original-doc]) - ([category (cons 'select (unbox categorizations))] - [n (in-range (length (unbox categorizations)) 0 -1)]) - (set-style doc n (style-for category)))) - -(define ((draw doc categorizations) _ context) - (let* ([newdoc (update doc categorizations)] - [xml (call-with-output-string (curry write-xml newdoc))]) - (draw-pict (call-with-input-string xml svg-port->pict) context 0 0))) - -``` - -The problem is in that pesky `set-style` function. All it has to do is reach deep down into the `document` struct to find the `n`th `path` element (the one associated with a given country), and change its `'style` attribute. It ought to be a simple task. Unfortunately this function ends up being anything but simple: -``` - -(define (set-style doc n new-style) - (let* ([root (document-element doc)] - [g (list-ref (element-content root) 8)] - [paths (element-content g)] - [path (first (drop (filter element? paths) n))] - [path-num (list-index (curry eq? path) paths)] - [style-index (list-index (lambda (x) (eq? 
'style (attribute-name x))) - (element-attributes path))] - [attr (list-ref (element-attributes path) style-index)] - [new-attr (make-attribute (source-start attr) - (source-stop attr) - (attribute-name attr) - new-style)] - [new-path (make-element (source-start path) - (source-stop path) - (element-name path) - (list-set (element-attributes path) - style-index new-attr) - (element-content path))] - [new-g (make-element (source-start g) - (source-stop g) - (element-name g) - (element-attributes g) - (list-set paths path-num new-path))] - [root-contents (list-set (element-content root) 8 new-g)]) - (make-document (document-prolog doc) - (make-element (source-start root) - (source-stop root) - (element-name root) - (element-attributes root) - root-contents) - (document-misc doc)))) - -``` - -The reason for this is that while structs are immutable, they don't support functional updates. Whenever you're working with immutable data structures, you want to be able to say "give me a new version of this data, but with field `x` replaced by the value of `(f (lookup x))`". Racket can [do this with dictionaries][11] but not with structs2. If you want a modified version you have to create a fresh one3. - -### Lenses to the rescue? - -![first lensman][12] - -When I brought this up in the `#racket` channel on Freenode, I was helpfully pointed to the 3rd-party [Lens][13] library. Lenses are a general-purpose way of composing arbitrarily nested lookups and updates. Unfortunately at this time there's [a flaw][14] preventing them from working with `xml` structs, so it seemed I was out of luck. - -But then I was pointed to [X-expressions][15] as an alternative to structs. The [`xml->xexpr`][16] function turns the structs into a deeply-nested list tree with symbols and strings in it. The tag is the first item in the list, followed by an associative list of attributes, then the element's children. 
While this gives you fewer up-front guarantees about the structure of the data, it does work around the lens issue. - -For this to work, we need to compose a new lens based on the "path" we want to use to drill down into the `n`th country and its `style` attribute. The [`lens-compose`][17] function lets us do that. Note that the order here might be backwards from what you'd expect; it works deepest-first (the way [`compose`][18] works for functions). Also note that defining one lens gives us the ability to both get nested values (with [`lens-view`][19]) and update them. -``` -(define (style-lens n) - (lens-compose (dict-ref-lens 'style) - second-lens - (list-ref-lens (add1 (* n 2))) - (list-ref-lens 10))) -``` - -Our `<path>` XML elements are under the 10th item of the root xexpr (hence the [`list-ref-lens`][20] with 10) and they are interspersed with whitespace, so we have to double `n` to find the `<path>` we want. The [`second-lens`][21] call gets us to that element's attribute alist, and [`dict-ref-lens`][22] lets us zoom in on the `'style` key out of that alist. - -Once we have our lens, it's just a matter of replacing `set-style` with a call to [`lens-set`][23] in our `update` function we had above, and then we're off: -``` -(define (update doc categorizations) - (for/fold ([d doc]) - ([category (cons 'select (unbox categorizations))] - [n (in-range (length (unbox categorizations)) 0 -1)]) - (lens-set (style-lens n) d (list (style-for category))))) -``` - -![second stage lensman][24] - -Often times the trade-off between freeform maps/hashes vs structured data feels like one of convenience vs long-term maintainability. While it's unfortunate that they can't be used with the `xml` structs4, lenses provide a way to get the best of both worlds, at least in some situations. - -The final version of the code clocks in at 51 lines and is available [on GitLab][25].
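The lens idea itself isn't Racket-specific. A minimal sketch of the same pattern in Python — a lens as a (get, set) pair over immutable data, plus composition — might look like this. It's illustrative only, not a port of the Racket library, and note that this `compose` runs outermost-first, the opposite of Racket's deepest-first `lens-compose`:

```python
from typing import Any, Callable, NamedTuple

class Lens(NamedTuple):
    get: Callable[[Any], Any]
    set: Callable[[Any, Any], Any]   # returns a new structure; never mutates

def compose(outer: "Lens", inner: "Lens") -> "Lens":
    """Focus deeper: apply `outer` first, then `inner` inside its result."""
    return Lens(
        get=lambda d: inner.get(outer.get(d)),
        set=lambda d, v: outer.set(d, inner.set(outer.get(d), v)),
    )

def index(i: int) -> Lens:
    """Lens onto the i-th element of a tuple, rebuilding the tuple on set."""
    return Lens(
        get=lambda t: t[i],
        set=lambda t, v: t[:i] + (v,) + t[i + 1:],
    )

# A nested immutable structure, and a composed lens into it:
doc = ("g", ("path", ("style", "fill:#eee")))
style = compose(compose(index(1), index(1)), index(1))

print(style.get(doc))                  # fill:#eee
print(style.set(doc, "fill:#993322"))  # ("g", ("path", ("style", "fill:#993322")))
```

One lens gives you both the nested read and the functional update, which is exactly what made the hand-written `set-style` collapse from dozens of lines to one call.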
- -๛ - --------------------------------------------------------------------------------- - -via: https://technomancy.us/185 - -作者:[Phil Hagelberg][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://technomancy.us/ -[1]:https://con.racket-lang.org/ -[2]:https://technomancy.us/i/chronicles-of-lensmen.jpg -[3]:https://docs.racket-lang.org/reference/struct-generics.html -[4]:https://docs.racket-lang.org/reference/boxes.html?q=box#%28def._%28%28quote._~23~25kernel%29._box%29%29 -[5]:https://commons.wikimedia.org/wiki/File:BlankMap-World_gray.svg -[6]:https://docs.racket-lang.org/reference/port-lib.html#(def._((lib._racket%2Fport..rkt)._call-with-input-string)) -[7]:https://docs.racket-lang.org/xml/index.html?q=read-xml#%28def._%28%28lib._xml%2Fmain..rkt%29._read-xml%2Fdocument%29%29 -[8]:https://docs.racket-lang.org/xml/#%28def._%28%28lib._xml%2Fmain..rkt%29._document%29%29 -[9]:https://docs.racket-lang.org/reference/for.html?q=for%2Ffold#%28form._%28%28lib._racket%2Fprivate%2Fbase..rkt%29._for%2Ffold%29%29 -[10]:https://docs.racket-lang.org/pict/Rendering.html?q=draw-pict#%28def._%28%28lib._pict%2Fmain..rkt%29._draw-pict%29%29 -[11]:https://docs.racket-lang.org/reference/dicts.html?q=dict-update#%28def._%28%28lib._racket%2Fdict..rkt%29._dict-update%29%29 -[12]:https://technomancy.us/i/first-lensman.jpg -[13]:https://docs.racket-lang.org/lens/lens-guide.html -[14]:https://github.com/jackfirth/lens/issues/290 -[15]:https://docs.racket-lang.org/pollen/second-tutorial.html?q=xexpr#%28part._.X-expressions%29 -[16]:https://docs.racket-lang.org/xml/index.html?q=xexpr#%28def._%28%28lib._xml%2Fmain..rkt%29._xml-~3exexpr%29%29 -[17]:https://docs.racket-lang.org/lens/lens-reference.html#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-compose%29%29 
-[18]:https://docs.racket-lang.org/reference/procedures.html#%28def._%28%28lib._racket%2Fprivate%2Flist..rkt%29._compose%29%29 -[19]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-view%29%29 -[20]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._list-ref-lens%29%29 -[21]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._second-lens%29%29 -[22]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Fdict..rkt%29._dict-ref-lens%29%29 -[23]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-set%29%29 -[24]:https://technomancy.us/i/second-stage-lensman.jpg -[25]:https://gitlab.com/technomancy/world-color/blob/master/world-color.rkt diff --git a/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md b/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md deleted file mode 100644 index 35c89150c8..0000000000 --- a/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md +++ /dev/null @@ -1,87 +0,0 @@ -Security Chaos Engineering: A new paradigm for cybersecurity -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_bank_vault_secure_safe.png?itok=YoW93h7C) - -Security is always changing and failure always exists. - -This toxic scenario requires a fresh perspective on how we think about operational security. We must understand that we are often the primary cause of our own security flaws. The industry typically looks at cybersecurity and failure in isolation or as separate matters. 
We believe that our lack of insight and operational intelligence into our own security control failures is one of the most common causes of security incidents and, subsequently, data breaches. - -> "Fall seven times, stand up eight." --Japanese proverb - -The simple fact is that "to err is human," and humans derive their success as a direct result of the failures they encounter. Their rate of failure, how they fail, and their ability to understand that they failed in the first place are important building blocks to success. Our ability to learn through failure is inherent in the systems we build, the way we operate them, and the security we use to protect them. Yet there has been a lack of focus when it comes to how we approach preventative security measures, and the spotlight has trended toward the evolving attack landscape and the need to buy or build new solutions. - -### Security spending is continually rising and so are security incidents - -We spend billions on new information security technologies; however, we rarely take a proactive look at whether those security investments perform as expected. This has resulted in a continual increase in security spending on new solutions to keep up with the evolving attacks. - -Despite spending more on security, data breaches are continuously getting bigger and more frequent across all industries. We have marched so fast down this path of the "get-ahead-of-the-attacker" strategy that we haven't considered that we may be a primary cause of our own demise. How is it that we are building more and more security measures, but the problem seems to be getting worse? Furthermore, many of the notable data breaches over the past year were not the result of an advanced nation-state or spy-vs.-spy malicious advanced persistent threats (APTs); rather, the principal causes of those events were incomplete implementation, misconfiguration, design flaws, and lack of oversight.
- -The 2017 Ponemon Cost of a Data Breach Study breaks down the [root causes of data breaches][1] into three areas: malicious or criminal attacks, human factors or errors, and system glitches, including both IT and business-process failure. Of the three categories, malicious or criminal attacks comprises the largest distribution (47%), followed by human error (28%), and system glitches (25%). Cybersecurity vendors have historically focused on malicious root causes of data breaches, as it is the largest sole cause, but together human error and system glitches total 53%, a larger share of the overall problem. - -What is not often understood, whether due to lack of insight, reporting, or analysis, is that malicious or criminal attacks are often successful due to human error and system glitches. Both human error and system glitches are, at their root, primary markers of the existence of failure. Whether it's IT system failures, failures in process, or failures resulting from humans, it begs the question: "Should we be focusing on finding a method to identify, understand, and address our failures?" After all, it can be an arduous task to predict the next malicious attack, which often requires investment of time to sift threat intelligence, dig through forensic data, or churn threat feeds full of unknown factors and undetermined motives. Failure instrumentation, identification, and remediation are mostly comprised of things that we know, have the ability to test, and can measure. - -Failures we can analyze consist not only of IT, business, and general human factors but also the way we design, build, implement, configure, operate, observe, and manage security controls. People are the ones designing, building, monitoring, and managing the security controls we put in place to defend against malicious attackers. How often do we proactively instrument what we designed, built, and are operationally managing to determine if the controls are failing? 
Most organizations do not discover that their security controls were failing until a security incident results from that failure. The worst time to find out your security investment failed is during a security incident at 3 a.m. - -> Security incidents are not detective measures and hope is not a strategy when it comes to operating effective security controls. - -We hypothesize that a large portion of data breaches are caused not by sophisticated nation-state actors or hacktivists, but rather simple things rooted in human error and system glitches. Failure in security controls can arise from poor control placement, technical misconfiguration, gaps in coverage, inadequate testing practices, human error, and numerous other things. - -### The journey into Security Chaos Testing - -Our venture into this new territory of Security Chaos Testing has shifted our thinking about the root cause of many of our notable security incidents and data breaches. - -We were brought together by [Bruce Wong][2], who now works at Stitch Fix with Charles, one of the authors of this article. Prior to Stitch Fix, Bruce was a founder of the Chaos Engineering and System Reliability Engineering (SRE) practices at Netflix, the company commonly credited with establishing the field. Bruce learned about this article's other author, Aaron, through the open source [ChaoSlingr][3] Security Chaos Testing tool project, on which Aaron was a contributor. Aaron was interested in Bruce's perspective on the idea of applying Chaos Engineering to cybersecurity, which led Bruce to connect us to share what we had been working on. As security practitioners, we were both intrigued by the idea of Chaos Engineering and had each begun thinking about how this new method of instrumentation might have a role in cybersecurity. - -Within a short timeframe, we began finishing each other's thoughts around testing and validating security capabilities, which we collectively call "Security Chaos Engineering." 
We directly challenged many of the concepts we had come to depend on in our careers, such as compensating security controls, defense-in-depth, and how to design preventative security. Quickly we realized that we needed to challenge the status quo "set-it-and-forget-it" model and instead execute on continuous instrumentation and validation of security capabilities. - -Businesses often don't fully understand whether their security capabilities and controls are operating as expected until they are not. We had both struggled throughout our careers to provide measurements on security controls that go beyond simple uptime metrics. Our journey has shown us there is a need for a more pragmatic approach that emphasizes proactive instrumentation and experimentation over blind faith. - -### Defining new terms - -In the security industry, we have a habit of not explaining terms and assuming we are speaking the same language. To correct that, here are a few key terms in this new approach: - - * **(Security) Chaos Experiments** are foundationally rooted in the scientific method, in that they seek not to validate what is already known to be true or already known to be false, rather they are focused on deriving new insights about the current state. - * **Security Chaos Engineering** is the discipline of instrumentation, identification, and remediation of failure within security controls through proactive experimentation to build confidence in the system's ability to defend against malicious conditions in production. - - - -### Security and distributed systems - -Consider the evolving nature of modern application design where systems are becoming more and more distributed, ephemeral, and immutable in how they operate. In this shifting paradigm, it is becoming difficult to comprehend the operational state and health of our systems' security. 
Moreover, how are we ensuring that it remains effective and vigilant as the surrounding environment is changing its parameters, components, and methodologies? - -What does it mean to be effective in terms of security controls? After all, a single security capability could easily be implemented in a wide variety of diverse scenarios in which failure may arise from many possible sources. For example, a standard firewall technology may be implemented, placed, managed, and configured differently depending on complexities in the business, web, and data logic. - -It is imperative that we not operate our business products and services on the assumption that something works. We must constantly, consistently, and proactively instrument our security controls to ensure they cut the mustard when it matters. This is why Security Chaos Testing is so important. Security Chaos Engineering provides a methodology for experimenting on the security of distributed systems in order to build confidence in their ability to withstand malicious conditions. - -In Security Chaos Engineering: - - * Security capabilities must be instrumented end to end. - * Security must be continuously instrumented to build confidence in the system's ability to withstand malicious conditions. - * Readiness of a system's security defenses must be proactively assessed to ensure they are battle-ready and operating as intended. - * The security capability toolchain must be instrumented from end to end to drive new insights into not only the effectiveness of the functionality within the toolchain but also to discover where added value and improvement can be injected. - * Practiced instrumentation seeks to identify, detect, and remediate failures in security controls. - * The focus is on vulnerability and failure identification, not failure management. - * The operational effectiveness of incident management is sharpened.
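The principles above boil down to a simple loop: establish a steady state, inject a controlled failure, and verify that the security control catches it. The following toy sketch is purely hypothetical (it is invented for illustration and is not taken from ChaoSlingr or any real tool); it simulates a firewall audit control and checks that it detects a deliberately injected misconfiguration:

```python
# A toy security chaos experiment: deliberately inject a misconfiguration
# into a (simulated) firewall and verify that monitoring detects it.
# All names here are hypothetical illustrations, not part of ChaoSlingr.

EXPECTED_OPEN_PORTS = {443}  # the only port policy allows

def audit(open_ports):
    """The 'detection' control: report ports open in violation of policy."""
    return set(open_ports) - EXPECTED_OPEN_PORTS

def run_experiment():
    # Steady state: configuration matches policy, audit stays quiet.
    steady = [443]
    assert audit(steady) == set(), "false alarm in steady state"

    # Inject failure: a change accidentally opens port 22.
    perturbed = [443, 22]
    detected = audit(perturbed)

    # The experiment passes only if the control actually caught the drift.
    return detected == {22}

print(run_experiment())  # True means detection fired as intended
```

The same loop of steady state, injected failure, and verified detection is what continuous instrumentation of real controls looks like in practice.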
- - - -As Henry Ford said, "Failure is only the opportunity to begin again, this time more intelligently." Security Chaos Engineering and Security Chaos Testing give us that opportunity. - -Would you like to learn more? Join the discussion by following [@aaronrinehart][4] and [@charles_nwatu][5] on Twitter. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/new-paradigm-cybersecurity - -作者:[Aaron Rinehart][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/aaronrinehart -[1]:https://www.ibm.com/security/data-breach -[2]:https://twitter.com/bruce_m_wong?lang=en -[3]:https://github.com/Optum/ChaoSlingr -[4]:https://twitter.com/aaronrinehart -[5]:https://twitter.com/charles_nwatu diff --git a/sources/talk/20180131 How to write a really great resume that actually gets you hired.md b/sources/talk/20180131 How to write a really great resume that actually gets you hired.md deleted file mode 100644 index b54b3944ae..0000000000 --- a/sources/talk/20180131 How to write a really great resume that actually gets you hired.md +++ /dev/null @@ -1,395 +0,0 @@ -How to write a really great resume that actually gets you hired -============================================================ - - -![](https://cdn-images-1.medium.com/max/2000/1*k7HRLZAsuINP9vIs2BIh1g.png) - -This is a data-driven guide to writing a resume that actually gets you hired. I’ve spent the past four years analyzing which resume advice works regardless of experience, role, or industry. The tactics laid out below are the result of what I’ve learned. They helped me land offers at Google, Microsoft, and Twitter and have helped my students systematically land jobs at Amazon, Apple, Google, Microsoft, Facebook, and more. - -### Writing Resumes Sucks. - -It’s a vicious cycle. 
- -We start by sifting through dozens of articles by career “gurus,” forced to compare conflicting advice and make our own decisions on what to follow. - -The first article says “one page MAX” while the second says “take two or three and include all of your experience.” - -The next says “write a quick summary highlighting your personality and experience” while another says “summaries are a waste of space.” - -You scrape together your best effort and hit “Submit,” sending your resume into the ether. When you don’t hear back, you wonder what went wrong: - - _“Was it the single page or the lack of a summary? Honestly, who gives a s**t at this point. I’m sick of sending out 10 resumes every day and hearing nothing but crickets.”_ - - -![](https://cdn-images-1.medium.com/max/1000/1*_zQqAjBhB1R4fz55InrrIw.jpeg) -How it feels to try and get your resume read in today’s world. - -Writing resumes sucks but it’s not your fault. - -The real reason it’s so tough to write a resume is because most of the advice out there hasn’t been proven against the actual end goal of getting a job. If you don’t know what consistently works, you can’t lay out a system to get there. - -It’s easy to say “one page works best” when you’ve seen it happen a few times. But how does it hold up when we look at 100 resumes across different industries, experience levels, and job titles? - -That’s what this article aims to answer. - -Over the past four years, I’ve personally applied to hundreds of companies and coached hundreds of people through the job search process. This has given me a huge opportunity to measure, analyze, and test the effectiveness of different resume strategies at scale. 
This article is going to walk through everything I’ve learned about resumes over the past 4 years, including: - -* Mistakes that more than 95% of people make, causing their resumes to get tossed immediately - -* Three things that consistently appear in the resumes of highly effective job searchers (who go on to land jobs at the world’s best companies) - -* A quick hack that will help you stand out from the competition and instantly build relationships with whoever is reading your resume (increasing your chances of hearing back and getting hired) - -* The exact resume template that got me interviews and offers at Google, Microsoft, Twitter, Uber, and more - -Before we get to the unconventional strategies that will help set you apart, we need to make sure our foundational bases are covered. That starts with understanding the mistakes most job seekers make so we can make our resume bulletproof. - -### Resume Mistakes That 95% Of People Make - -Most resumes that come through an online portal or across a recruiter’s desk are tossed out because they violate a simple rule. - -When recruiters scan a resume, the first thing they look for is mistakes. Your resume could be fantastic, but if you violate a rule like using an unprofessional email address or improper grammar, it’s going to get tossed out. - -Our goal is to fully understand the triggers that cause recruiters/ATS systems to make the snap decisions on who stays and who goes. - -In order to get inside the heads of these decision makers, I collected data from dozens of recruiters and hiring managers across industries. These people have several hundred combined years of hiring experience under their belts and they’ve reviewed 100,000+ resumes across industries.
- -They broke down the five most common mistakes that cause them to cut resumes from the pile: - - -![](https://cdn-images-1.medium.com/max/1000/1*5Zbr3HFeKSjvPGZdq_LCKA.png) - -### The Five Most Common Resume Mistakes (According To Recruiters & Hiring Managers) - -Issue #1: Sloppiness (typos, spelling errors, & grammatical mistakes). Close to 60% of resumes have some sort of typo or grammatical issue. - -Solution: Have your resume reviewed by three separate sources — spell checking software, a friend, and a professional. Spell check should be covered if you’re using Microsoft Word or Google Docs to create your resume. - -A friend or family member can cover the second base, but make sure you trust them with reviewing the whole thing. You can always include an obvious mistake to see if they catch it. - -Finally, you can hire a professional editor on [Upwork][1]. It shouldn’t take them more than 15–20 minutes to review so it’s worth paying a bit more for someone with high ratings and lots of hours logged. - -Issue #2: Summaries are too long and formal. Many resumes include summaries that consist of paragraphs explaining why they are a “driven, results oriented team player.” When hiring managers see a block of text at the top of the resume, you can bet they aren’t going to read the whole thing. If they do give it a shot and read something similar to the sentence above, they’re going to give up on the spot. - -Solution: Summaries are highly effective, but they should be in bullet form and showcase your most relevant experience for the role. For example, if I’m applying for a new business sales role my first bullet might read “Responsible for driving $11M of new business in 2018, achieved 168% attainment (#1 on my team).” - -Issue #3: Too many buzz words. Remember our driven team player from the last paragraph? Phrasing like that makes hiring managers cringe because your attempt to stand out actually makes you sound like everyone else. 
Solution: Instead of using buzzwords, write naturally, use bullets, and include quantitative results whenever possible. Would you rather hire a salesperson who “is responsible for driving new business across the healthcare vertical to help companies achieve their goals” or “drove $15M of new business last quarter, including the largest deal in company history”? Skip the buzzwords and focus on results. - -Issue #4: Having a resume that is more than one page. The average employer spends six seconds reviewing your resume — if it’s more than one page, it probably isn’t going to be read. When asked, recruiters from Google and Barclays both said multiple-page resumes “are the bane of their existence.” - -Solution: Increase your margins, decrease your font size, and cut down your experience to highlight the most relevant pieces for the role. It may seem impossible but it’s worth the effort. When you’re dealing with recruiters who see hundreds of resumes every day, you want to make their lives as easy as possible. - -### More Common Mistakes & Facts (Backed By Industry Research) - -In addition to personal feedback, I combed through dozens of recruitment survey results to fill any gaps my contacts might have missed. Here are a few more items you may want to consider when writing your resume: - -* The average interviewer spends 6 seconds scanning your resume - -* The majority of interviewers have not looked at your resume until you walk into the room - -* 76% of resumes are discarded for an unprofessional email address - -* Resumes with a photo have an 88% rejection rate - -* 58% of resumes have typos - -* Applicant tracking software typically eliminates 75% of resumes due to a lack of keywords and phrases being present - -Now that you know every mistake you need to avoid, the first item on your to-do list is to comb through your current resume and make sure it doesn’t violate anything mentioned above.
Once you have a clean resume, you can start to focus on more advanced tactics that will really make you stand out. There are a few unique elements you can use to push your application over the edge and finally get your dream company to notice you. - - -![](https://cdn-images-1.medium.com/max/1000/1*KthhefFO33-8tm0kBEPbig.jpeg) - -### The 3 Elements Of A Resume That Will Get You Hired - -My analysis showed that highly effective resumes typically include three specific elements: quantitative results, a simple design, and a quirky interests section. This section breaks down all three elements and shows you how to maximize their impact. - -### Quantitative Results - -Most resumes lack them. - -Which is a shame because my data shows that they make the biggest difference between resumes that land interviews and resumes that end up in the trash. - -Here’s an example from a recent resume that was emailed to me: - -> Experience - -> + Identified gaps in policies and processes and made recommendations for solutions at the department and institution level - -> + Streamlined processes to increase efficiency and enhance quality - -> + Directly supervised three managers and indirectly managed up to 15 staff on multiple projects - -> + Oversaw execution of in-house advertising strategy - -> + Implemented comprehensive social media plan - -As an employer, that tells me absolutely nothing about what to expect if I hire this person. - -They executed an in-house marketing strategy. Did it work? How did they measure it? What was the ROI? - -They also identified gaps in processes and recommended solutions. What was the result? Did they save time and operating expenses? Did it streamline a process resulting in more output? - -Finally, they managed a team of three supervisors and 15 staffers. How did that team do? Was it better than the other teams at the company? What results did they get and how did those improve under this person’s management? - -See what I’m getting at here?
These types of bullets talk about daily activities, but companies don’t care about what you do every day. They care about results. By including measurable metrics and achievements in your resume, you’re showcasing the value that the employer can expect to get if they hire you. - -Let’s take a look at revised versions of those same bullets: - -> Experience - -> + Managed a team of 20 that consistently outperformed other departments in lead generation, deal size, and overall satisfaction (based on our culture survey) - -> + Executed in-house marketing strategy that resulted in a 15% increase in monthly leads along with a 5% drop in the cost per lead - -> + Implemented targeted social media campaign across Instagram & Pinterest, which drove an additional 50,000 monthly website visits and generated 750 qualified leads in 3 months - -If you were in the hiring manager’s shoes, which resume would you choose? - -That’s the power of including quantitative results. - -### Simple, Aesthetic Design That Hooks The Reader - -These days, it’s easy to get carried away with our mission to “stand out.” I’ve seen resume overhauls from graphic designers, video resumes, and even resumes [hidden in a box of donuts.][2] - -While those can work in very specific situations, we want to aim for a strategy that consistently gets results. The format I saw the most success with was a black and white Word template with sections in this order: - -* Summary - -* Interests - -* Experience - -* Education - -* Volunteer Work (if you have it) - -This template is effective because it’s familiar and easy for the reader to digest. - -As I mentioned earlier, hiring managers scan resumes for an average of 6 seconds. If your resume is in an unfamiliar format, those 6 seconds won’t be very comfortable for the hiring manager. Our brains prefer things we can easily recognize. You want to make sure that a hiring manager can actually catch a glimpse of who you are during their quick scan of your resume.
If we’re not relying on design, this hook needs to come from the _Summary_ section at the top of your resume. - -This section should be done in bullets (not paragraph form) and it should contain 3–4 highlights of the most relevant experience you have for the role. For example, if I were applying for a New Business Sales position, my summary could look like this: - -> Summary - -> Drove quarterly average of $11M in new business with a quota attainment of 128% (#1 on my team) - -> Received award for largest sales deal of the year - -> Developed and trained sales team on new lead generation process that increased total leads by 17% in 3 months, resulting in 4 new deals worth $7M - -Those bullets speak directly to the value I can add to the company if I were hired for the role. - -### An “Interests” Section That’s Quirky, Unique, & Relatable - -This is a little “hack” you can use to instantly build personal connections and positive associations with whoever is reading your resume. - -Most resumes have a skills/interests section, but it’s usually parked at the bottom and offers little to no value. It’s time to change things up. - -[Research shows][3] that people rely on emotions, not information, to make decisions. Big brands use this principle all the time — emotional responses to advertisements are more influential on a person’s intent to buy than the content of an ad. - -You probably remember Apple’s famous “Get A Mac” campaign: - - -When it came to specs and performance, Macs didn’t blow every single PC out of the water. But these ads solidified who was “cool” and who wasn’t, which was worth a few extra bucks to a few million people. - -By tugging at our need to feel “cool,” Apple’s campaign led to a [42% increase in market share][4] and a record sales year for MacBooks. - -Now we’re going to take that same tactic and apply it to your resume.
If you can evoke an emotional response from your recruiter, you can influence the mental association they assign to you. This gives you a major competitive advantage. - -Let’s start with a question — what could you talk about for hours? - -It could be cryptocurrency, cooking, World War 2, World of Warcraft, or how Google’s bet on segmenting their company under Alphabet is going to impact the technology sector over the next 5 years. - -Did a topic (or two) pop into your head? Great. - -Now think about what it would be like to have a conversation with someone who was just as passionate and knew just as much as you did on the topic. It’d be pretty awesome, right? _Finally,_ someone who gets it! - -That’s exactly the kind of emotional response we’re aiming to get from a hiring manager. - -There are five “neutral” topics out there that people enjoy talking about: - -1. Food/Drink - -2. Sports - -3. College - -4. Hobbies - -5. Geography (travel, where people are from, etc.) - -These topics are present in plenty of interest sections but we want to take them one step further. - -Let’s say you had the best night of your life at the Full Moon Party in Thailand. Which of the following two options would you be more excited to read: - -* Traveling - -* Ko Pha Ngan beaches (where the full moon party is held) - -Or, let’s say that you went to Duke (an ACC school) and still follow their basketball team. Which would you be more pumped about: - -* College Sports - -* ACC Basketball (Go Blue Devils!) - -In both cases, the second answer would probably evoke a larger emotional response because it is tied directly to your experience. - -I want you to think about your interests that fit into the five categories I mentioned above. - -Now I want you to write a specific favorite associated with each category in parentheses next to your original list.
For example, if you wrote travel you can add (ask me about the time I was chased by an elephant in India) or (specifically meditation in a Tibetan monastery). - -Here is the [exact set of interests][5] I used on my resume when I interviewed at Google, Microsoft, and Twitter: - - _ABC Kitchen’s Atmosphere, Stumptown Coffee (primarily cold brew), Michael Lewis (Liar’s Poker), Fishing (especially fly), Foods That Are Vehicles For Hot Sauce, ACC Sports (Go Deacs!) & The New York Giants_ - - -![](https://cdn-images-1.medium.com/max/1000/1*ONxtGr_xUYmz4_Xe66aeng.jpeg) - -If you want to cheat here, my experience shows that anything about hot sauce is an instant conversation starter. - -### The Proven Plug & Play Resume Template - -Now that we have our strategies down, it’s time to apply these tactics to a real resume. Our goal is to write something that increases your chances of hearing back from companies, enhances your relationships with hiring managers, and ultimately helps you score the job offer. - -The example below is the exact resume that I used to land interviews and offers at Microsoft, Google, and Twitter. I was targeting roles in Account Management and Sales, so this sample is tailored towards those positions. We’ll break down each section below: - - -![](https://cdn-images-1.medium.com/max/1000/1*B2RQ89ue2dGymRdwMY2lBA.png) - -First, I want you to notice how clean this is. Each section is clearly labeled and separated and flows nicely from top to bottom. - -My summary speaks directly to the value I’ve created in the past around company culture and its bottom line: - -* I consistently exceeded expectations - -* I started my own business in the space (and saw real results) - -* I’m a team player who prioritizes culture - -I purposefully include my Interests section right below my Summary. If my hiring manager’s six second scan focused on the summary, I know they’ll be interested. Those bullets cover all the subconscious criteria for qualification in sales. 
They’re going to be curious to read more in my Experience section. - -By sandwiching my Interests in the middle, I’m upping their visibility and increasing the chance of creating that personal connection. - -You never know — the person reading my resume may also be a hot sauce connoisseur and I don’t want that to be overlooked because my interests were sitting at the bottom. - -Next, my Experience section aims to flesh out the points made in my Summary. I mentioned exceeding my quota up top, so I included two specific initiatives that led to that attainment, including measurable results: - -* A partnership leveraging display advertising to drive users to a gamified experience. The campaign resulted in over 3000 acquisitions and laid the groundwork for the 2nd largest deal in company history. - -* A partnership with a top tier agency aimed at increasing conversions for a client by improving user experience and upgrading tracking during a company-wide website overhaul (the client has ~20 brand sites). Our efforts over 6 months resulted in a contract extension worth 316% more than their original deal. - -Finally, I included my education at the very bottom starting with the most relevant coursework. - -Download My Resume Templates For Free - -You can download a copy of the resume sample above as well as a plug and play template here: - -Austin’s Resume: [Click To Download][6] - -Plug & Play Resume Template: [Click To Download][7] - -### Bonus Tip: An Unconventional Resume “Hack” To Help You Beat Applicant Tracking Software - -If you’re not already familiar, Applicant Tracking Systems are pieces of software that companies use to help “automate” the hiring process. - -After you hit submit on your online application, the ATS software scans your resume looking for specific keywords and phrases (if you want more details, [this article][8] does a good job of explaining ATS). 
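To make the keyword-scanning idea concrete, here is a toy sketch in Python. Real ATS products are proprietary and far more sophisticated, and every name below is invented for illustration; it simply counts the terms in a job description and reports which of the most frequent ones a resume fails to mention:

```python
import re
from collections import Counter

# Hypothetical sketch of the kind of keyword scan an ATS performs.
# This is an illustration of the idea, not any real ATS's logic.

STOPWORDS = {"the", "and", "a", "of", "to", "in", "for", "with", "is"}

def term_counts(text):
    """Count the non-stopword terms in a piece of text."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def missing_keywords(job_description, resume, top_n=5):
    """Return the top job-description terms that the resume never mentions."""
    top = [w for w, _ in term_counts(job_description).most_common(top_n)]
    resume_terms = term_counts(resume)
    return [w for w in top if w not in resume_terms]

job = "data models and machine learning experience, data pipelines, data science team"
resume = "Built data pipelines and deployed machine learning models."
print(missing_keywords(job, resume))
```

The output lists the high-frequency terms your resume is missing, which is exactly the gap the word-cloud trick below helps you close by hand.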
If the language in your resume matches up, the software sees it as a good fit for the role and will pass it on to the recruiter. However, if you’re highly qualified for the role but don’t use the right wording, your resume can end up sitting in a black hole. - -I’m going to teach you a little hack to help improve your chances of beating the system and getting your resume in the hands of a human: - -Step 1: Highlight and select the entire job description page and copy it to your clipboard. - -Step 2: Head over to [WordClouds.com][9] and click on the “Word List” button at the top. Towards the top of the pop up box, you should see a link for Paste/Type Text. Go ahead and click that. - -Step 3: Now paste the entire job description into the box, then hit “Apply.” - -WordClouds is going to spit out an image that showcases every word in the job description. The larger words are the ones that appear most frequently (and the ones you want to make sure to include when writing your resume). Here’s an example for a data science role: - - -![](https://cdn-images-1.medium.com/max/1000/1*O7VO1C9nhC9LZct7vexTbA.png) - -You can also get a quantitative view by clicking “Word List” again after creating your cloud. That will show you the number of times each word appeared in the job description: - -9 data - -6 models - -4 experience - -4 learning - -3 Experience - -3 develop - -3 team - -2 Qualifications - -2 statistics - -2 techniques - -2 libraries - -2 preferred - -2 research - -2 business - -When writing your resume, your goal is to include those words in the same proportions as the job description. - -It’s not a guaranteed way to beat the online application process, but it will definitely help improve your chances of getting your foot in the door! - -* * * - -### Want The Inside Info On Landing A Dream Job Without Connections, Without “Experience,” & Without Applying Online?
- -[Click here to get the 5 free strategies that my students have used to land jobs at Google, Microsoft, Amazon, and more without applying online.][10] - - _Originally published at _ [_cultivatedculture.com_][11] _._ - --------------------------------------------------------------------------------- - -作者简介: - -I help people land jobs they love and salaries they deserve at CultivatedCulture.com - ----------- - -via: https://medium.freecodecamp.org/how-to-write-a-really-great-resume-that-actually-gets-you-hired-e18533cd8d17 - -作者:[Austin Belcak ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://medium.freecodecamp.org/@austin.belcak -[1]:http://www.upwork.com/ -[2]:https://www.thrillist.com/news/nation/this-guy-hides-his-resume-in-boxes-of-donuts-to-score-job-interviews -[3]:https://www.psychologytoday.com/blog/inside-the-consumer-mind/201302/how-emotions-influence-what-we-buy -[4]:https://www.businesswire.com/news/home/20070608005253/en/Apple-Mac-Named-Successful-Marketing-Campaign-2007 -[5]:http://cultivatedculture.com/resume-skills-section/ -[6]:https://drive.google.com/file/d/182gN6Kt1kBCo1LgMjtsGHOQW2lzATpZr/view?usp=sharing -[7]:https://drive.google.com/open?id=0B3WIcEDrxeYYdXFPVlcyQlJIbWc -[8]:https://www.jobscan.co/blog/8-things-you-need-to-know-about-applicant-tracking-systems/ -[9]:https://www.wordclouds.com/ -[10]:https://cultivatedculture.com/dreamjob/ -[11]:https://cultivatedculture.com/write-a-resume/ \ No newline at end of file diff --git a/sources/talk/20180206 UQDS- A software-development process that puts quality first.md b/sources/talk/20180206 UQDS- A software-development process that puts quality first.md deleted file mode 100644 index e9f7bb94ac..0000000000 --- a/sources/talk/20180206 UQDS- A software-development process that puts quality first.md +++ /dev/null @@ -1,99 +0,0 @@ -UQDS: A software-development 
process that puts quality first -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag) - -The Ultimate Quality Development System (UQDS) is a software development process that provides clear guidelines for how to use branches, tickets, and code reviews. It was invented more than a decade ago by Divmod and adopted by [Twisted][1], an event-driven framework for Python that underlies popular commercial platforms like HipChat as well as open source projects like Scrapy (a web scraper). - -Divmod, sadly, is no longer around—it has gone the way of many startups. Luckily, since many of its products were open source, its legacy lives on. - -When Twisted was a young project, there was no clear process for when code was "good enough" to go in. As a result, while some parts were highly polished and reliable, others were alpha quality software—with no way to tell which was which. UQDS was designed as a process to help an existing project with definite quality challenges ramp up its quality while continuing to add features and become more useful. - -UQDS has helped the Twisted project evolve from having frequent regressions and needing multiple release candidates to get a working version, to achieving its current reputation of stability and reliability. - -### UQDS's building blocks - -UQDS was invented by Divmod back in 2006. At that time, Continuous Integration (CI) was in its infancy and modern version control systems, which allow easy branch merging, were barely proofs of concept. Although Divmod did not have today's modern tooling, it put together CI, some ad-hoc tooling to make [Subversion branches][2] work, and a lot of thought into a working process. Thus the UQDS methodology was born. - -UQDS is based upon fundamental building blocks, each with their own carefully considered best practices: - - 1. Tickets - 2. Branches - 3. Tests - 4. Reviews - 5. 
No exceptions - - - -Let's go into each of those in a little more detail. - -#### Tickets - -In a project using the UQDS methodology, no change is allowed to happen if it's not accompanied by a ticket. This creates a written record of what change is needed and—more importantly—why. - - * Tickets should define clear, measurable goals. - * Work on a ticket does not begin until the ticket contains goals that are clearly defined. - - - -#### Branches - -Branches in UQDS are tightly coupled with tickets. Each branch must solve one complete ticket, no more and no less. If a branch addresses either more or less than a single ticket, it means there was a problem with the ticket definition—or with the branch. Tickets might be split or merged, or a branch split and merged, until congruence is achieved. - -Enforcing that each branch addresses no more nor less than a single ticket—which corresponds to one logical, measurable change—allows a project using UQDS to have fine-grained control over the commits: A single change can be reverted or changes may even be applied in a different order than they were committed. This helps the project maintain a stable and clean codebase. - -#### Tests - -UQDS relies upon automated testing of all sorts, including unit, integration, regression, and static tests. In order for this to work, all relevant tests must pass at all times. Tests that don't pass must either be fixed or, if no longer relevant, be removed entirely. - -Tests are also coupled with tickets. All new work must include tests that demonstrate that the ticket goals are fully met. Without this, the work won't be merged no matter how good it may seem to be. - -A side effect of the focus on tests is that the only platforms that a UQDS-using project can say it supports are those on which the tests run with a CI framework—and where passing the test on the platform is a condition for merging a branch. 
Without this restriction on supported platforms, the quality of the project is not Ultimate. - -#### Reviews - -While automated tests are important to the quality ensured by UQDS, the methodology never loses sight of the human factor. Every branch commit requires code review, and each review must follow very strict rules: - - 1. Each commit must be reviewed by a different person than the author. - 2. Start with a comment thanking the contributor for their work. - 3. Make a note of something that the contributor did especially well (e.g., "that's the perfect name for that variable!"). - 4. Make a note of something that could be done better (e.g., "this line could use a comment explaining the choices."). - 5. Finish with directions for an explicit next step, typically either merge as-is, fix and merge, or fix and submit for re-review. - - - -These rules respect the time and effort of the contributor while also increasing the sharing of knowledge and ideas. The explicit next step allows the contributor to have a clear idea of how to make progress. - -#### No exceptions - -In any process, it's easy to come up with reasons why you might need to flex the rules just a little bit to let this thing or that thing slide through the system. The most important fundamental building block of UQDS is that there are no exceptions. The entire community works together to make sure that the rules do not flex, not for any reason whatsoever. - -Knowing that all code has been approved by a different person than the author, that the code has complete test coverage, that each branch corresponds to a single ticket, and that this ticket is well considered and complete brings peace of mind that is too valuable to risk losing, even for a single small exception. The goal is quality, and quality does not come from compromise. - -### A downside to UQDS - -While UQDS has helped Twisted become a highly stable and reliable project, this reliability hasn't come without cost.
We quickly found that the review requirements caused a slowdown and backlog of commits to review, leading to slower development. The answer to this wasn't to compromise on quality by getting rid of UQDS; it was to refocus the community priorities such that reviewing commits became one of the most important ways to contribute to the project. - -To help with this, the community developed a bot in the [Twisted IRC channel][3] that will reply to the command `review tickets` with a list of tickets that still need review. The [Twisted review queue][4] website returns a prioritized list of tickets for review. Finally, the entire community keeps close tabs on the number of tickets that need review. It's become an important metric the community uses to gauge the health of the project. - -### Learn more - -The best way to learn about UQDS is to [join the Twisted Community][5] and see it in action. If you'd like more information about the methodology and how it might help your project reach a high level of reliability and stability, have a look at the [UQDS documentation][6] in the Twisted wiki. 
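The review queue is simple enough to picture in code. Here is a minimal sketch of the idea (the ticket records and field names are hypothetical, not Twisted's actual Trac schema): given a set of tickets, return the ones awaiting review, oldest first.

```python
# A minimal sketch of a "review queue": filter tickets to those
# awaiting review and order them oldest-first. The field names here
# are illustrative, not Twisted's actual Trac schema.

def review_queue(tickets):
    """Return tickets in the 'needs-review' state, oldest first."""
    pending = [t for t in tickets if t["state"] == "needs-review"]
    return sorted(pending, key=lambda t: t["submitted"])

tickets = [
    {"id": 101, "state": "needs-review", "submitted": 5},
    {"id": 102, "state": "in-progress", "submitted": 2},
    {"id": 103, "state": "needs-review", "submitted": 1},
]

print([t["id"] for t in review_queue(tickets)])  # [103, 101]
```

The length of that list is exactly the metric described above: the number of tickets needing review, which the community watches as a gauge of project health.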
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/uqds - -作者:[Moshe Zadka][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/moshez -[1]:https://twistedmatrix.com/trac/ -[2]:http://structure.usc.edu/svn/svn.branchmerge.html -[3]:http://webchat.freenode.net/?channels=%23twisted -[4]:https://twisted.reviews -[5]:https://twistedmatrix.com/trac/wiki/TwistedCommunity -[6]:https://twistedmatrix.com/trac/wiki/UltimateQualityDevelopmentSystem diff --git a/sources/talk/20180207 Why Mainframes Aren-t Going Away Any Time Soon.md b/sources/talk/20180207 Why Mainframes Aren-t Going Away Any Time Soon.md deleted file mode 100644 index 7a1a837ff9..0000000000 --- a/sources/talk/20180207 Why Mainframes Aren-t Going Away Any Time Soon.md +++ /dev/null @@ -1,73 +0,0 @@ -Why Mainframes Aren't Going Away Any Time Soon -====== - -![](http://www.datacenterknowledge.com/sites/datacenterknowledge.com/files/styles/article_featured_standard/public/ibm%20z13%20mainframe%202015%20getty.jpg?itok=uB8agshi) - -IBM's last earnings report showed the [first uptick in revenue in more than five years.][1] Some of that growth was from an expected source, cloud revenue, which was up 24 percent year over year and now accounts for 21 percent of Big Blue's take. Another major boost, however, came from a spike in mainframe revenue. Z series mainframe sales were up 70 percent, the company said. - -This may sound somewhat akin to a return to vacuum tube technology in a world where transistors are yesterday's news. In actuality, this is only a sign of the changing face of IT. - -**Related:** [One Click and Voilà, Your Entire Data Center is Encrypted][2] - -Modern mainframes definitely aren't your father's punch card-driven machines that filled entire rooms. 
These days, they most often run Linux and have found a renewed place in the data center, where they're being called upon to do a lot of heavy lifting. Want to know where the largest instance of Oracle's database runs? It's on a Linux mainframe. How about the largest implementation of SAP on the planet? Again, Linux on a mainframe. - -"Before the advent of Linux on the mainframe, the people who bought mainframes primarily were people who already had them," Leonard Santalucia explained to Data Center Knowledge several months back at the All Things Open conference. "They would just wait for the new version to come out and upgrade to it, because it would run cheaper and faster. - -**Related:** [IBM Designs a “Performance Beast” for AI][3] - -"When Linux came out, it opened up the door to other customers that never would have paid attention to the mainframe. In fact, probably a good three to four hundred new clients that never had mainframes before got them. They don't have any old mainframes hanging around or ones that were upgraded. These are net new mainframes." - -Although Santalucia is CTO at Vicom Infinity, primarily an IBM reseller, at the conference he was wearing his hat as chairperson of the Linux Foundation's Open Mainframe Project. He was joined in the conversation by John Mertic, the project's director of program management. - -Santalucia knows IBM's mainframes from top to bottom, having spent 27 years at Big Blue, the last eight as CTO for the company's systems and technology group. - -"Because of Linux getting started with it back in 1999, it opened up a lot of doors that were closed to the mainframe," he said. "Beforehand it was just z/OS, z/VM, z/VSE, z/TPF, the traditional operating systems. When Linux came along, it got the mainframe into other areas that it never was, or even thought to be in, because of how open it is, and because Linux on the mainframe is no different than Linux on any other platform." 
- -The focus on Linux isn't the only motivator behind the upsurge in mainframe use in data centers. Increasingly, enterprises with heavy IT needs are finding many advantages to incorporating modern mainframes into their plans. For example, mainframes can greatly reduce power, cooling, and floor space costs. In markets like New York City, where real estate is at a premium, electricity rates are high, and electricity use is highly taxed to reduce demand, these are significant advantages. - -"There was one customer where we were able to do a consolidation of 25 x86 cores to one core on a mainframe," Santalucia said. "They have several thousand machines that are ten and twenty cores each. So, as far as the eye could see in this data center, [x86 server workloads] could be picked up and moved onto this box that is about the size of a sub-zero refrigerator in your kitchen." - -In addition to saving on physical data center resources, this customer would, by design, likely see better performance. - -"When you look at the workload as it's running on an x86 system, the math, the application code, the I/O to manage the disk, and whatever else is attached to that system, is all run through the same chip," he explained. "On a Z, there are multiple chip architectures built into the system. There's one specifically just for the application code. If it senses the application needs an I/O or some mathematics, it sends it off to a separate processor to do math or I/O, all dynamically handled by the underlying firmware. Your Linux environment doesn't have to understand that. When it's running on a mainframe, it knows it's running on a mainframe and it will exploit that architecture." - -The operating system knows it's running on a mainframe because when IBM was readying its mainframe for Linux, it open sourced something like 75,000 lines of code for Linux distributions to use to make sure their OSes were ready for IBM Z.
- -"A lot of times people will hear there's 170 processors on the Z14," Santalucia said. "Well, there's actually another 400 other processors that nobody counts in that count of application chips, because it is taken for granted." - -Mainframes are also resilient when it comes to disaster recovery. Santalucia told the story of an insurance company located in lower Manhattan, within sight of the East River. The company operated a large data center in a basement that among other things housed a mainframe backed up to another mainframe located in Upstate New York. When Hurricane Sandy hit in 2012, the data center flooded, electrocuting two employees and destroying all of the servers, including the mainframe. But the mainframe's workload was restored within 24 hours from the remote backup. - -The x86 machines were all destroyed, and the data was never recovered. But why weren't they also backed up? - -"The reason they didn't do this disaster recovery the same way they did with the mainframe was because it was too expensive to have a mirror of all those distributed servers someplace else," he explained. "With the mainframe, you can have another mainframe as an insurance policy that's lower in price, called Capacity BackUp, and it just sits there idling until something like this happens." - -Mainframes are also evidently tough as nails. Santalucia told another story in which a data center in Japan was struck by an earthquake strong enough to destroy all of its x86 machines. The center's one mainframe fell on its side but continued to work. - -The mainframe also comes with built-in redundancy to guard against situations that would be disastrous with x86 machines. - -"What if a hard disk fails on a node in x86?" the Open Mainframe Project's Mertic asked. "You're taking down a chunk of that cluster potentially. With a mainframe you're not. A mainframe just keeps on kicking like nothing's ever happened." 
- -Mertic added that a motherboard can be pulled from a running mainframe, and again, "the thing keeps on running like nothing's ever happened." - -So how do you figure out if a mainframe is right for your organization? Simple, says Santalucia. Do the math. - -"The approach should be to look at it from a business, technical, and financial perspective -- not just a financial, total-cost-of-acquisition perspective," he said, pointing out that often, costs associated with software, migration, networking, and people are not considered. The break-even point, he said, comes when at least 20 to 30 servers are being migrated to a mainframe. After that point the mainframe has a financial advantage. - -"You can get a few people running the mainframe and managing hundreds or thousands of virtual servers," he added. "If you tried to do the same thing on other platforms, you'd find that you need significantly more resources to maintain an environment like that. Seven people at ADP handle the 8,000 virtual servers they have, and they need seven only in case somebody gets sick. - -"If you had eight thousand servers on x86, even if they're virtualized, do you think you could get away with seven?" 
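Santalucia's "do the math" advice lends itself to a back-of-the-envelope sketch. Assuming his 25-to-1 core consolidation figure quoted earlier (the fleet sizes below are illustrative, not from the interview):

```python
# Back-of-the-envelope consolidation arithmetic using the 25-to-1
# x86-to-mainframe core ratio Santalucia described. Fleet sizes are
# illustrative only.

X86_CORES_PER_MAINFRAME_CORE = 25

def mainframe_cores_needed(servers, cores_per_server):
    """Mainframe cores required to absorb a fleet of x86 servers."""
    total_x86_cores = servers * cores_per_server
    # Round up: a partial core still occupies a whole core.
    return -(-total_x86_cores // X86_CORES_PER_MAINFRAME_CORE)

# A fleet of 1,000 ten-core x86 servers:
print(mainframe_cores_needed(1000, 10))  # 400
```

As he notes, the raw core count is only one input; software, migration, networking, and staffing costs belong in the same calculation.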
- --------------------------------------------------------------------------------- - -via: http://www.datacenterknowledge.com/hardware/why-mainframes-arent-going-away-any-time-soon - -作者:[Christine Hall][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.datacenterknowledge.com/archives/author/christine-hall -[1]:http://www.datacenterknowledge.com/ibm/mainframe-sales-fuel-growth-ibm -[2]:http://www.datacenterknowledge.com/design/one-click-and-voil-your-entire-data-center-encrypted -[3]:http://www.datacenterknowledge.com/design/ibm-designs-performance-beast-ai diff --git a/sources/talk/20180209 Arch Anywhere Is Dead, Long Live Anarchy Linux.md b/sources/talk/20180209 Arch Anywhere Is Dead, Long Live Anarchy Linux.md deleted file mode 100644 index c3c78e84ad..0000000000 --- a/sources/talk/20180209 Arch Anywhere Is Dead, Long Live Anarchy Linux.md +++ /dev/null @@ -1,127 +0,0 @@ -Arch Anywhere Is Dead, Long Live Anarchy Linux -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_main.jpg?itok=fyBpTjQW) - -Arch Anywhere was a distribution aimed at bringing Arch Linux to the masses. Due to a trademark infringement, Arch Anywhere has been completely rebranded to [Anarchy Linux][1]. And I’m here to say, if you’re looking for a distribution that will enable you to enjoy Arch Linux, a little Anarchy will go a very long way. This distribution is seriously impressive in what it sets out to do and what it achieves. In fact, anyone who previously feared Arch Linux can set those fears aside… because Anarchy Linux makes Arch Linux easy. - -Let’s face it; Arch Linux isn’t for the faint of heart. The installation alone will turn off many a new user (and even some seasoned users). That’s where distributions like Anarchy make for an easy bridge to Arch. 
With a live ISO that can be tested and then installed, Arch becomes as user-friendly as any other distribution. - -Anarchy Linux goes a little bit further than that, however. Let’s fire it up and see what it does. - -### The installation - -The installation of Anarchy Linux isn’t terribly challenging, but it’s also not quite as simple as for, say, [Ubuntu][2], [Linux Mint][3], or [Elementary OS][4]. Although you can run the installer from within the default graphical desktop environment (Xfce4), it’s still much in the same vein as Arch Linux. In other words, you’re going to have to do a bit of work—all within a text-based installer. - -To start, the very first step of the installer (Figure 1) requires you to update the mirror list, which will likely trip up new users. - -![Updating the mirror][6] - -Figure 1: Updating the mirror list is a necessity for the Anarchy Linux installation. - -[Used with permission][7] - -From the options, select Download & Rank New Mirrors. Tab down to OK and hit Enter on your keyboard. You can then select the nearest mirror (to your location) and be done with it. The next few installation screens are simple (keyboard layout, language, timezone, etc.). The next screen should surprise many an Arch fan. Anarchy Linux includes an auto partition tool. Select Auto Partition Drive (Figure 2), tab down to Ok, and hit Enter on your keyboard. - -![partitioning][9] - -Figure 2: Anarchy makes partitioning easy. - -[Used with permission][7] - -You will then have to select the drive to be used (if you only have one drive this is only a matter of hitting Enter). Once you’ve selected the drive, choose the filesystem type to be used (ext2/3/4, btrfs, jfs, reiserfs, xfs), tab down to OK, and hit Enter. Next you must choose whether you want to create SWAP space. If you select Yes, you’ll then have to define how much SWAP to use. The next window will stop many new users in their tracks. It asks if you want to use GPT (GUID Partition Table). 
This is different from the traditional MBR (Master Boot Record) partitioning. GPT is a newer standard and works better with UEFI. If you'll be working with UEFI, go with GPT; otherwise, stick with the old standby, MBR. Finally, select to write the changes to the disk, and your installation can continue. - -The next screen that could give new users pause requires the selection of the desired installation. There are five options: - - * Anarchy-Desktop - - * Anarchy-Desktop-LTS - - * Anarchy-Server - - * Anarchy-Server-LTS - - * Anarchy-Advanced - - - - -If you want long-term support, select Anarchy-Desktop-LTS; otherwise click Anarchy-Desktop (the default), and tab down to Ok. Hit Enter on your keyboard. After you select the type of installation, you will get to select your desktop. You can select from five options: Budgie, Cinnamon, GNOME, Openbox, and Xfce4. -Once you've selected your desktop, give the machine a hostname, set the root password, create a user, and enable sudo for the new user (if applicable). The next section that will raise the eyebrows of new users is the software selection window (Figure 3). You must go through the various sections and select which software packages to install. Don't worry; if you miss something, you can always install it later. - - -![software][11] - -Figure 3: Selecting the software you want on your system. - -[Used with permission][7] - -Once you've made your software selections, tab to Install (Figure 4), and hit Enter on your keyboard. - -![ready to install][13] - -Figure 4: Everything is ready to install. - -[Used with permission][7] - -Once the installation completes, reboot and enjoy Anarchy. - -### Post install - -I installed two versions of Anarchy—one with Budgie and one with GNOME. Both performed quite well; however, you might be surprised to see that the version of GNOME installed is decked out with a dock. In fact, compared side by side, the two desktops do a good job of resembling one another (Figure 5).
- -![GNOME and Budgie][15] - -Figure 5: GNOME is on the right, Budgie is on the left. - -[Used with permission][7] - -My guess is that you'll find all desktop options for Anarchy configured in such a way as to offer a similar look and feel. Of course, the second you click on the bottom left "buttons", you'll see those similarities immediately disappear (Figure 6). - -![GNOME and Budgie][17] - -Figure 6: The GNOME Dash and the Budgie menu are nothing alike. - -[Used with permission][7] - -Regardless of which desktop you select, you'll find everything you need to install new applications. Open up your desktop menu of choice and select Packages to search for and install whatever is necessary for you to get your work done. - -### Why use Arch Linux without the "Arch"? - -This is a valid question. The answer is simple, but revealing. Some users may opt for a distribution like [Arch Linux][18] because they want the feeling of "elitism" that comes with using, say, [Gentoo][19], without having to go through that much hassle. With regard to complexity, Arch rests below Gentoo, which means it's accessible to more users. However, along with that complexity comes a certain level of dependability that may not be found in other platforms. So if you're looking for a highly stable Linux distribution that's not quite as challenging as Gentoo or Arch to install, Anarchy might be exactly what you want. In the end, you'll wind up with an outstanding desktop platform that's easy to work with (and maintain), based on a very highly regarded distribution of Linux. - -That's why you might opt for Arch Linux without the Arch. - -Anarchy Linux is one of the finest "user-friendly" takes on Arch Linux I've ever had the privilege of using. Without a doubt, if you're looking for a friendlier version of a rather challenging desktop operating system, you cannot go wrong with Anarchy.
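One practical note on the installer's GPT-versus-MBR question: on Linux, a system booted via UEFI exposes the /sys/firmware/efi directory, so you can check before choosing. A small sketch of that check (the helper function and its return strings are mine, not part of the Anarchy installer):

```python
# On Linux, the /sys/firmware/efi directory exists only when the
# system booted via UEFI; BIOS-booted systems lack it. The path is a
# parameter so the logic can be exercised against any directory.
from pathlib import Path

def partition_table_hint(efi_dir="/sys/firmware/efi"):
    """Suggest GPT on UEFI systems and MBR on BIOS systems."""
    return "GPT (UEFI boot)" if Path(efi_dir).is_dir() else "MBR (BIOS boot)"

print(partition_table_hint())
```

Run from the Anarchy live environment, this tells you which answer to give when the installer asks.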
- -Learn more about Linux through the free ["Introduction to Linux" ][20]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/2/arch-anywhere-dead-long-live-anarchy-linux - -作者:[Jack Wallen][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://anarchy-linux.org/ -[2]:https://www.ubuntu.com/ -[3]:https://linuxmint.com/ -[4]:https://elementary.io/ -[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_1.jpg?itok=WgHRqFTf (Updating the mirror) -[7]:https://www.linux.com/licenses/category/used-permission -[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_2.jpg?itok=D7HkR97t (partitioning) -[10]:/files/images/anarchyinstall3jpg -[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_3.jpg?itok=5-9E2u0S (software) -[12]:/files/images/anarchyinstall4jpg -[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_4.jpg?itok=fuSZqtZS (ready to install) -[14]:/files/images/anarchyinstall5jpg -[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_5.jpg?itok=4y9kiC8I (GNOME and Budgie) -[16]:/files/images/anarchyinstall6jpg -[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_6.jpg?itok=fJ7Lmdci (GNOME and Budgie) -[18]:https://www.archlinux.org/ -[19]:https://www.gentoo.org/ -[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md b/sources/talk/20180209 How writing can change your career for the 
better, even if you don-t identify as a writer.md deleted file mode 100644 index 55618326c6..0000000000 --- a/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md +++ /dev/null @@ -1,149 +0,0 @@ -How writing can change your career for the better, even if you don't identify as a writer -====== -Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed? - -Early in the book, Kondo talks about keeping possessions that "spark joy." In this article, I'll examine ways writing about what we and other people are doing in the open source world can "spark joy," or at least how writing can improve your career in unexpected ways. - -Because I'm a community manager and editor on Opensource.com, you might be thinking, "She just wants us to [write for Opensource.com][2]." And that is true. But everything I will tell you about why you should write is true, even if you never send a story in to Opensource.com. Writing can change your career for the better, even if you don't identify as a writer. Let me explain. - -### How I started writing - -Early in the first decade of my career, I transitioned from a customer service-related role at a tech publishing company into an editing role on Sys Admin Magazine. I was plugging along, happily lying low in my career, and then that all changed when I started writing about open source technologies and communities, and the people in them. But I did _not_ start writing voluntarily. The tl;dr of it is that my colleagues at Linux New Media eventually talked me into launching our first blog on the [Linux Pro Magazine][3] site. And as it turns out, it was one of the best career decisions I've ever made. I would not be working on Opensource.com today had I not started writing about what other people in open source were doing all those years ago.
- -When I first started writing, my goal was to raise awareness of the company I worked for and our publications, while also helping raise the visibility of women in tech. But soon after I started writing, I began seeing unexpected results. - -#### My network started growing - -When I wrote about a person, an organization, or a project, I got their attention. Suddenly the people I wrote about knew who I was. And because I was sharing knowledge—that is to say, I wasn't being a critic—I'd generally become an ally, and in many cases, a friend. I had a platform and an audience, and I was sharing them with other people in open source. - -#### I was learning - -In addition to promoting our website and magazine and growing my network, the research and fact-checking I did when writing articles helped me become more knowledgeable in my field and improve my tech chops. - -#### I started meeting more people IRL - -When I went to conferences, I found that my blog posts helped me meet people. I introduced myself to people I'd written about or learned about during my research, and I met new people to interview. People started knowing who I was because they'd read my articles. Sometimes people were even excited to meet me because I'd highlighted them, their projects, or someone or something they were interested in. I had no idea writing could be so exciting and interesting away from the keyboard. - -#### My conference talks improved - -I started speaking at events about a year after launching my blog. A few years later, I started writing articles based on my talks prior to speaking at events. The process of writing the articles helps me organize my talks and slides, and it was a great way to provide "notes" for conference attendees, while sharing the topic with a larger international audience that wasn't at the event in person. - -### What should you write about? - -Maybe you're interested in writing, but you struggle with what to write about. 
You should write about two things: what you know, and what you don't know. - -#### Write about what you know - -Writing about what you know can be relatively easy. For example, a script you wrote to help automate part of your daily tasks might be something you don't give any thought to, but it could make for a really exciting article for someone who hates doing that same task every day. That could be a relatively quick, short, and easy article for you to write, and you might not even think about writing it. But it could be a great contribution to the open source community. - -#### Write about what you don't know - -Writing about what you don't know can be much harder and more time-consuming, but also much more fulfilling, and it can help your career. I've found that writing about what I don't know helps me learn, because I have to research it and understand it well enough to explain it. - -> "When I write about a technical topic, I usually learn a lot more about it. I want to make sure my article is as good as it can be. So even if I'm writing about something I know well, I'll research the topic a bit more so I can make sure to get everything right." ~Jim Hall, FreeDOS project leader - -For example, I wanted to learn about machine learning, and I thought narrowing down the topic would help me get started. My teammate Jason Baker suggested that I write an article on the [Top 3 machine learning libraries for Python][4], which gave me a focus for research. - -The process of researching that article inspired another article, [3 cool machine learning projects using TensorFlow and the Raspberry Pi][5]. That article was also one of our most popular last year. I'm not an _expert_ on machine learning now, but researching the topic with writing an article in mind allowed me to give myself a crash course in the topic. - -### Why people in tech write - -Now let's look at a few benefits of writing that other people in tech have found.
I emailed the Opensource.com writers' list and asked, and here's what writers told me. - -#### Grow your network or your project community - -Xavier Ho wrote for us for the first time last year ("[A programmer's cleaning guide for messy sensor data][6]"). He says: "I've been getting Twitter mentions from all over the world, including Spain, US, Australia, Indonesia, the UK, and other European countries. It shows the article is making some impact... This is the kind of reach I normally don't have. Hope it's really helping someone doing similar work!" - -#### Help people - -Writing about what other people are working on is a great way to help your fellow community members. Antoine Thomas, who wrote "[Linux helped me grow as a musician][7]", says, "I began to use open source years ago, by reading tutorials and documentation. That's why now I share my tips and tricks, experience or knowledge. It helped me to get started, so I feel that it's my turn to help others to get started too." - -#### Give back to the community - -[Jim Hall][8], who started the [FreeDOS project][9], says, "I like to write ... because I like to support the open source community by sharing something neat. I don't have time to be a program maintainer anymore, but I still like to do interesting stuff. So when something cool comes along, I like to write about it and share it." - -#### Highlight your community - -Emilio Velis wrote an article, "[Open hardware groups spread across the globe][10]", about projects in Central and South America. He explains, "I like writing about specific aspects of the open culture that are usually enclosed in my region (Latin America). I feel as if smaller communities and their ideas are hidden from the mainstream, so I think that creating this sense of broadness in participation is what makes some other cultures as valuable." - -#### Gain confidence - -[Don Watkins][11] is one of our regular writers and a [community moderator][12]. 
He says, "When I first started writing I thought I was an impostor, later I realized that many people feel that way. Writing and contributing to Opensource.com has been therapeutic, too, as it contributed to my self esteem and helped me to overcome feelings of inadequacy. … Writing has given me a renewed sense of purpose and empowered me to help others to write and/or see the valuable contributions that they too can make if they're willing to look at themselves in a different light. Writing has kept me younger and more open to new ideas." - -#### Get feedback - -One of our writers described writing as a feedback loop. He said that he started writing as a way to give back to the community, but what he found was that community responses give back to him. - -Another writer, [Stuart Keroff][13] says, "Writing for Opensource.com about the program I run at school gave me valuable feedback, encouragement, and support that I would not have had otherwise. Thousands upon thousands of people heard about the Asian Penguins because of the articles I wrote for the website." - -#### Exhibit expertise - -Writing can help you show that you've got expertise in a subject, and having writing samples on well-known websites can help you move toward better pay at your current job, get a new role at a different organization, or start bringing in writing income. - -[Jeff Macharyas][14] explains, "There are several ways I've benefitted from writing for Opensource.com. One, is the credibility I can add to my social media sites, resumes, bios, etc., just by saying 'I am a contributing writer to Opensource.com.' … I am hoping that I will be able to line up some freelance writing assignments, using my Opensource.com articles as examples, in the future." - -### Where should you publish your articles? - -That depends. Why are you writing? - -You can always post on your personal blog, but if you don't already have a lot of readers, your article might get lost in the noise online. 
- -Your project or company blog is a good option—again, you'll have to think about who will find it. How big is your company's reach? Or will you only get the attention of people who already give you their attention? - -Are you trying to reach a new audience? A bigger audience? That's where sites like Opensource.com can help. We attract more than a million page views a month, and more than 700,000 unique visitors. Plus you'll work with editors who will polish and help promote your article. - -We aren't the only site interested in your story. What are your favorite sites to read? They might want to help you share your story, and it's ok to pitch to multiple publications. Just be transparent about whether your article has been shared on other sites when working with editors. Occasionally, editors can even help you modify articles so that you can publish variations on multiple sites. - -#### Do you want to get rich by writing? (Don't count on it.) - -If your goal is to make money by writing, pitch your article to publications that have author budgets. There aren't many of them, the budgets don't tend to be huge, and you will be competing with experienced professional tech journalists who write seven days a week, 365 days a year, with large social media followings and networks. I'm not saying it can't be done—I've done it—but I am saying don't expect it to be easy or lucrative. It's not. (And frankly, I've found that nothing kills my desire to write much like having to write if I want to eat...) - -A couple of people have asked me whether Opensource.com pays for content, or whether I'm asking someone to write "for exposure." Opensource.com does not have an author budget, but I won't tell you to write "for exposure," either. You should write because it meets a need. - -If you already have a platform that meets your needs, and you don't need editing or social media and syndication help: Congratulations! You are privileged. - -### Spark joy! 
- -Most people don't know they have a story to tell, so I'm here to tell you that you probably do, and my team can help, if you just submit a proposal. - -Most people—myself included—could use help from other people. Sites like Opensource.com offer one way to get editing and social media services at no cost to the writer, which can be hugely valuable to someone starting out in their career, someone who isn't a native English speaker, someone who wants help with their project or organization, and so on. - -If you don't already write, I hope this article helps encourage you to get started. Or, maybe you already write. In that case, I hope this article makes you think about friends, colleagues, or people in your network who have great stories and experiences to share. I'd love to help you help them get started. - -I'll conclude with feedback I got from a recent writer, [Mario Corchero][15], a Senior Software Developer at Bloomberg. He says, "I wrote for Opensource because you told me to :)" (For the record, I "invited" him to write for our [PyCon speaker series][16] last year.) He added, "And I am extremely happy about it—not only did it help me at my workplace by gaining visibility, but I absolutely loved it! The article appeared in multiple email chains about Python and was really well received, so I am now looking to publish the second :)" Then he [wrote for us][17] again. - -I hope you find writing to be as fulfilling as we do. - -You can connect with Opensource.com editors, community moderators, and writers in our Freenode [IRC][18] channel #opensource.com, and you can reach me and the Opensource.com team by email at [open@opensource.com][19]. 
- - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/career-changing-magic-writing - -作者:[Rikki Endsley][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/rikki-endsley -[1]:http://tidyingup.com/books/the-life-changing-magic-of-tidying-up-hc -[2]:https://opensource.com/how-submit-article -[3]:http://linuxpromagazine.com/ -[4]:https://opensource.com/article/17/2/3-top-machine-learning-libraries-python -[5]:https://opensource.com/article/17/2/machine-learning-projects-tensorflow-raspberry-pi -[6]:https://opensource.com/article/17/9/messy-sensor-data -[7]:https://opensource.com/life/16/9/my-linux-story-musician -[8]:https://opensource.com/users/jim-hall -[9]:http://www.freedos.org/ -[10]:https://opensource.com/article/17/6/open-hardware-latin-america -[11]:https://opensource.com/users/don-watkins -[12]:https://opensource.com/community-moderator-program -[13]:https://opensource.com/education/15/3/asian-penguins-Linux-middle-school-club -[14]:https://opensource.com/users/jeffmacharyas -[15]:https://opensource.com/article/17/5/understanding-datetime-python-primer -[16]:https://opensource.com/tags/pycon -[17]:https://opensource.com/article/17/9/python-logging -[18]:https://opensource.com/article/16/6/getting-started-irc -[19]:mailto:open@opensource.com diff --git a/sources/talk/20180209 Why an involved user community makes for better software.md b/sources/talk/20180209 Why an involved user community makes for better software.md deleted file mode 100644 index 2b51023e44..0000000000 --- a/sources/talk/20180209 Why an involved user community makes for better software.md +++ /dev/null @@ -1,47 +0,0 @@ -Why an involved user community makes for better software -====== 
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_cubestalk.png?itok=Ozw4NhGW) - -Imagine releasing a major new infrastructure service based on open source software only to discover that the product you deployed had evolved so quickly that the documentation for the version you released is no longer available. At Bloomberg, we experienced this problem firsthand in our deployment of OpenStack. In late 2016, we spent six months testing and rolling out [Liberty][1] on our OpenStack environment. By that time, Liberty was about a year old, or two versions behind the latest build. - -As our users started taking advantage of its new functionality, we found ourselves unable to solve a few tricky problems and to answer some detailed questions about its API. When we went looking for Liberty's documentation, it was nowhere to be found on the OpenStack website. Liberty, it turned out, had been labeled "end of life" and was no longer supported by the OpenStack developer community. - -The disappearance wasn't intentional, but rather the result of a development community that had not anticipated the real-world needs of users. The documentation was stored in the source branch along with the source code, and, as Liberty was superseded by newer versions, it had been deleted. Worse, in the intervening months, the documentation for the newer versions had been completely restructured, and there was no way to easily rebuild it in a useful form. And believe me, we tried. - -After consulting other users and our vendor, we found that OpenStack's development cadence of two releases per year had created some unintended, yet deeply frustrating, consequences. Older releases that were typically still widely in use were being superseded and effectively killed for the purposes of support.
- -Eventually, conversations took place between OpenStack users and developers that resulted in changes. Documentation was moved out of the source branch, and users can now build documentation for whatever version they're using—more or less indefinitely. The problem was solved. (I'm especially indebted to my colleague [Chris Morgan][2], who was knee-deep in this effort and first wrote about it in detail for the [OpenStack Superuser blog][3].) - -Many other enterprise users were in the same boat as Bloomberg—running older versions of OpenStack that are three or four versions behind the latest build. There's a good reason for that: On average it takes a reasonably large enterprise about six months to qualify, test, and deploy a new version of OpenStack. And, from my experience, this is generally true of most open source infrastructure projects. - -For most of the past decade, companies like Bloomberg that adopted open source software relied on distribution vendors to incorporate, test, verify, and support much of it. These vendors provide long-term support (LTS) releases, which enable enterprise users to plan for upgrades on a two- or three-year cycle, knowing they'll still have support for a year or two, even if their deployment schedule slips a bit (as it often does). In the past few years, though, infrastructure software has advanced so rapidly that even the distribution vendors struggle to keep up. And customers of those vendors are yet another step removed, so many are choosing to deploy this type of software without vendor support. - -Losing vendor support also usually means there are no LTS releases; OpenStack, Kubernetes, Prometheus, and many more do not yet provide LTS releases of their own. As a result, I'd argue that healthy interaction between the development and user community should be high on the list of considerations for adoption of any open source infrastructure.
Do the developers building the software pay attention to the needs—and frustrations—of the people who deploy it and make it useful for their enterprise? - -There is a solid model for how this should happen. We recently joined the [Cloud Native Computing Foundation][4], part of The Linux Foundation. It has a formal [end-user community][5], whose members include organizations just like us: enterprises that are trying to make open source software useful to their internal customers. Corporate members also get a chance to have their voices heard as they vote to select a representative to serve on the CNCF [Technical Oversight Committee][6]. Similarly, in the OpenStack community, Bloomberg is involved in the semi-annual Operators Meetups, where companies who deploy and support OpenStack for their own users get together to discuss their challenges and provide guidance to the OpenStack developer community. - -The past few years have been great for open source infrastructure. If you're working for a large enterprise, the opportunity to deploy open source projects like the ones mentioned above has made your company more productive and more agile. - -As large companies like ours begin to consume more open source software to meet their infrastructure needs, they're going to be looking at a long list of considerations before deciding what to use: license compatibility, out-of-pocket costs, and the health of the development community are just a few examples. As a result of our experiences, we'll add the presence of a vibrant and engaged end-user community to the list. - -Increased reliance on open source infrastructure projects has also highlighted a key problem: People in the development community have little experience deploying the software they work on into production environments or supporting the people who use it to get things done on a daily basis. The fast pace of updates to these projects has created some unexpected problems for the people who deploy and use them. 
There are numerous examples I can cite where open source projects are updated so frequently that new versions will, usually unintentionally, break backwards compatibility. - -As open source increasingly becomes foundational to the operation of so many enterprises, this cannot be allowed to happen, and members of the user community should assert themselves accordingly and press for the creation of formal representation. In the end, the software can only be better. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/important-conversation - -作者:[Kevin P.Fleming][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/kpfleming -[1]:https://releases.openstack.org/liberty/ -[2]:https://www.linkedin.com/in/mihalis68/ -[3]:http://superuser.openstack.org/articles/openstack-at-bloomberg/ -[4]:https://www.cncf.io/ -[5]:https://www.cncf.io/people/end-user-community/ -[6]:https://www.cncf.io/people/technical-oversight-committee/ diff --git a/sources/talk/20180214 Can anonymity and accountability coexist.md b/sources/talk/20180214 Can anonymity and accountability coexist.md deleted file mode 100644 index 8b15ed169c..0000000000 --- a/sources/talk/20180214 Can anonymity and accountability coexist.md +++ /dev/null @@ -1,79 +0,0 @@ -Can anonymity and accountability coexist? -========================================= - -Anonymity might be a boon to more open, meritocratic organizational cultures. But does it conflict with another important value: accountability? 
- -![Can anonymity and accountability coexist?](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_Transparency_B.png?itok=SkP1mUt5 "Can anonymity and accountability coexist?") - -Image by: opensource.com - -Whistleblowing protections, crowdsourcing, anonymous voting processes, and even Glassdoor reviews—anonymous speech may take many forms in organizations. - -As well-established and valued as these anonymous feedback mechanisms may be, anonymous speech becomes a paradoxical idea when one considers how to construct a more open organization. While an inability to discern speaker identity seems non-transparent, an opportunity for anonymity may actually help achieve a _more inclusive and meritocratic_ environment. - -Before allowing outlets for anonymous speech to propagate, however, leaders of an organization should carefully reflect on whether an organization's "closed" practices make anonymity the unavoidable alternative to free, non-anonymous expression.
Though some assurance of anonymity is necessary in a few sensitive and exceptional scenarios, dependence on anonymous feedback channels within an organization may stunt the normalization of a culture that encourages diversity and community. - -### The benefits of anonymity - -In the case of [_Talley v. California (1960)_](https://supreme.justia.com/cases/federal/us/362/60/case.html), the Supreme Court voided a city ordinance prohibiting the anonymous distribution of handbills, asserting that "there can be no doubt that such an identification requirement would tend to restrict freedom to distribute information and thereby freedom of expression." Our judicial system has legitimized the notion that the protection of anonymity facilitates the expression of otherwise unspoken ideas. A quick scroll through any [subreddit](https://www.reddit.com/reddits/) exemplifies what the Court has codified: anonymity can foster [risk-taking creativity](https://www.reddit.com/r/sixwordstories/) and the [inclusion and support of marginalized voices](https://www.reddit.com/r/MyLittleSupportGroup/). Anonymity empowers individuals by granting them the safety to speak without [detriment to their reputations or, more importantly, their physical selves.](https://www.psychologytoday.com/blog/the-compassion-chronicles/201711/why-dont-victims-sexual-harassment-come-forward-sooner) - -For example, an anonymous suggestion program to garner ideas from members or employees in an organization may strengthen inclusivity and enhance the diversity of suggestions the organization receives. It would also make for a more meritocratic decision-making process, as anonymity would ensure that the quality of the articulated idea, rather than the rank and reputation of the articulator, is what's under evaluation. Allowing members to anonymously vote for anonymously-submitted ideas would help curb the influence of office politics in decisions affecting the organization's growth. 
- -### The harmful consequences of anonymity - -Yet anonymity and the open value of _accountability_ may come into conflict with one another. For instance, when establishing anonymous programs to drive greater diversity and more meritocratic evaluation of ideas, organizations may need to sacrifice the ability to hold speakers accountable for the opinions they express. - -Reliance on anonymous speech for serious organizational decision-making may also contribute to complacency in an organizational culture that falls short of openness. Outlets for anonymous speech may be as similar to open as crowdsourcing is—or rather, is not. [Like efforts to crowdsource creative ideas](https://opensource.com/business/10/4/why-open-source-way-trumps-crowdsourcing-way), anonymous suggestion programs may create an organizational environment in which diverse perspectives are only valued when an organization's leaders find it convenient to take advantage of members' ideas. - -A similar concern holds for anonymous whistle-blowing or concern submission. Though anonymity is important for sexual harassment and assault reporting, regularly redirecting member concerns and frustrations to a "complaints box" makes it more difficult for members to hold their organization's leaders accountable for acting on concerns. It may also hinder intra-organizational support networks and advocacy groups from forming around shared concerns, as members would have difficulty identifying others with similar experiences. For example, many working mothers might anonymously submit requests for a lactation room in their workplace, then falsely attribute a lack of action from leaders to a lack of similar concerns from others. - -### An anonymity checklist - -Organizations in which anonymous speech is the primary mode of communication, like subreddits, have generated innovative works and thought-provoking discourse.
These anonymous networks call attention to the potential for anonymity to help organizations pursue open values of diversity and meritocracy. Organizations in which anonymous speech is _not_ the main form of communication should acknowledge the strengths of anonymous speech, but carefully consider whether anonymity is the wisest means to the goal of sustainable openness. - -Leaders may find reflecting on the following questions useful prior to establishing outlets for anonymous feedback within their organizations: - -1\. _Availability of additional communication mechanisms_: Rather than investing time and resources into establishing a new, anonymous channel for communication, can the culture or structure of existing avenues of communication be reconfigured to achieve the same goal? This question echoes the open source affinity toward realigning, rather than reinventing, the wheel. - -2\. _Failure of other communication avenues:_ How and why is the organization ill-equipped to handle the sensitive issue/situation at hand through conventional (i.e. non-anonymous) means of communication? - -3\. _Consequences of anonymity:_ If implemented, could the anonymous mechanism stifle the normalization of face-to-face discourse about issues important to the organization's growth? If so, how can leaders ensure that members consider the anonymous communication channel a "last resort," without undermining the legitimacy of the anonymous system? - -4\. _Designing the anonymous communication channel:_ How can accountability be promoted in anonymous communication without the ability to determine the identity of speakers? - -5\. _Long-term considerations_: Is the anonymous feedback mechanism sustainable, or a temporary solution to a larger organizational issue?
If the latter, is [launching a campaign](https://opensource.com/open-organization/16/6/8-steps-more-open-communications) to address overarching problems with the organization's communication culture feasible? - -These five points build off of one another to help leaders recognize the tradeoffs involved in legitimizing anonymity within their organization. Careful deliberation on these questions may help prevent outlets for anonymous speech from leading to a dangerous sense of complacency with a non-inclusive organizational structure. - -About the author ----------------- - -[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/osdc_default_avatar_1.png?itok=mmbfqFXm)](https://opensource.com/users/susiechoi) - -Susie Choi - Susie is an undergraduate student studying computer science at Duke University. She is interested in the implications of technological innovation and open source principles for issues relating to education and socioeconomic inequality. - -[More about me](https://opensource.com/users/susiechoi) - -* * * - -via: [https://opensource.com/open-organization/18/1/balancing-accountability-and-anonymity](https://opensource.com/open-organization/18/1/balancing-accountability-and-anonymity) - -作者: [Susie Choi](https://opensource.com/users/susiechoi) 选题者: [@lujun9972](https://github.com/lujun9972) 译者: [译者ID](https://github.com/译者ID) 校对: [校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/sources/talk/20180216 Q4OS Makes Linux Easy for Everyone.md b/sources/talk/20180216 Q4OS Makes Linux Easy for Everyone.md deleted file mode 100644 index a868ed28d5..0000000000 --- a/sources/talk/20180216 Q4OS Makes Linux Easy for Everyone.md +++ /dev/null @@ -1,140 +0,0 @@ -Q4OS Makes Linux Easy for Everyone -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os-main.png?itok=WDatcV-a) - -Modern Linux 
distributions tend to target a variety of users. Some claim to offer a flavor of the open source platform that anyone can use. And, I’ve seen some such claims succeed with aplomb, while others fall flat. [Q4OS][1] is one of those odd distributions that doesn’t bother to make such a claim but pulls off the feat anyway. - -So, who is the primary market for Q4OS? According to its website, the distribution is a: - -“fast and powerful operating system based on the latest technologies while offering highly productive desktop environment. We focus on security, reliability, long-term stability and conservative integration of verified new features. System is distinguished by speed and very low hardware requirements, runs great on brand new machines as well as legacy computers. It is also very applicable for virtualization and cloud computing.” - -What’s very interesting here is that the Q4OS developers offer commercial support for the desktop. Said support can cover the likes of system customization (including core level API programming) as well as user interface modifications. - -Once you understand this (and have installed Q4OS), the target audience becomes quite obvious: Business users looking for a Windows XP/7 replacement. But that should not prevent home users from giving Q4OS a try. It’s a Linux distribution that has a few unique tools that come together to make a solid desktop distribution. - -Let’s take a look at Q4OS and see if it’s a version of Linux that might work for you. - -### What Q4OS is all about - -Q4OS does an admirable job of being the open source equivalent of Windows XP/7. Out of the box, it pulls this off with the help of the [Trinity Desktop][2] (a fork of KDE). With a few tricks up its sleeve, Q4OS turns the Trinity Desktop into a remarkably Windows-like desktop (Figure 1). - -![default desktop][4] - -Figure 1: The Q4OS default desktop.
- -[Used with permission][5] - -When you fire up the desktop, you will be greeted by a Welcome screen that makes it very easy for new users to start setting up their desktop with just a few clicks. From this window, you can: - - * Run the Desktop Profiler (which allows you to select which desktop environment to use as well as between a full-featured desktop, a basic desktop, or a minimal desktop—Figure 2). - - * Install applications (which opens the Synaptic Package Manager). - - * Install proprietary codecs (which installs all the necessary media codecs for playing audio and video). - - * Turn on Desktop effects (if you want more eye candy, turn this on). - - * Switch to Kickoff start menu (switches from the default start menu to the newer kickoff menu). - - * Set Autologin (allows you to set login such that it won’t require your password upon boot). - - - - -![Desktop Profiler][7] - -Figure 2: The Desktop Profiler allows you to further customize your desktop experience. - -[Used with permission][5] - -If you want to install a different desktop environment, open up the Desktop Profiler and then click the Desktop environments drop-down, in the upper left corner of the window. A new window will appear, where you can select your desktop of choice from the drop-down (Figure 3). Once back at the main Profiler Window, select which type of desktop profile you want, and then click Install. - -![Desktop Profiler][9] - -Figure 3: Installing a different desktop is quite simple from within the Desktop Profiler. - -[Used with permission][5] - -Note that installing a different desktop will not wipe the default desktop. Instead, it will allow you to select between the two desktops (at the login screen). 
- -### Installed software - -After selecting the full-featured desktop from the Desktop Profiler, I found the following user applications ready to go: - - * LibreOffice 5.2.7.2 - - * VLC 2.2.7 - - * Google Chrome 64.0.3282 - - * Thunderbird 52.6.0 (Includes Lightning addon) - - * Synaptic 0.84.2 - - * Konqueror 14.0.5 - - * Firefox 52.6.0 - - * Shotwell 0.24.5 - - - - -Obviously some of those applications are well out of date. Since this distribution is based on Debian, we can run an update/upgrade with the commands: -``` -sudo apt update - -sudo apt upgrade - -``` - -However, after running both commands, it seems everything is up to date. This particular release (2.4) is an LTS release (supported until 2022). Because of this, expect software to be a bit behind. If you want to test out the bleeding edge version (based on Debian “Buster”), you can download the testing image [here][10]. - -### Security oddity - -There is one rather disturbing “feature” found in Q4OS. In the developer’s quest to make the distribution closely resemble Windows, they’ve made it such that installing software (from the command line) doesn’t require a password! You read that correctly. If you open the Synaptic package manager, you’re asked for a password. However (and this is a big however), open up a terminal window and issue a command like sudo apt-get install gimp. At this point, the software will install… without requiring the user to type a sudo password. - -Did you cringe at that? You should. - -I get it, the developers want to ease away the burden of Linux and make a platform the masses could easily adapt to. They’ve done a splendid job of doing just that. However, in the process of doing so, they’ve bypassed a crucial means of security. Is having as near an XP/7 clone as you can find on Linux worth that lack of security? I would say that if it enables more people to use Linux, then yes.
But the fact that they’ve required a password for Synaptic (the GUI tool most Windows users would default to for software installation) and not for the command-line tool makes no sense. On top of that, bypassing passwords for the apt and dpkg commands could make for a significant security issue. - -Fear not, there is a fix. For those that prefer to require passwords for the command line installation of software, you can open up the file /etc/sudoers.d/30_q4os_apt and comment out the following three lines: -``` -%sudo ALL = NOPASSWD: /usr/bin/apt-get * - -%sudo ALL = NOPASSWD: /usr/bin/apt-key * - -%sudo ALL = NOPASSWD: /usr/bin/dpkg * - -``` - -Once commented out, save and close the file, and reboot the system. At this point, users will now be prompted for a password, should they run the apt-get, apt-key, or dpkg commands. - -### A worthy contender - -Setting aside the security curiosity, Q4OS is one of the best attempts at recreating Windows XP/7 I’ve come across in a while. If you have users who fear change, and you want to migrate them away from Windows, this distribution might be exactly what you need. I would, however, highly recommend you re-enable passwords for the apt-get, apt-key, and dpkg commands… just to be on the safe side. - -In any case, the addition of the Desktop Profiler, and the ability to easily install alternative desktops, makes Q4OS a distribution that just about anyone could use. - -Learn more about Linux through the free ["Introduction to Linux" ][11]course from The Linux Foundation and edX. 
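As a practical footnote to the sudoers fix above, the edit can be rehearsed on a scratch copy before touching the real file. In this hedged sketch, only the path /etc/sudoers.d/30_q4os_apt and the three NOPASSWD rules come from the article; the scratch-file workflow itself is illustrative:

```shell
# Recreate the three NOPASSWD rules from the article in a scratch file.
# (The real file lives at /etc/sudoers.d/30_q4os_apt.)
cat > 30_q4os_apt <<'EOF'
%sudo ALL = NOPASSWD: /usr/bin/apt-get *
%sudo ALL = NOPASSWD: /usr/bin/apt-key *
%sudo ALL = NOPASSWD: /usr/bin/dpkg *
EOF

# Comment out every rule, as the article suggests doing by hand.
sed -i 's/^%sudo/# %sudo/' 30_q4os_apt

# All three rules should now be commented out.
grep -c '^# %sudo' 30_q4os_apt   # prints: 3
```

When editing the real file, validate it with `sudo visudo -c -f /etc/sudoers.d/30_q4os_apt` before ending your session; a syntax error in a sudoers fragment can lock you out of sudo entirely.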
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/2/q4os-makes-linux-easy-everyone - -作者:[JACK WALLEN][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://q4os.org -[2]:https://www.trinitydesktop.org/ -[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_1.jpg?itok=dalJk9Xf (default desktop) -[5]:/licenses/category/used-permission -[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_2.jpg?itok=GlouIm73 (Desktop Profiler) -[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_3.jpg?itok=riSTP_1z (Desktop Profiler) -[10]:https://q4os.org/downloads2.html -[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/talk/20180220 4 considerations when naming software development projects.md b/sources/talk/20180220 4 considerations when naming software development projects.md deleted file mode 100644 index 1e1add0b68..0000000000 --- a/sources/talk/20180220 4 considerations when naming software development projects.md +++ /dev/null @@ -1,91 +0,0 @@ -4 considerations when naming software development projects -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb) - -Working on a new open source project, you're focused on the code—getting that great new idea released so you can share it with the world. And you'll want to attract new contributors, so you need a terrific **name** for your project. - -We've all read guides for creating names, but how do you go about choosing the right one? 
Keeping that cool science fiction reference you're using internally might feel fun, but it won't mean much to new users you're trying to attract. A better approach is to choose a name that's memorable to new users and developers searching for your project. - -Names set expectations. Your project's name should showcase its functionality in the ecosystem and explain to users what your story is. In the crowded open source software world, it's important not to get entangled with other projects out there. Taking a little extra time now, before sending out that big announcement, will pay off later. - -Here are four factors to keep in mind when choosing a name for your project. - -### What does your project's code do? - -Start with your project: What does it do? You know the code intimately—but can you explain what it does to a new developer? Can you explain it to a CTO or non-developer at another company? What kinds of problems does your project solve for users? - -Your project's name needs to reflect what it does in a way that makes sense to newcomers who want to use or contribute to your project. That means considering the ecosystem for your technology and understanding if there are any naming styles or conventions used for similar kinds of projects. Imagine that you're trying to evaluate someone else's project: Would the name be appealing to you? - -Any distribution channels you push to are also part of the ecosystem. If your code will be in a Linux distribution, [npm][1], [CPAN][2], [Maven][3], or in a Ruby Gem, you need to review any naming standards or common practices for that package manager. Review any similar existing names in that distribution channel, and get a feel for naming styles of other programs there. - -### Who are the users and developers you want to attract? - -The hardest aspect of choosing a new name is putting yourself in the shoes of new users. 
You built this project; you already know how powerful it is, so while your cool name may sound great, it might not draw in new people. You need a name that is interesting to someone new, and that tells the world what problems your project solves. - -Great names depend on what kind of users you want to attract. Are you building an [Eclipse][4] plugin or npm module that's focused on developers? Or an analytics toolkit that brings visualizations to the average user? Understanding your user base and the kinds of open source contributors you want to attract is critical. - -Take the time to think this through. Who does your project most appeal to, and how can it help them do their job? What kinds of problems does your code solve for end users? Understanding the target user helps you focus on what users need, and what kind of names or brands they respond to. - -When you're open source, this equation changes a bit—your target is not just users; it's also developers who will want to contribute code back to your project. You're probably a developer, too: What kinds of names and brands excite you, and what images would entice you to try out someone else's new project? - -Once you have a better feel of what users and potential contributors expect, use that knowledge to refine your names. Remember, you need to step outside your project and think about how the name would appeal to someone who doesn't know how amazing your code is—yet. Once someone gets to your website, does the name synchronize with what your product does? If so, move to the next step. - -### Who else is using similar names for software?
 - -Now that you've tried on a user's shoes to evaluate potential names, what's next? Figuring out if anyone else is already using a similar name. It sometimes feels like all the best names are taken—but if you search carefully, you'll find that's not true. - -The first step is to do a few web searches using your proposed name. Search for the name, plus "software", "open source", and a few keywords for the functionality that your code provides. Look through several pages of results for each search to see what's out there in the software world. - -Unless you're using a completely made-up word, you'll likely get a lot of hits. The trick is understanding which search results might be a problem. Again, put on the shoes of a new user to your project. If you were searching for this great new product and saw the other search results along with your project's homepage, would you confuse them? Are the other search results even software products? If your product solves a similar problem to other search results, that's a problem: Users may gravitate to an existing product instead of a new one. - -Similar non-software product names are rarely an issue unless they are famous trademarks—like Nike or Red Bull, for example—where the companies behind them won't look kindly on anyone using a similar name. 
Using the same name as a less famous non-software product might be OK, depending on how big your project gets. - -### How big do you plan to grow your project? - -Are you building a new node module or command-line utility, but not planning a career around it? Is your new project a million-dollar business idea, and you're thinking startup? Or is it something in between? - -If your project is a basic developer utility—something useful that developers will integrate into their workflow—then you have enough data to choose a name. Think through the ecosystem and how a new user would see your potential names, and pick one. You don't need perfection, just a name you're happy with that seems right for your project. - -If you're planning to build a business around your project, use these tips to develop a shortlist of names, but do more vetting before announcing the winner. Using a name for a business or major project requires some level of registered trademark search, which is usually performed by a law firm. - -### Common pitfalls - -Finally, when choosing a name, avoid these common pitfalls: - - * Using an esoteric acronym. If new users don't understand the name, they'll have a hard time finding you. - - * Using current pop-culture references. If you want your project's appeal to last, pick a name that will last. - - * Failing to consider non-English speakers. Does the name have a specific meaning in another language that might be confusing? - - * Using off-color jokes or potentially unsavory references. Even if it seems funny to developers, it may fall flat for newcomers and turn away contributors. - - - - -Good luck—and remember to take the time to step out of your shoes and consider how a newcomer to your project will think of the name. 
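The web and registry searches described above lend themselves to a small helper script. The sketch below is purely illustrative—the search-URL templates, the search engine, and the example name are assumptions for demonstration, not part of the original article—and it only builds the queries that a human reviewer would then walk through by hand:

```python
# Hypothetical helper for the manual name-vetting process described above.
# The registry URL templates and the example name are assumptions for
# illustration; a person still has to review the actual search results.
from urllib.parse import quote_plus

REGISTRY_TEMPLATES = {
    "npm": "https://www.npmjs.com/search?q={name}",
    "CPAN": "https://metacpan.org/search?q={name}",
    "Maven": "https://search.maven.org/search?q={name}",
    "RubyGems": "https://rubygems.org/search?query={name}",
}

def vetting_queries(name, keywords=("software", "open source")):
    """Build the web and registry searches a reviewer should walk through."""
    web = [
        "https://duckduckgo.com/?q=" + quote_plus(f"{name} {kw}")
        for kw in keywords
    ]
    registries = {
        reg: template.format(name=quote_plus(name))
        for reg, template in REGISTRY_TEMPLATES.items()
    }
    return web, registries

web, registries = vetting_queries("quantum-widget")
for url in web + sorted(registries.values()):
    print(url)
```

Swapping in whichever package managers your ecosystem actually uses is the point of keeping the templates in a plain dictionary.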
 - -------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/choosing-project-names-four-key-considerations - -作者:[Shane Curcuru][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/shane-curcuru -[1]:https://www.npmjs.com/ -[2]:https://www.cpan.org/ -[3]:https://maven.apache.org/ -[4]:https://www.eclipse.org/ diff --git a/sources/talk/20180221 3 warning flags of DevOps metrics.md b/sources/talk/20180221 3 warning flags of DevOps metrics.md deleted file mode 100644 index a103a2bbca..0000000000 --- a/sources/talk/20180221 3 warning flags of DevOps metrics.md +++ /dev/null @@ -1,42 +0,0 @@ -3 warning flags of DevOps metrics -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) - -Metrics. Measurements. Data. Monitoring. Alerting. These are all big topics for DevOps and for cloud-native infrastructure and application development more broadly. In fact, ACM Queue, a magazine published by the Association for Computing Machinery, recently devoted an [entire issue][1] to the topic. - -I've argued before that we conflate a lot of things under the "metrics" term, from key performance indicators to critical failure alerts to data that may be vaguely useful someday for something or other. But that's a topic for another day. What I want to discuss here is how metrics affect behavior. - -In 2008, Daniel Ariely published [Predictably Irrational][2], one of a number of books written around that time that introduced behavioral psychology and behavioral economics to the general public. One memorable quote from that book is the following: "Human beings adjust behavior based on the metrics they're held against. 
Anything you measure will impel a person to optimize his score on that metric. What you measure is what you'll get. Period." - -This shouldn't be surprising. It's a finding that's been repeatedly confirmed by research. It should also be familiar to just about anyone with business experience. It's certainly not news to anyone in sales management, for example. Base sales reps' (or their managers'!) bonuses solely on revenue, and they'll discount whatever it takes to maximize revenue even if it puts margin in the toilet. Conversely, want the sales force to push a new product line—which will probably take extra effort—but skip the [spiffs][3]? Probably not happening. - -And lest you think I'm unfairly picking on sales, this behavior is pervasive, all the way up to the CEO, as Ariely describes in [a 2010 Harvard Business Review article][4]. "CEOs care about stock value because that's how we measure them. If we want to change what they care about, we should change what we measure," writes Ariely. - -Think developers and operations folks are immune from such behaviors? Think again. Let's consider some problematic measurements. They're not all bad or wrong but, if you rely too much on them, warning flags should go up. - -### Three warning signs for DevOps metrics - -First, there are the quantity metrics. Lines of code or bugs fixed are perhaps self-evidently absurd. But there are also the deployments per week or per month that are so widely quoted to illustrate DevOps velocity relative to more traditional development and deployment practices. Speed is good. It's one of the reasons you're probably doing DevOps—but don't reward people on it excessively relative to quality and other measures. - -Second, it's obvious that you want to reward individuals who do their work quickly and well. Yes. But. 
Whether it's your local pro sports team or some project team you've been on, you can probably name someone who was really a talent, but was just so toxic and such a distraction for everyone else that they were a net negative for the team. Moral: Don't provide incentives that solely encourage individual behaviors. You may also want to put in place programs, such as peer rewards, that explicitly value collaboration. [As Red Hat's Jen Krieger told me][5] in a podcast last year: "Having those automated pots of awards, or some sort of system that's tracked for that, can only help teams feel a little more cooperative with one another as in, 'Hey, we're all working together to get something done.'" - -The third red flag area is incentives that don't actually incent because neither the individual nor the team has a meaningful ability to influence the outcome. It's often a good thing when DevOps metrics connect to business goals and outcomes. For example, customer ticket volume relates to perceived shortcomings in applications and infrastructure. And it's also a reasonable proxy for overall customer satisfaction, which certainly should be of interest to the executive suite. The best reward systems to drive DevOps behaviors should be tied to specific individual and team actions as opposed to just company success generally. - -You've probably noticed a common theme. That theme is balance. Velocity is good but so is quality. Individual achievement is good but not when it damages the effectiveness of the team. The overall success of the business is certainly important, but the best reward systems also tie back to actions and behaviors within development and operations. 
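To make that theme of balance concrete, here is a deliberately simplified sketch. The metric names, weights, and numbers are invented for illustration only and are not from the article; the point is just that a velocity-only reward and a balanced reward can rank the same two teams in opposite orders:

```python
# Illustrative only: compare a velocity-only score against a score that
# also weighs quality. All metric names, weights, and figures below are
# invented for demonstration.

def velocity_only_score(team):
    # Rewards nothing but deployment count -- the first warning flag.
    return team["deploys_per_week"]

def balanced_score(team, w_velocity=0.5, w_quality=0.5):
    # Quality expressed as change success rate in [0, 1], scaled so the
    # two terms are comparable.
    return (w_velocity * team["deploys_per_week"]
            + w_quality * team["change_success_rate"] * 10)

fast_but_sloppy = {"deploys_per_week": 9, "change_success_rate": 0.5}
steady = {"deploys_per_week": 6, "change_success_rate": 0.95}

# A velocity-only reward ranks the sloppy team first...
assert velocity_only_score(fast_but_sloppy) > velocity_only_score(steady)
# ...while the balanced reward ranks the steady team first.
assert balanced_score(fast_but_sloppy) < balanced_score(steady)
```

The specific weights matter far less than the fact that more than one dimension is being measured at all.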
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/three-warning-flags-devops-metrics - -作者:[Gordon Haff][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ghaff -[1]:https://queue.acm.org/issuedetail.cfm?issue=3178368 -[2]:https://en.wikipedia.org/wiki/Predictably_Irrational -[3]:https://en.wikipedia.org/wiki/Spiff -[4]:https://hbr.org/2010/06/column-you-are-what-you-measure -[5]:http://bitmason.blogspot.com/2015/09/podcast-making-devops-succeed-with-red.html diff --git a/sources/talk/20180222 3 reasons to say -no- in DevOps.md b/sources/talk/20180222 3 reasons to say -no- in DevOps.md deleted file mode 100644 index 5f27fbaf47..0000000000 --- a/sources/talk/20180222 3 reasons to say -no- in DevOps.md +++ /dev/null @@ -1,105 +0,0 @@ -3 reasons to say 'no' in DevOps -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_DesirePath.png?itok=N_zLVWlK) - -DevOps, it has often been pointed out, is a culture that emphasizes mutual respect, cooperation, continual improvement, and aligning responsibility with authority. - -Instead of saying no, it may be helpful to take a hint from improv comedy and say, "Yes, and..." or "Yes, but...". This opens the request from the binary nature of "yes" and "no" toward having a nuanced discussion around priority, capacity, and responsibility. - -However, sometimes you have no choice but to give a hard "no." These should be rare and exceptional, but they will occur. - -### Protecting yourself - -Both Agile and DevOps have been touted as ways to improve value to the customer and business, ultimately leading to greater productivity. 
While reasonable people can understand that the improvements will take time to yield results, and that they will mean higher-quality work and a better quality of life for those performing it, I think we can all agree that not everyone is reasonable. The less understanding that a person has of the particulars of a given task, the more likely they are to expect that it is a combination of "simple" and "easy." - -"You told me that [Agile/DevOps] is supposed to be all about us getting more productivity. Since we're doing [Agile/DevOps] now, you can take care of my need, right?" - -Like "Agile," some people have tried to use "DevOps" as a stick to coerce people to do more work than they can handle. Whether the person confronting you with this question is asking in earnest or is being manipulative doesn't really matter. - -The biggest areas of concern for me have been **capacity**, **firefighting/maintenance**, **level of quality**, and **"future me."** Many of these ultimately tie back to capacity, but they relate to a long-term effort in different respects. - -#### Capacity - -Capacity is simple: You know what your workload is, and how much flex occurs due to the unexpected. Exceeding your capacity will not only cause undue stress, but it could decrease the quality of your work and can injure your reputation with regards to making commitments. - -There are several avenues of discussion that can happen from here. The simplest is "Your request is reasonable, but I don't have the capacity to work on it." This seldom ends the conversation, and a discussion will often run up the flagpole to clarify priorities or reassign work. - -#### Firefighting/maintenance - -It's possible that the thing that you're being asked for won't take long to do, but it will require maintenance that you'll be expected to perform, including keeping it alive and fulfilling requests for it on behalf of others. 
 - -An example in my mind is the Jenkins server that you're asked to stand up for someone else, but somehow end up being the sole owner and caretaker of. Even if you're careful to scope your level of involvement early on, you might be saddled with responsibility that you did not agree to. Should the service become unavailable, for example, you might be the one who is called. You might be called on to help triage a build that is failing. This is additional firefighting and maintenance work that you did not sign up for and now must fend off. - -This needs to be addressed as soon and publicly as possible. I'm not saying that (again, for example) standing up a Jenkins instance is a "no," but rather a ["Yes, but"][1]—where all parties understand that they take on the long-term care, feeding, and use of the product. Make sure to include all your bosses in this conversation so they can have your back. - -#### Level of quality - -There may be times when you are presented with requirements that include a timeframe that is...problematic. Perhaps you could get a "minimum (cough) viable (cough) product" out in that time. But it wouldn't be resilient or in any way ready for production. It might impact your time and productivity. It could end up hurting your reputation. - -The resulting conversation can get into the weeds, with lots of horse-trading about time and features. Another approach is to ask "What is driving this deadline? Where did that timeframe come from?" Discussing the bigger picture might lead to a better option, or reveal that the timeline doesn't actually depend on the original date. - -#### Future me - -Ultimately, we are trying to protect "future you." These are lessons learned from the many times that "past me" has knowingly left "current me" to clean up. Sometimes we joke that "that's a problem for 'future me,'" but don't forget that 'future you' will just be 'you' eventually. I've cursed "past me" as a jerk many times. 
Do your best to keep other people from making "past you" be a jerk to "future you." - -I recognize that I have a significant amount of privilege in this area, but if you are told that you cannot say "no" on behalf of your own welfare, you should consider whether you are respected enough to maintain your autonomy. - -### Protecting the user experience - -Everyone should be an advocate for the user. Regardless of whether that user is right next to you, someone down the hall, or someone you have never met and likely never will, you must care for the customer. - -Behavior that is actively hostile to the user—whether it's a poor user experience or something more insidious like quietly violating reasonable expectations of privacy—deserves a "no." A common example of this would be automatically including people into a service or feature, forcing them to explicitly opt-out. - -If a "no" is not welcome, it bears considering, or explicitly asking, what the company's relationship with its customers is, who the company thinks of as its customers, and what it thinks of them. - -When bringing up your objections, be clear about what they are. Additionally, remember that your coworkers are people too, and make it clear that you are not attacking their character; you simply find the idea disagreeable. - -### Legal, ethical, and moral grounds - -There might be situations that don't feel right. A simple test is to ask: "If this were to become public, or come up in a lawsuit deposition, would it be a scandal?" - -#### Ethics and morals - -If you are asked to lie, that should be a hard no. - -Remember if you will the Volkswagen Emissions Scandal of 2015? The emissions systems software was written such that it recognized that the vehicle was operated in a manner consistent with an emissions test, and would run more efficiently than under normal driving conditions. 
 - -I don't know what you do in your job, or what your office is like, but I have a hard time imagining the Individual Contributor software engineer coming up with that as a solution on their own. In fact, I imagine a comment along the lines of "the engine engineers can't make their product pass the tests, so I need to hack the performance so that it will!" - -When the Volkswagen scandal became public, Volkswagen officials blamed the engineers. I find it unlikely that it came from the mind and IDE of an individual software engineer. Rather, it more likely indicates significant systemic problems within the company culture. - -If you are asked to lie, get the request in writing, citing that the circumstances are suspect. If you are so privileged, decide whether you may decline the request on the basis that it is fundamentally dishonest and hostile to the customer, and would break the public's trust. - -#### Legal - -I am not a lawyer. If your work should involve legal matters, including requests from law enforcement, involve your company's legal counsel or speak with a private lawyer. - -With that said, if you are asked to provide information for law enforcement, I believe that you are within your rights to see the documentation that justifies the request. There should be a signed warrant. You should be provided with a copy of it, or make a copy of it yourself. - -When in doubt, begin recording and request legal counsel. - -It has been well documented that especially in the early years of the U.S. Patriot Act, law enforcement placed so many requests of telecoms that they became standard work, and the paperwork started slipping. While tedious and potentially stressful, make sure that the legal requirements for disclosure are met. - -If for no other reason, we would not want the good work of law enforcement to be put at risk because key evidence was improperly acquired, making it inadmissible. - -### Wrapping up - -You are going to be your single biggest advocate. 
There may be times when you are asked to compromise for the greater good. However, you should feel that your dignity is preserved, your autonomy is respected, and that your morals remain intact. - -If you don't feel that this is the case, get it on record, doing your best to communicate it calmly and clearly. - -Nobody likes being declined, but if you don't have the ability to say no, there may be a bigger problem than your environment not being DevOps. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/3-reasons-say-no-devops - -作者:[H. "Waldo" Grunenwal][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/gwaldo -[1]:http://gwaldo.blogspot.com/2015/12/fear-and-loathing-in-systems.html diff --git a/sources/talk/20180223 Plasma Mobile Could Give Life to a Mobile Linux Experience.md b/sources/talk/20180223 Plasma Mobile Could Give Life to a Mobile Linux Experience.md deleted file mode 100644 index 583714836e..0000000000 --- a/sources/talk/20180223 Plasma Mobile Could Give Life to a Mobile Linux Experience.md +++ /dev/null @@ -1,123 +0,0 @@ -Plasma Mobile Could Give Life to a Mobile Linux Experience -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/plasma-mobile_0.png?itok=uUIQFRcm) - -In the past few years, it’s become clear that, outside of powering Android, Linux on mobile devices has been a resounding failure. Canonical came close, even releasing devices running Ubuntu Touch. Unfortunately, the idea of [Scopes][1] was doomed before it touched down on its first piece of hardware and subsequently died a silent death. - -The next best hope for mobile Linux comes in the form of the [Samsung DeX][2] program. 
With DeX, users will be able to install an app (Linux On Galaxy—not available yet) on their Samsung devices, which would in turn allow them to run a full-blown Linux distribution. The caveat here is that you’ll be running both Android and Linux at the same time—which is not exactly an efficient use of resources. On top of that, most Linux distributions aren’t designed to run on such small form factors. The good news for DeX is that, when you run Linux on Galaxy and dock your Samsung device to DeX, that Linux OS will be running on your connected monitor—so form factor issues need not apply. - -Outside of those two options, a pure Linux on mobile experience doesn’t exist. Or does it? - -You may have heard of the [Purism Librem 5][3]. It’s a crowdfunded device that promises to finally bring a pure Linux experience to the mobile landscape. This device will be powered by an i.MX8 SoC chip, so it should run most any Linux operating system. - -Out of the box, the device will run an encrypted version of [PureOS][4]. However, last year Purism and KDE joined together to create a mobile version of the KDE desktop that could run on the Librem 5. Recently [ISOs were made available for a beta version of Plasma Mobile][5] and, judging from first glance, they’re onto something that makes perfect sense for a mobile Linux platform. I’ve booted up a live instance of Plasma Mobile to kick the tires a bit. - -What I saw seriously impressed me. Let’s take a look. - -### Testing platform - -Before you download the ISO and attempt to fire it up as a VirtualBox VM, you should know that it won’t work well. Because Plasma Mobile uses Wayland (and VirtualBox has yet to play well with that particular X replacement), you’ll find VirtualBox VM a less-than-ideal platform for the beta release. Also know that the Calamares installer doesn’t function well either. In fact, I have yet to get the OS installed on a non-mobile device. 
And since I don’t own a supported mobile device, I’ve had to run it as a live session on either a laptop or an [Antsle][6] antlet VM every time. - -### What makes Plasma Mobile special? - -This could be easily summed up by saying, Plasma Mobile got it all right. Instead of Canonical re-inventing a perfectly functioning wheel, the developers of KDE simply re-tooled the interface such that a full-functioning Linux distribution (complete with all the apps you’ve grown to love and depend upon) could work on a smaller platform. And they did a spectacular job. Even better, they’ve created an interface that any user of a mobile device could instantly feel familiar with. - -What you have with the Plasma Mobile interface (Figure 1) are the elements common to most Android home screens: - - * Quick Launchers - - * Notification Shade - - * App Drawer - - * Overview button (so you can go back to a previously used app, still running in memory) - - * Home button - - - - -![KDE mobile][8] - -Figure 1: The Plasma Mobile desktop interface. - -[Used with permission][9] - -Because KDE went this route with the UX, it means there’s zero learning curve. And because this is an actual Linux platform, it takes that user-friendly mobile interface and overlays it onto a system that allows for easy installation and usage of apps like: - - * GIMP - - * LibreOffice - - * Audacity - - * Clementine - - * Dropbox - - * And so much more - - - - -Unfortunately, without being able to install Plasma Mobile, you cannot really kick the tires too much, as the live user doesn’t have permission to install applications. However, once Plasma Mobile is fully installed, the Discover software center will allow you to install a host of applications (Figure 2). - - -![Discover center][11] - -Figure 2: The Discover software center on Plasma Mobile. 
 - -[Used with permission][9] - -Swipe up (or scroll down—depending on what hardware you’re using) to reveal the app drawer, where you can launch all of your installed applications (Figure 3). - -![KDE mobile][13] - -Figure 3: The Plasma Mobile app drawer ready to launch applications. - -[Used with permission][9] - -Open up a terminal window and you can take care of standard Linux admin tasks, such as using SSH to log into a remote server. Using apt, you can install all of the developer tools you need to make Plasma Mobile a powerful development platform. - -We’re talking serious mobile power—either from a phone or a tablet. - -### A ways to go - -Clearly Plasma Mobile is still way too early in development for it to be of any use to the average user. And because most virtual machine technology doesn’t play well with Wayland, you’re likely to get too frustrated with the current ISO image to thoroughly try it out. However, even without being able to fully install the platform (or get full usage out of it), it’s obvious KDE and Purism are going to have the ideal platform that will put Linux into the hands of mobile users. - -If you want to test the waters of Plasma Mobile on an actual mobile device, a handy list of supported hardware can be found [here][14] (for PostmarketOS) or [here][15] (for Halium). If you happen to be lucky enough to have a device that also includes Wi-Fi support, you’ll find you get more out of testing the environment. - -If you do have a supported device, you’ll need to use either [PostmarketOS][16] (a touch-optimized, pre-configured Alpine Linux that can be installed on smartphones and other mobile devices) or [Halium][15] (an application that creates a minimal Android layer which allows a new interface to interact with the Android kernel). Using Halium further limits the number of supported devices, as it has only been built for select hardware. 
However, if you’re willing, you can build your own Halium images (documentation for this process is found [here][17]). If you want to give PostmarketOS a go, [here are the necessary build instructions][18]. - -Suffice it to say, Plasma Mobile isn’t nearly ready for mass market. If you’re a Linux enthusiast and want to give it a go, let either PostmarketOS or Halium help you get the operating system up and running on your device. Otherwise, your best bet is to wait it out and hope Purism and KDE succeed in bringing this outstanding mobile take on Linux to the masses. - -Learn more about Linux through the free ["Introduction to Linux"][19] course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/2/plasma-mobile-could-give-life-mobile-linux-experience - -作者:[JACK WALLEN][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://launchpad.net/unity-scopes -[2]:http://www.samsung.com/global/galaxy/apps/samsung-dex/ -[3]:https://puri.sm/shop/librem-5/ -[4]:https://www.pureos.net/ -[5]:http://blog.bshah.in/2018/01/26/trying-out-plasma-mobile/ -[6]:https://antsle.com/ -[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_1.jpg?itok=EK3_vFVP (KDE mobile) -[9]:https://www.linux.com/licenses/category/used-permission -[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_2.jpg?itok=CiUQ-MnB (Discover center) -[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_3.jpg?itok=i6V8fgK8 (KDE mobile) -[14]:http://blog.bshah.in/2018/02/02/trying-out-plasma-mobile-part-two/ -[15]:https://github.com/halium/projectmanagement/issues?q=is%3Aissue+is%3Aopen+label%3APorts -[16]:https://postmarketos.org/ 
-[17]:http://docs.halium.org/en/latest/ -[18]:https://wiki.postmarketos.org/wiki/Installation_guide -[19]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md b/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md deleted file mode 100644 index 8fe1b6f273..0000000000 --- a/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md +++ /dev/null @@ -1,91 +0,0 @@ -Why culture is the most important issue in a DevOps transformation -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community2.png?itok=1blC7-NY) - -You've been appointed the DevOps champion in your organisation: congratulations. So, what's the most important issue that you need to address? - -It's the technology—tools and the toolchain—right? Everybody knows that unless you get the right tools for the job, you're never going to make things work. You need integration with your existing stack (though whether you go with tight or loose integration will be an interesting question), a support plan (vendor, third party, or internal), and a bug-tracking system to go with your source code management system. And that's just the start. - -No! Don't be ridiculous: It's clearly the process that's most important. If the team doesn't agree on how stand-ups are run, who participates, the frequency and length of the meetings, and how many people are required for a quorum, then you'll never be able to institute a consistent, repeatable working pattern. - -In fact, although both the technology and the process are important, there's a third component that is equally important, but typically even harder to get right: culture. 
Yup, it's that touchy-feely thing we techies tend to struggle with.1 - -### Culture - -I was visiting a midsized government institution a few months ago (not in the UK, as it happens), and we arrived a little early to meet the CEO and CTO. We were ushered into the CEO's office and waited for a while as the two of them finished participating in the daily stand-up. They apologised for being a minute or two late, but far from being offended, I was impressed. Here was an organisation where the culture of participation was clearly infused all the way up to the top. - -Not that culture can be imposed from the top—nor can you rely on it percolating up from the bottom3—but these two C-level execs were not only modelling the behaviour they expected from the rest of their team, but also seemed, from the brief discussion we had about the process afterwards, to be truly invested in it. If you can get management to buy into the process—and be seen buying in—you are at least less likely to have problems with other groups finding plausible excuses to keep their distance and get away with it. - -So let's assume management believes you should give DevOps a go. Where do you start? - -Developers may well be your easiest target group. They are often keen to try new things and find ways to move things along faster, so they are often the group that can be expected to adopt new technologies and methodologies. DevOps arguably has been driven mainly by the development community. - -But you shouldn't assume all developers will be keen to embrace this change. For some, the way things have always been done—your Rick Parfitts of dev, if you will7—is fine. Finding ways to help them work efficiently in the new world is part of your job, not just theirs. If you have superstar developers who aren't happy with change, you risk alienating and losing them if you try to force them into your brave new world. 
What's worse, if they dig their heels in, you risk the adoption of your DevSecOps vision being compromised when they explain to their managers that things aren't going to change if it makes their lives more difficult and reduces their productivity. - -Maybe you're not going to be able to move all the systems and people to DevOps immediately. Maybe you're going to need to choose which apps to start with and who will be your first DevOps champions. Maybe it's time to move slowly. - -### Not maybe: definitely - -No—I lied. You're definitely going to need to move slowly. Trying to change everything at once is a recipe for disaster. - -This goes for all elements of the change—which people to choose, which technologies to choose, which applications to choose, which user base to choose, which use cases to choose—bar one. For those elements, if you try to move everything in one go, you will fail. You'll fail for a number of reasons. You'll fail for reasons I can't imagine and, more importantly, for reasons you can't imagine. But some of the reasons will include: - - * People—most people—don't like change. - * Technologies don't like change (you can't just switch and expect everything to still work). - * Applications don't like change (things worked before, or at least failed in known ways). You want to change everything in one go? Well, they'll all fail in new and exciting9 ways. - * Users don't like change. - * Use cases don't like change. - - - -### The one exception - -You noticed I wrote "bar one" when discussing which elements you shouldn't choose to change all in one go? Well done. - -What's that exception? It's the initial team. When you choose your initial application to change and you're thinking about choosing the team to make that change, select the members carefully and select a complete set. This is important. 
If you choose just developers, just test folks, just security folks, just ops folks, or just management—if you leave out one functional group from your list—you won't have proved anything at all. Well, you might have proved to a small section of your community that it kind of works, but you'll have missed out on a trick. And that trick is: If you choose keen people from across your functional groups, it's much harder to fail. - -Say your first attempt goes brilliantly. How are you going to convince other people to replicate your success and adopt DevOps? Well, the company newsletter, of course. And that will convince how many people, exactly? Yes, that number.12 If, on the other hand, you have team members from across the functional parts of the organisation, when you succeed, they'll tell their colleagues and you'll get more buy-in next time. - -If it fails, if you've chosen your team wisely—if they're all enthusiastic and know that "fail often, fail fast" is good—they'll be ready to go again. - -Therefore, you need to choose enthusiasts from across your functional groups. They can work on the technologies and the process, and once that's working, it's the people who will create that cultural change. You can just sit back and enjoy. Until the next crisis, of course. - -1\. OK, you're right. It should be "with which we techies tend to struggle."2 - -2\. You thought I was going to qualify that bit about techies struggling with touchy-feely stuff, didn't you? Read it again: I put "tend to." That's the best you're getting. - -3\. Is percolating a bottom-up process? I don't drink coffee,4 so I wouldn't know. - -4\. Do people even use percolators to make coffee anymore? Feel free to let me know in the comments. I may pretend interest if you're lucky. - -5\. For U.S. readers (and some other countries, maybe?), please substitute "check" for "tick" here.6 - -6\. For U.S. techie readers, feel free to perform `s/tick/check/;`. - -7\. 
This is a Status Quo8 reference for which I'm extremely sorry. - -8\. For millennial readers, please consult your favourite online reference engine or just roll your eyes and move on. - -9\. For people who say, "but I love excitement," try being on call at 2 a.m. on a Sunday at the end of the quarter when your chief financial officer calls you up to ask why all of last month's sales figures have been corrupted with the letters "DEADBEEF."10 - -10\. For people not in the know, this is a string often used by techies as test data because a) it's non-numerical; b) it's numerical (in hexadecimal); c) it's easy to search for in debug files; and d) it's funny.11 - -11\. Though see.9 - -12\. It's a low number, is all I'm saying. - -This article originally appeared on [Alice, Eve, and Bob – a security blog][1] and is republished with permission. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/most-important-issue-devops-transformation - -作者:[Mike Bursell][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/mikecamel -[1]:https://aliceevebob.com/2018/02/06/moving-to-devops-whats-most-important/ diff --git a/sources/talk/20180301 How to hire the right DevOps talent.md b/sources/talk/20180301 How to hire the right DevOps talent.md deleted file mode 100644 index bcf9bb3d20..0000000000 --- a/sources/talk/20180301 How to hire the right DevOps talent.md +++ /dev/null @@ -1,48 +0,0 @@ -How to hire the right DevOps talent -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6) - -DevOps culture is quickly gaining ground, and demand for top-notch DevOps talent is greater than ever at companies all over the world. 
With the [annual base salary for a junior DevOps engineer][1] now topping $100,000, IT professionals are hurrying to [make the transition into DevOps.][2] - -But how do you choose the right candidate to fill your DevOps role? - -### Overview - -Most teams are looking for candidates with a background in operations and infrastructure, software engineering, or development. This is in conjunction with skills that relate to configuration management, continuous integration, and deployment (CI/CD), as well as cloud infrastructure. Knowledge of container orchestration is also in high demand. - -In a perfect world, the two backgrounds would meet somewhere in the middle to form Dev and Ops, but in most cases, candidates lean toward one side or the other. Yet they must possess the skills necessary to understand the needs of their counterparts to work effectively as a team to achieve continuous delivery and deployment. Since every company is different, there is no single right or wrong answer, since so much depends on a company’s tech stack and infrastructure, as well as the goals and the skills of other team members. So how do you focus your search? - -### Decide on the background - -Begin by assessing the strength of your current team. Do you have rock-star software engineers but lack infrastructure knowledge? Focus on closing the skill gaps. Just because you have the budget to hire a DevOps engineer doesn’t mean you should spend weeks, or even months, trying to find the best software engineer who also happens to use Kubernetes and Docker because they are currently the trend. Instead, look for someone who will provide the most value in your environment, and see how things go from there. - -### There is no “Ctrl + F” solution - -Instead of concentrating on specific tools, concentrate on a candidate's understanding of DevOps and CI/CD-related processes. You'll be better off with someone who understands methodologies over tools. 
It is more important to ensure that candidates comprehend the concept of CI/CD than to ask if they prefer Jenkins, Bamboo, or TeamCity. Don’t get too caught up in the exact toolchain—rather, focus on problem-solving skills and the ability to increase efficiency, save time, and automate manual processes. You don't want to miss out on the right candidate just because the word “Puppet” was not on their resume. - -### Check your ego - -As mentioned above, DevOps is a rapidly growing field, and DevOps engineers are in hot demand. That means candidates have great buying power. You may have an amazing company or product, but hiring top talent is no longer as simple as putting up a “Help Wanted” sign and waiting for top-quality applicants to rush in. I'm not suggesting that maintaining a reputation as a great place to work is unimportant, but in today's environment, you need to make an effort to sell your position. Flaws or glitches in the hiring process, such as abruptly canceling interviews or not offering feedback after interviews, can lead to negative reviews spreading across the industry. Remember, it takes just a couple of minutes to leave a negative review on Glassdoor. - -### Contractor or permanent employee? - -Most recruiters and hiring managers immediately start searching for a full-time employee, even though they may have other options. If you’re looking to design, build, and implement a new DevOps environment, why not hire a senior person who has done this in the past? Consider hiring a senior contractor, along with a junior full-time hire. That way, you can tap the knowledge and experience of the contractor by having them work with the junior employee. Contractors can be expensive, but they bring invaluable knowledge—especially if the work can be done within a short timeframe. - -### Cultivate from within - -With so many companies competing for talent, it is difficult to find the right DevOps engineer. 
Not only will you need to pay top dollar to hire this person, but you must also consider that the search can take several months. However, since few companies are lucky enough to find the ideal DevOps engineer, consider searching for a candidate internally. You might be surprised at the talent you can cultivate from within your own organization. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/how-hire-right-des-talentvop - -作者:[Stanislav Ivaschenko][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ilyadudkin -[1]:https://www.glassdoor.com/Salaries/junior-devops-engineer-salary-SRCH_KO0,22.htm -[2]:https://squadex.com/insights/system-administrator-making-leap-devops/ diff --git a/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md b/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md deleted file mode 100644 index fb5454bbe4..0000000000 --- a/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md +++ /dev/null @@ -1,53 +0,0 @@ -Beyond metrics: How to operate as team on today's open source project -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-women-meeting-team.png?itok=BdDKxT1w) - -How do we traditionally think about community health and vibrancy? - -We might quickly zero in on metrics related primarily to code contributions: How many companies are contributing? How many individuals? How many lines of code? Collectively, these speak to both the level of development activity and the breadth of the contributor base. 
The former speaks to whether the project continues to be enhanced and expanded; the latter to whether it has attracted a diverse group of developers or is controlled primarily by a single organization. - -The [Linux Kernel Development Report][1] tracks these kinds of statistics and, unsurprisingly, it appears extremely healthy on all counts. - -However, while development cadence and code contributions are still clearly important, other aspects of the open source communities are also coming to the forefront. This is in part because, increasingly, open source is about more than a development model. It’s also about making it easier for users and other interested parties to interact in ways that go beyond being passive recipients of code. Of course, there have long been user groups. But open source streamlines the involvement of users, just as it does software development. - -This was the topic of my discussion with Diane Mueller, the director of community development for OpenShift. - -When OpenShift became a container platform based in part on Kubernetes in version 3, Mueller saw a need to broaden the community beyond the core code contributors. In part, this was because OpenShift was increasingly touching a broad range of open source projects and organizations such as those associated with the [Open Container Initiative (OCI)][2] and the [Cloud Native Computing Foundation (CNCF)][3]. In addition to users, cloud service providers who were offering managed services also wanted ways to get involved in the project. - -“What we tried to do was open up our minds about what the community constituted,” Mueller explained, adding, “We called it the [Commons][4] because Red Hat's near Boston, and I'm from that area. 
Boston Common is a shared resource, the grass where you bring your cows to graze, and you have your farmer's hipster market or whatever it is today that they do on Boston Common.” - -This new model, she said, was really “a new ecosystem that incorporated all of those different parties and different perspectives. We used a lot of virtual tools, a lot of new tools like Slack. We stepped up beyond the mailing list. We do weekly briefings. We went very virtual because, one, I don't scale. The Evangelist and Dev Advocate team didn't scale. We need to be able to get all that word out there, all this new information out there, so we went very virtual. We worked with a lot of people to create online learning stuff, a lot of really good tooling, and we had a lot of community help and support in doing that.” - -![diane mueller open shift][6] - -Diane Mueller, director of community development at OpenShift, discusses the role of strong user communities in open source software development. (Credit: Gordon Haff, CC BY-SA 4.0) - -However, one interesting aspect of the Commons model is that it isn’t just virtual. We see the same pattern elsewhere in many successful open source communities, such as the Linux kernel. Lots of day-to-day activities happen on mailing lists, IRC, and other collaboration tools. But this doesn’t eliminate the benefits of face-to-face time that allows for richer and more informal discussions and exchanges. - -This interview with Mueller took place in London the day after the [OpenShift Commons Gathering][7]. Gatherings are full-day events, held a number of times a year, which are typically attended by a few hundred people. Much of the focus is on users and user stories. In fact, Mueller notes, “Here in London, one of the Commons members, Secnix, was really the major reason we actually hosted the gathering here. Justin Cook did an amazing job organizing the venue and helping us pull this whole thing together in less than 50 days. 
A lot of the community gatherings and things are driven by the Commons members.” - -Mueller wants to focus on users more and more. “The OpenShift Commons gathering at [Red Hat] Summit will be almost entirely case studies,” she noted. “Users talking about what's in their stack. What lessons did they learn? What are the best practices? Sharing those ideas that they've done just like we did here in London.” - -Although the Commons model grew out of some specific OpenShift needs at the time it was created, Mueller believes it’s an approach that can be applied more broadly. “I think if you abstract what we've done, you can apply it to any existing open source community,” she said. “The foundations still, in some ways, play a nice role in giving you some structure around governance, and helping incubate stuff, and helping create standards. I really love what OCI is doing to create standards around containers. There's still a role for that in some ways. I think the lesson that we can learn from the experience and we can apply to other projects is to open up the community so that it includes feedback mechanisms and gives the podium away.” - -The evolution of the community model through approaches like the OpenShift Commons mirrors the healthy evolution of open source more broadly. Certainly, some users have been involved in the development of open source software for a long time. What’s striking today is how widespread and pervasive direct user participation has become. Sure, open source remains central to much of modern software development. But it’s also becoming increasingly central to how users learn from each other and work together with their partners and developers. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/how-communities-are-evolving - -作者:[Gordon Haff][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ghaff -[1]:https://www.linuxfoundation.org/2017-linux-kernel-report-landing-page/ -[2]:https://www.opencontainers.org/ -[3]:https://www.cncf.io/ -[4]:https://commons.openshift.org/ -[5]:/file/388586 -[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/39369010275_7df2c3c260_z.jpg?itok=gIhnBl6F (diane mueller open shift) -[7]:https://www.meetup.com/London-OpenShift-User-Group/events/246498196/ diff --git a/sources/talk/20180303 4 meetup ideas- Make your data open.md b/sources/talk/20180303 4 meetup ideas- Make your data open.md deleted file mode 100644 index a431b8376a..0000000000 --- a/sources/talk/20180303 4 meetup ideas- Make your data open.md +++ /dev/null @@ -1,75 +0,0 @@ -4 meetup ideas: Make your data open -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_team_community_group.png?itok=Nc_lTsUK) - -[Open Data Day][1] (ODD) is an annual, worldwide celebration of open data and an opportunity to show the importance of open data in improving our communities. - -Not many individuals and organizations know about the meaningfulness of open data or why they might want to liberate their data from the restrictions of copyright, patents, and more. They also don't know how to make their data open—that is, publicly available for anyone to use, share, or republish with modifications. - -This year ODD falls on Saturday, March 3, and there are [events planned][2] in every continent except Antarctica. 
While it might be too late to organize an event for this year, it's never too early to plan for next year. Also, since open data is important every day of the year, there's no reason to wait until ODD 2019 to host an event in your community. - -There are many ways to build local awareness of open data. Here are four ideas to help plan an excellent open data event any time of year. - -### 1. Organize an entry-level event - -You can host an educational event at a local library, college, or another public venue about how open data can be used and why it matters for all of us. If possible, invite a [local speaker][3] or have someone present remotely. You could also have a roundtable discussion with several knowledgeable people in your community. - -Consider offering resources such as the [Open Data Handbook][4], which not only provides a guide to the philosophy and rationale behind adopting open data, but also offers case studies, use cases, how-to guides, and other material to support making data open. - -### 2. Organize an advanced-level event - -For a deeper experience, organize a hands-on training event for open data newbies. Ideas for good topics include [training teachers on open science][5], [creating audiovisual expressions from open data][6], and using [open government data][7] in meaningful ways. - -The options are endless. To choose a topic, think about what is locally relevant, identify issues that open data might be able to address, and find people who can do the training. - -### 3. Organize a hackathon - -Open data hackathons can be a great way to bring open data advocates, developers, and enthusiasts together under one roof. Hackathons are more than just training sessions, though; the idea is to build prototypes or solve real-life challenges that are tied to open data. 
In a hackathon, people in various groups can contribute to the entire assembly line in multiple ways, such as identifying issues by working collaboratively through [Etherpad][8] or creating focus groups. - -Once the hackathon is over, make sure to upload all the useful data that is produced to the internet with an open license. - -### 4. Release or relicense data as open - -Open data is about making meaningful data publicly available under open licenses while protecting any data that might put people's private information at risk. (Learn [how to protect private data][9].) Try to find existing, interesting, and useful data that is privately owned by individuals or organizations and negotiate with them to relicense or release the data online under any of the [recommended open data licenses][10]. The widely popular [Creative Commons licenses][11] (particularly the CC0 license and the 4.0 licenses) are quite compatible with relicensing public data. (See this FAQ from Creative Commons for more information on [openly licensing data][12].) - -Open data can be published on multiple platforms—your website, [GitHub][13], [GitLab][14], [DataHub.io][15], or anywhere else that supports open standards. - -### Tips for event success - -No matter what type of event you decide to do, here are some general planning tips to improve your chances of success. - - * Find a venue that's accessible to the people you want to reach, such as a library, a school, or a community center. - * Create a curriculum that will engage the participants. - * Invite your target audience—make sure to distribute information through social media, community events calendars, Meetup, and the like. - - - -Have you attended or hosted a successful open data event? If so, please share your ideas in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/celebrate-open-data-day - -作者:[Subhashish Panigraphi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/psubhashish -[1]:http://www.opendataday.org/ -[2]:http://opendataday.org/#map -[3]:https://openspeakers.org/ -[4]:http://opendatahandbook.org/ -[5]:https://docs.google.com/forms/d/1BRsyzlbn8KEMP8OkvjyttGgIKuTSgETZW9NHRtCbT1s/viewform?edit_requested=true -[6]:http://dattack.lv/en/ -[7]:https://www.eventbrite.co.nz/e/open-data-open-potential-event-friday-2-march-2018-tickets-42733708673 -[8]:http://etherpad.org/ -[9]:https://ssd.eff.org/en/module/keeping-your-data-safe -[10]:https://opendatacommons.org/licenses/ -[11]:https://creativecommons.org/share-your-work/licensing-types-examples/ -[12]:https://wiki.creativecommons.org/wiki/Data#Frequently_asked_questions_about_data_and_CC_licenses -[13]:https://github.com/MartinBriza/MediaWriter -[14]:https://about.gitlab.com/ -[15]:https://datahub.io/ diff --git a/sources/talk/20180314 How to apply systems thinking in DevOps.md b/sources/talk/20180314 How to apply systems thinking in DevOps.md deleted file mode 100644 index c35eb041bd..0000000000 --- a/sources/talk/20180314 How to apply systems thinking in DevOps.md +++ /dev/null @@ -1,89 +0,0 @@ -How to apply systems thinking in DevOps -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa) -For most organizations, adopting DevOps requires a mindset shift. Unless you understand the core of [DevOps][1], you might think it's hype or just another buzzword—or worse, you might believe you have already adopted DevOps because you are using the right tools. 
- -Let’s dig deeper into what DevOps means, and explore how to apply systems thinking in your organization. - -### What is systems thinking? - -Systems thinking is a holistic approach to problem-solving. It's the opposite of analytical thinking, which separates a problem from the "bigger picture" to better understand it. Instead, systems thinking studies all the elements of a problem, along with the interactions between these elements. - -Most people are not used to thinking this way. Since childhood, most of us were taught math, science, and every other subject separately, by different teachers. This approach to learning follows us throughout our lives, from school to university to the workplace. When we first join an organization, we typically work in only one department. - -Unfortunately, the world is not that simple. Complexity, unpredictability, and sometimes chaos are unavoidable and require a broader way of thinking. Systems thinking helps us understand the systems we are part of, which in turn enables us to manage them rather than be controlled by them. - -According to systems thinking, everything is a system: your body, your family, your neighborhood, your city, your company, and even the communities you belong to. These systems evolve organically; they are alive and fluid. The better you understand a system's behavior, the better you can manage and leverage it. You become their change agent and are accountable for them. - -### Systems thinking and DevOps - -All systems include properties that DevOps addresses through its practices and tools. Awareness of these properties helps us properly adapt to DevOps. Let's look at the properties of a system and how DevOps relates to each one. - -### How systems work - -The figure below represents a system. To reach a goal, the system requires input, which is processed and generates output. Feedback is essential for moving the system toward the goal. Without a purpose, the system dies. 
- -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/system.png?itok=UlqAf39I) - -If an organization is a system, its departments are subsystems. The flow of work moves through each department, starting with identifying a market need (the first input on the left) and moving toward releasing a solution that meets that need (the last output on the right). The output that each department generates serves as required input for the next department in the chain. - -The more specialized teams an organization has, the more handoffs happen between departments. The process of generating value for clients is more likely to create bottlenecks and thus it takes longer to deliver value. Also, when work is passed between teams, the gap between the goal and what has been done widens. - -DevOps aims to optimize the flow of work throughout the organization to deliver value to clients faster—in other words, DevOps reduces time to market. This is done in part by maximizing automation, but mainly by targeting the organization's goals. This empowers prioritization and reduces duplicated work and other inefficiencies that happen during the delivery process. - -### System deterioration - -All systems are affected by entropy. Nothing can prevent system degradation; that's irreversible. This tendency to decline shows that failure is in the nature of systems. Moreover, systems are subject to threats of all types, and failure is a matter of time. - -To mitigate entropy, systems require constant maintenance and improvements. The effects of entropy can be delayed only when new actions are taken or input is changed. - -This pattern of deterioration and its opposite force, survival, can be observed in living organisms, social relationships, and other systems as well as in organizations. In fact, if an organization is not evolving, entropy is guaranteed to be increasing. 
- -DevOps attempts to break the entropy process within an organization by fostering continuous learning and improvement. With DevOps, the organization becomes fault-tolerant because it recognizes the inevitability of failure. DevOps enables a blameless culture that offers the opportunity to learn from failure. The [postmortem][2] is an example of a DevOps practice used by organizations that embrace inherent failure. - -The idea of intentionally embracing failure may sound counterintuitive, but that's exactly what happens in techniques like [Chaos Monkey][3]: Failure is intentionally introduced to improve availability and reliability in the system. DevOps suggests that putting some pressure into the system in a controlled way is not a bad thing. Like a muscle that gets stronger with exercise, the system benefits from the challenge. - -### System complexity - -The figure below shows how complex a system can be. In most cases, one effect can have multiple causes, and one cause can generate multiple effects. The more elements and interactions a system has, the more complex the system. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/system-complexity.png?itok=GYZS00Lm) - -In this scenario, we can't immediately identify the reason for a particular event. Likewise, we can't predict with 100% certainty what will happen if a specific action is taken. We are constantly making assumptions and dealing with hypotheses. - -System complexity can be explained using the scientific method. In a recent study, for example, mice that were fed excess salt showed suppressed cerebral blood flow. This same experiment would have had different results if, say, the mice were fed sugar and salt. One variable can radically change results in complex systems. - -DevOps handles complexity by encouraging experimentation—for example, using the scientific method—and reducing feedback cycles. 
Smaller changes inserted into the system can be tested and validated more quickly. With a "[fail-fast][4]" approach, organizations can pivot quickly and achieve resiliency. Reacting rapidly to changes makes organizations more adaptable. - -DevOps also aims to minimize guesswork and maximize understanding by making the process of delivering value more tangible. By measuring processes, revealing flaws and advantages, and monitoring as much as possible, DevOps helps organizations discover the changes they need to make. - -### System limitations - -All systems have constraints that limit their performance; a system's overall capacity is delimited by its restrictions. Most of us have learned from experience that systems operating too long at full capacity can crash, and most systems work better when they function with some slack. Ignoring limitations puts systems at risk. For example, when we are under too much stress for a long time, we get sick. Similarly, overused vehicle engines can be damaged. - -This principle also applies to organizations. Unfortunately, organizations can't put everything into a system at once. Although this limitation may sometimes lead to frustration, the quality of work usually improves when input is reduced. - -Consider what happened when the speed limit on the main roads in São Paulo, Brazil was reduced from 90 km/h to 70 km/h. Studies showed that the number of accidents decreased by 38.5% and the average speed increased by 8.7%. In other words, the entire road system improved and more vehicles arrived safely at their destinations. - -For organizations, DevOps suggests global rather than local improvements. It doesn't matter if some improvement is put after a constraint because there's no effect on the system at all. One constraint that DevOps addresses, for instance, is dependency on specialized teams. DevOps brings to organizations a more collaborative culture, knowledge sharing, and cross-functional teams. 
- -### Conclusion - -Before adopting DevOps, understand what is involved and how you want to apply it to your organization. Systems thinking will help you accomplish that while also opening your mind to new possibilities. DevOps may be seen as a popular trend today, but in 10 or 20 years, it will be status quo. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/how-apply-systems-thinking-devops - -作者:[Gustavo Muniz do Carmo][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/gustavomcarmo -[1]:https://opensource.com/tags/devops -[2]:https://landing.google.com/sre/book/chapters/postmortem-culture.html -[3]:https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116 -[4]:https://en.wikipedia.org/wiki/Fail-fast diff --git a/sources/talk/20180315 6 ways a thriving community will help your project succeed.md b/sources/talk/20180315 6 ways a thriving community will help your project succeed.md deleted file mode 100644 index cf15b7f06f..0000000000 --- a/sources/talk/20180315 6 ways a thriving community will help your project succeed.md +++ /dev/null @@ -1,111 +0,0 @@ -6 ways a thriving community will help your project succeed -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_community_lead.jpg?itok=F9KKLI7x) -NethServer is an open source product that my company, [Nethesis][1], launched just a few years ago. [The product][2] wouldn't be [what it is today][3] without the vibrant community that surrounds and supports it. - -In my previous article, I [discussed what organizations should expect to give][4] if they want to experience the benefits of thriving communities. 
In this article, I'll describe what organizations should expect to receive in return for their investments in the passionate people that make up their communities. - -Let's review six benefits. - -### 1\. Innovation - -"Open innovation" occurs when a company that shares information also listens to the feedback and suggestions from outside the company. As a company, we don't just look to the crowd for ideas. We innovate in, with, and through communities. - -You may know that "[the best way to have a good idea is to have a lot of ideas][5]." You can't always expect to have the right idea on your own, so having different points of view on your product is essential. How many truly disruptive ideas can a small company (like Nethesis) create? We're all young, Caucasian, and European—while in our community, we can pick up a set of inspirations from a variety of people, with different genders, backgrounds, skills, and ethnicities. - -So the ability to invite the entire world to continuously improve the product is no longer a dream; it's happening before our eyes. Your community could be the idea factory for innovation. With the community, you can really leverage the power of the collective. - -No matter who you are, most of the smartest people work for someone else. And community is the way to reach those smart people and work with them. - -### 2\. Research - -A community can be your strongest source of valuable product research. - -First, it can help you avoid "ivory tower development." [As Stack Exchange co-founder Jeff Atwood has said][6], creating an environment where developers have no idea who the users are is dangerous. Isolated developers, who have worked for years in their high towers, often encounter bad results because they don't have any clue about how users actually use their software. Developing in an ivory tower keeps you away from your users and can only lead to bad decisions. A community brings developers back to reality and helps them stay grounded.
Gone are the days of developers working in isolation with limited resources. In this day and age, thanks to the advent of open source communities, the research department is opening up to the entire world. - -No matter who you are, most of the smartest people work for someone else. And community is the way to reach those smart people and work with them. - -Second, a community can be an obvious source of product feedback—always necessary as you're researching potential paths forward. If someone gives you feedback, it means that person cares about you. It's a big gift. The community is a good place to acquire such invaluable feedback. Receiving early feedback is super important, because it reduces the cost of developing something that doesn't work in your target market. You can safely fail early, fail fast, and fail often. - -And third, communities help you generate comparisons with other projects. You can't know all the features, pros, and cons of your competitors' offerings. [The community, however, can.][7] Ask your community. - -### 3\. Perspective - -Communities enable companies to look at themselves and their products [from the outside][8], letting them catch strengths and weaknesses, and—most importantly—realize who their products' audiences really are. - -Let me offer an example. When we launched NethServer, we chose a catchy tagline for it. We were all convinced the following sentence was perfect: - -> [NethServer][9] is an operating system for Linux enthusiasts, designed for small offices and medium enterprises. - -Two years have passed since then. And we've learned that sentence was an epic fail. - -We failed to realize who our audience was. Now we know: NethServer is not just for Linux enthusiasts; actually, Windows users are the majority. It's not just for small offices and medium enterprises; actually, several home users install NethServer for personal use. Our community helps us to fully understand our product and look at it from our users' eyes. - -### 4\.
Development - -In open source projects especially, the community can be a welcome source of product development. - -It can, first of all, provide testing and bug reporting. In fact, if I asked my developers about the most important community benefit, they'd answer "testing and bug reporting." Definitely. But because your code is freely available to the whole world, practically anyone with a good working knowledge of it (even hobbyists and other companies) has the opportunity to play with it, tweak it, and constantly improve it (even develop additional modules, as in our case). People can do more than just report bugs; they can fix those bugs, too, if they have the time and knowledge. - -But the community doesn't just create code. It can also generate resources like [how-to guides,][10] FAQs, support documents, and case studies. How much would it cost to fully translate your product into seven different languages? At NethServer, we got that for free—thanks to our community members. - -### 5\. Marketing - -Communities can help your company go global. Our small Italian company, for example, wasn't prepared for a global market. The community got us prepared. For example, we needed to study and improve our English so we could read and write correctly or speak in public without looking foolish in front of an audience. The community gently forced us to organize [our first NethServer Conference][11], too—only in English. - -A strong community can also help your organization attain the holy grail of marketers everywhere: word-of-mouth marketing (or what Seth Godin calls "[tribal marketing][12]"). - -Communities ensure that your company's messaging travels not only from company to tribe but also "sideways," from tribe member to potential tribe member. The community will become your street team, spreading word of your organization and its projects to anyone who will listen.
- -In addition, communities help organizations satisfy one of their members' most fundamental needs: the desire to belong, to be involved in something bigger than themselves, and to change the world together. - -Never forget that working with communities is always a matter of giving and taking—striking a delicate balance between the company and the community. - -### 6\. Loyalty - -Attracting new users costs a business five times as much as keeping an existing one. So loyalty can have a huge impact on your bottom line. Quite simply, community helps us build brand loyalty. It's much more difficult to leave a group of people you're connected to than a faceless product or company. In a community, you're building connections with people, which is way more powerful than features or money (trust me!). - -### Conclusion - -Never forget that working with communities is always a matter of giving and taking—striking a delicate balance between the company and the community. - -And I wouldn't be honest with you if I didn't admit that the approach has some drawbacks. Doing everything in the open means moderating, evaluating, and processing all the data you're receiving. Supporting your members and leading the discussions definitely takes time and resources. But, if you look at what a community enables, you'll see that all this is totally worth the effort. - -As my friend and mentor [David Spinks keeps saying over and over again][13], "Companies fail their communities when they treat community as a tactic instead of making it a core part of their business philosophy." And [as I've said][4]: Communities aren't simply extensions of your marketing teams; "community" isn't an efficient short-term strategy. When community is a core part of your business philosophy, it can do so much more than give you short-term returns.
- -Community can completely set your business apart from every other company in the field. It can redefine markets. It can inspire millions of people, give them a sense of belonging, and make them feel an incredible bond with your company. - -And it can make you a whole lot of money. - -Community-driven companies will always win. Remember that. - -[Subscribe to our weekly newsletter][14] to learn more about open organizations. - --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/18/3/why-build-community-3 - -作者:[Alessio Fattorini][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/alefattorini -[1]:http://www.nethesis.it/ -[2]:https://www.nethserver.org/ -[3]:https://distrowatch.com/table.php?distribution=nethserver -[4]:https://opensource.com/open-organization/18/2/why-build-community-2 -[5]:https://www.goodreads.com/author/quotes/52938.Linus_Pauling -[6]:https://blog.codinghorror.com/ivory-tower-development/ -[7]:https://community.nethserver.org/tags/comparison -[8]:https://community.nethserver.org/t/improve-our-communication/2569 -[9]:http://www.nethserver.org/ -[10]:https://community.nethserver.org/c/howto -[11]:https://community.nethserver.org/t/nethserver-conference-in-italy-sept-29-30-2017/6404 -[12]:https://www.ted.com/talks/seth_godin_on_the_tribes_we_lead -[13]:http://cmxhub.com/article/community-business-philosophy-tactic/ -[14]:https://opensource.com/open-organization/resources/newsletter diff --git a/sources/talk/20180315 Lessons Learned from Growing an Open Source Project Too Fast.md b/sources/talk/20180315 Lessons Learned from Growing an Open Source Project Too Fast.md deleted file mode 100644 index 6ae7cbea2c..0000000000 --- a/sources/talk/20180315 Lessons Learned from Growing an Open Source Project Too Fast.md +++ 
/dev/null @@ -1,40 +0,0 @@ -Lessons Learned from Growing an Open Source Project Too Fast -====== -![open source project][1] - -Are you managing an open source project or considering launching one? If so, it may come as a surprise that one of the challenges you can face is rapid growth. Matt Butcher, Principal Software Development Engineer at Microsoft, addressed this issue in a presentation at Open Source Summit North America. His talk covered everything from teamwork to the importance of knowing your goals and sticking to them. - -Butcher is no stranger to managing open source projects. As [Microsoft invests more deeply into open source][2], Butcher has been involved with many projects, including toolkits for Kubernetes and QueryPath, the jQuery-like library for PHP. - -Butcher described a case study involving Kubernetes Helm, a package system for Kubernetes. Helm arose from a company team-building hackathon, with an original team of three people giving birth to it. Within 18 months, the project had hundreds of contributors and thousands of active users. - -### Teamwork - -“We were stretched to our limits as we learned to grow,” Butcher said. “When you’re trying to set up your team of core maintainers and they’re all trying to work together, you want to spend some actual time trying to optimize for a process that lets you be cooperative. You have to adjust some expectations regarding how you treat each other. When you’re working as a group of open source collaborators, the relationship is not employer/employee necessarily. It’s a collaborative effort.” - -In addition to focusing on the right kinds of teamwork, Butcher and his collaborators learned that managing governance and standards is an ongoing challenge. “You want people to understand who makes decisions, how they make decisions and why they make the decisions that they make,” he said. 
“When we were a small project, there might have been two paragraphs in one of our documents on standards, but as a project grows and you get growing pains, these documented things gain a life of their own. They get their very own repositories, and they just keep getting bigger along with the project.” - -Should all discussion surrounding an open source project go on in public, bathed in the hot lights of community scrutiny? Not necessarily, Butcher noted. “A minor thing can get blown into catastrophic proportions in a short time because of misunderstandings and because something that should have been done in private ended up being public,” he said. “Sometimes we actually make architectural recommendations as a closed group. The reason we do this is that we don’t want to miscue the community. The people who are your core maintainers are core maintainers because they’re experts, right? These are the people that have been selected from the community because they understand the project. They understand what people are trying to do with it. They understand the frustrations and concerns of users.” - -### Acknowledge Contributions - -Butcher added that it is essential to acknowledge people’s contributions to keep the environment surrounding a fast-growing project from becoming toxic. “We actually have an internal rule in our core maintainers guide that says, ‘Make sure that at least one comment that you leave on a code review, if you’re asking for changes, is a positive one,’” he said. “It sounds really juvenile, right? But it serves a specific purpose. It lets somebody know, ‘I acknowledge that you just made a gift of your time and your resources.’” - -Want more tips on successfully launching and managing open source projects? Stay tuned for more insight from Matt Butcher’s talk, in which he discusses specific project management issues faced by Kubernetes Helm.
- -For more information, be sure to check out [The Linux Foundation’s growing list of Open Source Guides for the Enterprise][3], covering topics such as starting an open source project, improving your open source impact, and participating in open source communities. - --------------------------------------------------------------------------------- - -via: https://www.linuxfoundation.org/blog/lessons-learned-from-growing-an-open-source-project-too-fast/ - -作者:[Sam Dean][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxfoundation.org/author/sdean/ -[1]:https://www.linuxfoundation.org/wp-content/uploads/2018/03/huskies-2279627_1920.jpg -[2]:https://thenewstack.io/microsoft-shifting-emphasis-open-source/ -[3]:https://www.linuxfoundation.org/resources/open-source-guides/ diff --git a/sources/talk/20180316 How to avoid humiliating newcomers- A guide for advanced developers.md b/sources/talk/20180316 How to avoid humiliating newcomers- A guide for advanced developers.md deleted file mode 100644 index e433e85d5f..0000000000 --- a/sources/talk/20180316 How to avoid humiliating newcomers- A guide for advanced developers.md +++ /dev/null @@ -1,119 +0,0 @@ -How to avoid humiliating newcomers: A guide for advanced developers -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy) -Every year in New York City, a few thousand young men come to town, dress up like Santa Claus, and do a pub crawl. One year during this SantaCon event, I was walking on the sidewalk and minding my own business, when I saw an extraordinary scene. There was a man dressed up in a red hat and red jacket, and he was talking to a homeless man who was sitting in a wheelchair. The homeless man asked Santa Claus, "Can you spare some change?" 
Santa dug into his pocket and brought out a $5 bill. He hesitated, then gave it to the homeless man. The homeless man put the bill in his pocket. - -In an instant, something went wrong. Santa yelled at the homeless man, "I gave you $5. I wanted to give you one dollar, but five is the smallest I had, so you oughtta be grateful. This is your lucky day, man. You should at least say thank you!" - -This was a terrible scene to witness. First, the power difference was terrible: Santa was an able-bodied white man with money and a home, and the other man was black, homeless, and using a wheelchair. It was also terrible because Santa Claus was dressed like the very symbol of generosity! And he was behaving like Santa until, in an instant, something went wrong and he became cruel. - -This is not merely a story about Drunk Santa, however; this is a story about technology communities. We, too, try to be generous when we answer new programmers' questions, and every day our generosity turns to rage. Why? - -### My cruelty - -I'm reminded of my own bad behavior in the past. I was hanging out on my company's Slack when a new colleague asked a question. - -> **New Colleague:** Hey, does anyone know how to do such-and-such with MongoDB? -> **Jesse:** That's going to be implemented in the next release. -> **New Colleague:** What's the ticket number for that feature? -> **Jesse:** I memorize all ticket numbers. It's #12345. -> **New Colleague:** Are you sure? I can't find ticket 12345. - -He had missed my sarcasm, and his mistake embarrassed him in front of his peers. I laughed to myself, and then I felt terrible. As one of the most senior programmers at MongoDB, I should not have been setting this example. And yet, such behavior is commonplace among programmers everywhere: We get sarcastic with newcomers, and we humiliate them. - -### Why does it matter? - -Perhaps you are not here to make friends; you are here to write code. 
If the code works, does it matter if we are nice to each other or not? - -A few months ago on the Stack Overflow blog, David Robinson showed that [Python has been growing dramatically][1], and it is now the top language that people view questions about on Stack Overflow. Even in the most pessimistic forecast, it will far outgrow the other languages this year. - -![Projections for programming language popularity][2] - -If you are a Python expert, then the line surging up and to the right is good news for you. It does not represent competition, but confirmation. As more new programmers learn Python, our expertise becomes ever more valuable, and we will see that reflected in our salaries, our job opportunities, and our job security. - -But there is a danger. There are soon to be more new Python programmers than ever before. To sustain this growth, we must welcome them, and we are not always a welcoming bunch. - -### The trouble with Stack Overflow - -I searched Stack Overflow for rude answers to beginners' questions, and they were not hard to find. - -![An abusive answer on StackOverflow][3] - -The message is plain: If you are asking a question this stupid, you are doomed. Get out. - -I immediately found another example of bad behavior: - -![Another abusive answer on Stack Overflow][4] - -Who has never been confused by Unicode in Python? Yet the message is clear: You do not belong here. Get out. - -Do you remember how it felt when you needed help and someone insulted you? It feels terrible. And it decimates the community. Some of our best experts leave every day because they see us treating each other this way. Maybe they still program Python, but they are no longer participating in conversations online. This cruelty drives away newcomers, too, particularly members of groups underrepresented in tech who might not be confident they belong. 
These are people who could have become the great Python programmers of the next generation, but if they ask a question and somebody is cruel to them, they leave. - -This is not in our interest. It hurts our community, and it makes our skills less valuable because we drive people out. So, why do we act against our own interests? - -### Why generosity turns to rage - -There are a few scenarios that really push my buttons. One is when I act generously but don't get the acknowledgment I expect. (I am not the only person with this resentment: This is probably why Drunk Santa snapped when he gave a $5 bill to a homeless man and did not receive any thanks.) - -Another is when answering requires more effort than I expect. An example is when my colleague asked a question on Slack and followed up with, "What's the ticket number?" I had judged how long it would take to help him, and when he asked for more help, I lost my temper. - -These scenarios boil down to one problem: I have expectations for how things are going to go, and when those expectations are violated, I get angry. - -I've been studying Buddhism for years, so my understanding of this topic is based in Buddhism. I like to think that the Buddha discussed the problem of expectations in his first tech talk when, in his mid-30s, he experienced a breakthrough after years of meditation and convened a small conference to discuss his findings. He had not rented a venue, so he sat under a tree. The attendees were a handful of meditators the Buddha had met during his wanderings in northern India. The Buddha explained that he had discovered four truths: - - * First, that to be alive is to be dissatisfied—to want things to be better than they are now. - * Second, this dissatisfaction is caused by wants; specifically, by our expectation that if we acquire what we want and eliminate what we do not want, it will make us happy for a long time.
This expectation is unrealistic: If I get a promotion or if I delete 10 emails, it is temporarily satisfying, but it does not make me happy over the long-term. We are dissatisfied because every material thing quickly disappoints us. - * The third truth is that we can be liberated from this dissatisfaction by accepting our lives as they are. - * The fourth truth is that the way to transform ourselves is to understand our minds and to live a generous and ethical life. - - - -I still get angry at people on the internet. It happened to me recently, when someone posted a comment on [a video I published about Python co-routines][5]. It had taken me months of research and preparation to create this video, and then a newcomer commented, "I want to master python what should I do." - -![Comment on YouTube][6] - -This infuriated me. My first impulse was to be sarcastic, "For starters, maybe you could spell Python with a capital P and end a question with a question mark." Fortunately, I recognized my anger before I acted on it, and closed the tab instead. Sometimes liberation is just a Command+W away. - -### What to do about it - -If you joined a community with the intent to be helpful but on occasion find yourself flying into a rage, I have a method to prevent this. For me, it is the step when I ask myself, "Am I angry?" Knowing is most of the battle. Online, however, we can lose track of our emotions. It is well-established that one reason we are cruel on the internet is because, without seeing or hearing the other person, our natural empathy is not activated. But the other problem with the internet is that, when we use computers, we lose awareness of our bodies. I can be angry and type a sarcastic message without even knowing I am angry. I do not feel my heart pound and my neck grow tense. So, the most important step is to ask myself, "How do I feel?" - -If I am too angry to answer, I can usually walk away. 
As [Thumper learned in Bambi][7], "If you can't say something nice, don't say nothing at all." - -### The reward - -Helping a newcomer is its own reward, whether you receive thanks or not. But it does not hurt to treat yourself to a glass of whiskey or a chocolate, or just a sigh of satisfaction after your good deed. - -But besides our personal rewards, the payoff for the Python community is immense. We keep the line surging up and to the right. Python continues growing, and that makes our own skills more valuable. We welcome new members, people who might not be sure they belong with us, by reassuring them that there is no such thing as a stupid question. We use Python to create an inclusive and diverse community around writing code. And besides, it simply feels good to be part of a community where people treat each other with respect. It is the kind of community that I want to be a member of. - -### The three-breath vow - -There is one idea I hope you remember from this article: To control our behavior online, we must occasionally pause and notice our feelings. I invite you, if you so choose, to repeat the following vow out loud: - -> I vow -> to take three breaths -> before I answer a question online. - -This article is based on a talk, [Why Generosity Turns To Rage, and What To Do About It][8], that Jesse gave at PyTennessee in February. For more insight for Python developers, attend [PyCon 2018][9], May 9-17 in Cleveland, Ohio. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/avoid-humiliating-newcomers - -作者:[A. 
Jesse][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/emptysquare -[1]:https://stackoverflow.blog/2017/09/06/incredible-growth-python/ -[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/projections.png?itok=5QTeJ4oe (Projections for programming language popularity) -[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/abusive-answer-1.jpg?itok=BIWW10Rl (An abusive answer on StackOverflow) -[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/abusive-answer-2.jpg?itok=0L-n7T-k (Another abusive answer on Stack Overflow) -[5]:https://www.youtube.com/watch?v=7sCu4gEjH5I -[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/i-want-to-master-python.png?itok=Y-2u1XwA (Comment on YouTube) -[7]:https://www.youtube.com/watch?v=nGt9jAkWie4 -[8]:https://www.pytennessee.org/schedule/presentation/175/ -[9]:https://us.pycon.org/2018/ diff --git a/sources/talk/20180320 Easily Fund Open Source Projects With These Platforms.md b/sources/talk/20180320 Easily Fund Open Source Projects With These Platforms.md deleted file mode 100644 index 8c02ca228b..0000000000 --- a/sources/talk/20180320 Easily Fund Open Source Projects With These Platforms.md +++ /dev/null @@ -1,96 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: subject: (Easily Fund Open Source Projects With These Platforms) -[#]: via: (https://itsfoss.com/open-source-funding-platforms/) -[#]: author: ([Ambarish Kumar](https://itsfoss.com/author/ambarish/)) -[#]: url: ( ) - -Easily Fund Open Source Projects With These Platforms -====== - -**Brief: We list out some funding platforms you can use to financially support open source projects. 
** - -Financial support is one of the many ways to [help the Linux and open source community][1]. This is why you see a “Donate” option on the websites of most open source projects. - -While the big corporations have the necessary funding and resources, most open source projects are developed by individuals in their spare time. However, this still requires effort and time, and probably involves some overhead costs too. Monetary support surely helps drive project development. - -If you would like to support open source projects financially, let me show you some platforms dedicated to open source and/or Linux. - -### Funding platforms for Open Source projects - -![Open Source funding platforms][2] - -Just to clarify, we are not associated with any of the funding platforms mentioned here. - -#### 1\. Liberapay - -[Gratipay][3] was probably the biggest platform for funding open source projects and the people associated with them, but it was shut down at the end of 2017. However, there’s a fork, Liberapay, that works as a recurring donation platform for open source projects and contributors. - -[Liberapay][4] is a non-profit, open source organization that facilitates periodic donations to a project. You can create an account as a contributor and ask the people who would really like to help (usually the consumers of your products) to donate. - -To receive donations, you have to create an account on Liberapay, briefly describe what you do and what your project is about, give your reasons for asking for donations, and explain what will be done with the money you receive. - -Someone who would like to donate adds money to their account and sets up a payment period—weekly, monthly, or yearly—for a recipient. An email is triggered when there is not much money left to donate. - -The currencies supported are dollars and euros as of now, and you can always put up a badge on GitHub, your Twitter profile, or your website to ask for donations. - -#### 2\.
Bountysource - -[Bountysource][5] is a funding platform for open source software that has a unique way of paying developers for their time and work, in the form of bounties. - -There are basically two types of campaigns: bounties and salt campaigns. - -Under bounties, users declare bounties—aka cash prizes—on open issues that they believe should be fixed, or on new features that they want to see in the software they are using. A developer can then go and fix the issue to receive the cash prize. - -A salt campaign is like any other funding: anyone can pay a recurring amount to a project, or to an individual working on an open source project, for as long as they want. - -Bountysource accepts any software that is approved by the Free Software Foundation or the Open Source Initiative. Bounties can be placed using PayPal, Bitcoin, or bounties earned previously. Bountysource currently supports a number of issue trackers, such as GitHub, Bugzilla, Google Code, Jira, and Launchpad. - -#### 3\. Open Collective - -[Open Collective][6] is another popular funding initiative, where a person who is willing to receive donations for the work they are doing in the open source world can create a page. They can submit expense reports for the project they are working on. A contributor can add money to their account and pay them for those expenses. - -The complete process is transparent, and everyone can track whoever is associated with Open Collective. The contributions are visible along with the unpaid expenses. There is also the option to contribute on a recurring basis. - -Open Collective currently has more than 500 collectives backed by more than 5000 users. - -The fact that it is transparent and you know what you are contributing to drives more accountability. Some common examples of collective expenses include hosting costs, community maintenance, travel expenses, etc.
- -Though Open Collective keeps 10% of all transactions, it is still a nice way to get your expenses covered while contributing to an open source project. - -#### 4\. Open Source Grants - -[Open Source Grants][7] is still in its beta stage and has not matured yet. It looks for projects that do not have any stable funding and that add value to the open source community. Most open source projects are run by small communities in their free time, and Open Source Grants is trying to fund them so that the developers can work full time on the projects. - -It is equally searching for companies that want to help open source enthusiasts. The process of submitting a project is still being worked out, and hopefully we will soon see a working way of funding. - -### Final Words - -In the end, I would also like to mention [Patreon][8]. This funding platform is not exclusive to open source but is focused on creators of all kinds. Some projects like [elementary OS have created their accounts on Patreon][9] so that you can support the project on a recurring basis. - -Think Free Speech, not Free Beer. Your small contribution to a project can help sustain it in the long run. For developers, the platforms above can provide a good way to cover their expenses. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/open-source-funding-platforms/ - -作者:[Ambarish Kumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ambarish/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/help-linux-grow/ -[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/03/Fund-Open-Source-projects.png?resize=800%2C450&ssl=1 -[3]: https://itsfoss.com/gratipay-open-source/ -[4]: https://liberapay.com/ -[5]: https://www.bountysource.com/ -[6]: https://opencollective.com/ -[7]: https://foundation.travis-ci.org/grants/ -[8]: https://www.patreon.com/ -[9]: https://www.patreon.com/elementary diff --git a/sources/talk/20180321 8 tips for better agile retrospective meetings.md b/sources/talk/20180321 8 tips for better agile retrospective meetings.md deleted file mode 100644 index ec45bf17f0..0000000000 --- a/sources/talk/20180321 8 tips for better agile retrospective meetings.md +++ /dev/null @@ -1,66 +0,0 @@ -8 tips for better agile retrospective meetings -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_meeting.png?itok=4_CivQgp) -I’ve often thought that retrospectives should be called prospectives, as that term concerns the future rather than focusing on the past. The retro itself is truly future-looking: It’s the space where we can ask the question, “With what we know now, what’s the next experiment we need to try for improving our lives, and the lives of our customers?” - -### What’s a retro supposed to look like? - -There are two significant loops in product development: One produces the desired potentially shippable nugget. 
The other is where we examine how we’re working—not only to avoid doing what didn’t work so well, but also to determine how we can amplify the stuff we do well—and devise an experiment to pull into the next production loop to improve how our team is delighting our customers. This is the loop on the right side of this diagram: - - -![Retrospective 1][2] - -### When retros implode - -While attending various teams' iteration retrospective meetings, I saw a common thread of discontent associated with a relentless focus on continuous improvement. - -One of the engineers put it bluntly: “[Our] continuous improvement feels like we are constantly failing.” - -The teams talked about what worked, restated the stuff that didn’t work (perhaps already feeling like they were constantly failing), nodded to one another, and gave long sighs. Then one of the engineers (already late for another meeting) finally summed up the meeting: “Ok, let’s try not to submit all of the code on the last day of the sprint.” There was no opportunity to amplify the good, as the good was not discussed. - -In effect, here’s what the retrospective felt like: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/retro_2.jpg?itok=HrDkppCG) - -The anti-pattern is where retrospectives become dreaded sessions where we look back at the last iteration, make two columns—what worked and what didn’t work—and quickly come to some solution for the next iteration. There is no [scientific method][3] involved. There is no data gathering and research, no hypothesis, and very little deep thought. The result? You don’t get an experiment or a potential improvement to pull into the next iteration. - -### 8 tips for better retrospectives - - 1. Amplify the good! Instead of focusing on what didn’t work well, why not begin the retro by having everyone mention one positive item first? - 2. Don’t jump to a solution. 
Thinking about a problem deeply instead of trying to solve it right away might be a better option. - 3. If the retrospective doesn’t make you feel excited about an experiment, maybe you shouldn’t try it in the next iteration. - 4. If you’re not analyzing how to improve, ([5 Whys][4], [force-field analysis][5], [impact mapping][6], or [fish-boning][7]), you might be jumping to solutions too quickly. - 5. Vary your methods. If every time you do a retrospective you ask, “What worked, what didn’t work?” and then vote on the top item from either column, your team will quickly get bored. [Retromat][8] is a great free retrospective tool to help vary your methods. - 6. End each retrospective by asking for feedback on the retro itself. This might seem a bit meta, but it works: Continually improving the retrospective is recursively improving as a team. - 7. Remove the impediments. Ask how you are enabling the team's search for improvement, and be prepared to act on any feedback. - 8. There are no "iteration police." Take breaks as needed. Deriving hypotheses from analysis and coming up with experiments involves creativity, and it can be taxing. Every once in a while, go out as a team and enjoy a nice retrospective lunch. - - - -This article was inspired by [Retrospective anti-pattern: continuous improvement should not feel like constantly failing][9], posted at [Podojo.com][10]. 
- -**[See our related story,[How to build a business case for DevOps transformation][11].]** - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/tips-better-agile-retrospective-meetings - -作者:[Catherine Louis][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/catherinelouis -[1]:/file/389021 -[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/retro_1.jpg?itok=bggmHN1Q (Retrospective 1) -[3]:https://en.wikipedia.org/wiki/Scientific_method -[4]:https://en.wikipedia.org/wiki/5_Whys -[5]:https://en.wikipedia.org/wiki/Force-field_analysis -[6]:https://opensource.com/open-organization/17/6/experiment-impact-mapping -[7]:https://en.wikipedia.org/wiki/Ishikawa_diagram -[8]:https://plans-for-retrospectives.com/en/?id=28 -[9]:http://www.podojo.com/retrospective-anti-pattern-continuous-improvement-should-not-feel-like-constantly-failing/ -[10]:http://www.podojo.com/ -[11]:https://opensource.com/article/18/2/how-build-business-case-devops-transformation diff --git a/sources/talk/20180323 7 steps to DevOps hiring success.md b/sources/talk/20180323 7 steps to DevOps hiring success.md deleted file mode 100644 index cdea0c65ac..0000000000 --- a/sources/talk/20180323 7 steps to DevOps hiring success.md +++ /dev/null @@ -1,56 +0,0 @@ -7 steps to DevOps hiring success -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6) -As many of us in the DevOps scene know, most companies are hiring, or, at least, trying to do so. The required skills and job descriptions can change entirely from company to company. 
As a broad overview, most teams are looking for a candidate with either an operations and infrastructure background or a software engineering and development background, combined with key skills relating to continuous integration, configuration management, continuous delivery/deployment, and cloud infrastructure. Knowledge of container orchestration is currently in high demand. - -In an ideal world, the two backgrounds will meet somewhere in the middle to form Dev and Ops, but in most cases, candidates lean toward one side or the other while maintaining sufficient skills to understand the needs and demands of their counterparts, work collaboratively, and achieve the end goal of continuous delivery/deployment. Every company is different and there isn’t necessarily a right or wrong here. It all depends on your infrastructure, tech stack, other team members’ skills, and the goals you hope to achieve by hiring this individual. - -### Focus your hiring - -Now, given the various routes to becoming a DevOps practitioner, how do hiring managers focus their search and selection process to ensure that they’re hitting the mark? - -#### Decide on the background - -Assess the strengths of your existing team. Do you already have some amazing software engineers but lack infrastructure knowledge? Aim to close these gaps in skills. You may have been given the budget to hire for DevOps, but you don’t have to spend weeks or months searching for the best software engineer who happens to use Docker and Kubernetes because they are the current hot trends in this space. Find the person who will provide the most value in your environment and go from there. - -#### Contractor or permanent employee? - -Many hiring managers will automatically start searching for a full-time permanent employee when their needs may suggest that they have other options. Sometimes a contractor is your best bet, or maybe a contract-to-hire arrangement. 
If you’re aiming to design, implement and build a new DevOps environment, why not find a senior person who has done this a number of times already? Try hiring a senior contractor and bring on a junior full-time hire in parallel; this way, you’ll be able to retain the external contractor knowledge by having them work alongside the junior hire. Contractors can be expensive, but the knowledge they bring can be invaluable, especially if the work can be completed over a shorter time frame. Again, this is just another point of view and you might be best off with a full-time hire to grow the team. - -#### CTRL F is not the solution - -Focus on their understanding of DevOps and CI/CD-related processes over specific tools. I believe the best approach is to focus on finding someone who understands the methodologies over the tools. Does your candidate understand the concept of continuous integration or the concept of continuous delivery? That’s more important than asking whether your candidate uses Jenkins versus Bamboo versus TeamCity and so on. Try not to get caught up in the exact tool chain. The focus should be on the candidates’ ability to solve problems. Are they obsessed with increasing efficiency, saving time, automating manual processes and constantly searching for flaws in the system? They might be the person you were looking for, but you missed them because you didn’t see the word "Puppet" on the resume. - -#### Work closely with your internal talent acquisition team and/or an external recruiter - -Be clear and precise with what you’re looking for and have an ongoing, open communication with recruiters. They can and will help you if used effectively. The job of these recruiters is to save you time by sourcing candidates while you’re focusing on your day-to-day role. Work closely with them and deliver in the same way that you would expect them to deliver for you. If you say you will review a candidate by X time, do it. 
If they say they’ll have a candidate in your inbox by Y time, make sure they do it, too. Start by setting up an initial call to talk through your requirements, lay out a timeline by which you expect candidates, and explain your process in terms of when you will interview, how many interview rounds there will be, and how soon afterward you will be able to make a final decision on whether to offer or reject a candidate. If you can get this relationship working well, you’ll save lots of time. And make sure your internal teams are focused on supporting your process, not blocking it. - -#### $$$ - -Decide how much you want to pay. It’s not all about the money, but you can waste a lot of your and other people’s time if you don’t lock down the ballpark salary or hourly rate that you can afford. If your budget doesn’t stretch as far as your competitors’, you need to consider what else can help sell the opportunity. Flexible working hours and remote working options are some great ways to do this. Most companies have snacks, beer, and cool offices nowadays, so focus on the real value, such as the innovative work your team is doing and how awesome your game-changing product might be. - -#### Drop the ego - -You may have an amazing company and/or product, but you also have some hot competition. Everyone is hiring in this space and candidates have a lot of buying power. It is no longer as simple as saying, "We are hiring" and watching the awesome candidates come flowing in. You need to sell your opportunities. Maintaining a reputation as a great place to work is also important. A poor hiring process, such as interviewing without giving feedback, can contribute to bad rumors being spread across the industry. It only takes a few minutes to leave a sour review on Glassdoor. - -#### A smooth process is a successful one - -"Let’s get every single person within the company to do a one-hour interview with the new DevOps person we are hiring!" No, let’s not do that. 
Two or three stages should be sufficient. You have managers and directors for a reason. Trust your instinct and use your experience to make decisions on who will fit into your organization. Some of the most successful companies can do one phone screen followed by an in-person meeting. During the in-person interview, spend a morning or afternoon allowing the candidate to meet the relevant leaders and senior members of their direct team, then take them for lunch, dinner, or drinks where you can see how they are on a social level. If you can’t have a simple conversation with them, then you probably won’t enjoy working with them. If the thumbs are up, make the hire and don’t wait around. A good candidate will usually have numerous offers on the table at the same time. - -If all goes well, you should be inviting your shiny new employee or contractor into the office in the next few weeks and hopefully many more throughout the year. - -This article was originally published on [DevOps.com][1] and republished with author permission. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/7-steps-devops-hiring-success - -作者:[Conor Delanbanque][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/cdelanbanque -[1]:https://devops.com/7-steps-devops-hiring-success/ diff --git a/sources/talk/20180330 Meet OpenAuto, an Android Auto emulator for Raspberry Pi.md b/sources/talk/20180330 Meet OpenAuto, an Android Auto emulator for Raspberry Pi.md deleted file mode 100644 index bac0819e74..0000000000 --- a/sources/talk/20180330 Meet OpenAuto, an Android Auto emulator for Raspberry Pi.md +++ /dev/null @@ -1,81 +0,0 @@ -Meet OpenAuto, an Android Auto emulator for Raspberry Pi -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb_computer_person_general_.png?itok=BRGJXU7e) - -In 2015, Google introduced [Android Auto][1], a system that allows users to project certain apps from their Android smartphones onto a car's infotainment display. Android Auto's driver-friendly interface, with larger touchscreen buttons and voice commands, aims to make it easier and safer for drivers to control navigation, music, podcasts, radio, phone calls, and more while keeping their eyes on the road. Android Auto can also run as an app on an Android smartphone, enabling owners of older-model vehicles without modern head unit displays to take advantage of these features. - -While there are many [apps][2] available for Android Auto, developers are working to add to its catalog. A new, open source tool named [OpenAuto][3] is hoping to make that easier by giving developers a way to emulate Android Auto on a Raspberry Pi. With OpenAuto, developers can test their applications in conditions similar to how they'll work on an actual car head unit. 
- -OpenAuto's creator, Michal Szwaj, answered some questions about his project for Opensource.com. Some responses have been edited for conciseness and clarity. - -### What is OpenAuto? - -In a nutshell, OpenAuto is an emulator for the Android Auto head unit. It emulates the head unit software and allows you to use Android Auto on your PC or on any other embedded platform like the Raspberry Pi 3. - -Head unit software is a frontend for the Android Auto projection. All the magic related to Android Auto, like navigation, Google Voice Assistant, or music playback, is done on the Android device. Projection of Android Auto on the head unit is accomplished using the [H.264][4] codec for video and the [PCM][5] codec for audio streaming. This is what the head unit software mostly does—it decodes the H.264 video stream and PCM audio streams and plays them back together. Another function of the head unit is providing user inputs. OpenAuto supports both touch events and hard keys. - -### What platforms does OpenAuto run on? - -My target platform for deploying OpenAuto is the Raspberry Pi 3 computer. For a successful deployment, I needed to implement support for hardware video acceleration using the Raspberry Pi 3 GPU (VideoCore 4). Thanks to this, Android Auto projection on the Raspberry Pi 3 can be handled even at 1080p@60 fps. I used [OpenMAX IL][6] and the IL client libraries delivered together with the Raspberry Pi firmware to implement hardware video acceleration. - -Taking advantage of the fact that the Raspberry Pi operating system, Raspbian, is based on Debian Linux, OpenAuto can also be built for any other Linux-based platform that provides support for hardware video decoding. Most Linux-based platforms support hardware video decoding directly in GStreamer. Thanks to highly portable libraries like Boost and [Qt][7], OpenAuto can be built and run on the Windows platform. 
Support for macOS is being implemented by the community and should be available soon. - -[Watch a demo of OpenAuto on YouTube](https://www.youtube.com/embed/k9tKRqIkQs8) - -### What software libraries does the project use? - -The core of OpenAuto is the [aasdk][8] library, which provides support for all Android Auto features. The aasdk library is built on top of the Boost, libusb, and OpenSSL libraries. [libusb][9] implements communication between the head unit and an Android device (via the USB bus). [Boost][10] provides support for the asynchronous communication mechanisms. It is required for high efficiency and scalability of the head unit software. [OpenSSL][11] is used for encrypting communication. - -The aasdk library is designed to be fully reusable for any purpose related to implementing head unit software. You can use it to build your own head unit software for your desired platform. - -Another very important library used in OpenAuto is Qt. It provides support for OpenAuto's multimedia, user input, and graphical interface. The build system OpenAuto uses is [CMake][12]. - -Note: The Android Auto protocol is taken from another great Android Auto head unit project called [HeadUnit][13]. The people working on this project did an amazing job in reverse engineering the Android Auto protocol and creating the protocol buffers that structure all messages. - -### What equipment do you need to run OpenAuto on Raspberry Pi? - -In addition to a Raspberry Pi 3 computer and an Android device, you need: - - * **USB sound card:** The Raspberry Pi 3 doesn't have a microphone input, which is required to use Google Voice Assistant - * **Video output device:** You can use either a touchscreen or any other video output device connected to HDMI or composite output (RCA) - * **Input device:** For example, a touchscreen or a USB keyboard - - - -### What else do you need to get started? - -In order to use OpenAuto, you must build it first. 
On the OpenAuto wiki page you can find [detailed instructions][14] for how to build it for the Raspberry Pi 3 platform. On other Linux-based platforms, the build process will look very similar. - -On the wiki page you can also find other useful instructions, such as how to configure the Bluetooth Hands-Free Profile (HFP), the Advanced Audio Distribution Profile (A2DP), and PulseAudio. - -### What else should we know about OpenAuto? - -OpenAuto allows anyone to create a head unit based on the Raspberry Pi 3 hardware. Nevertheless, you should always be careful about safety and keep in mind that OpenAuto is just an emulator. It was not certified by any authority and was not tested in a driving environment, so using it in a car is not recommended. - -OpenAuto is licensed under GPLv3. For more information, visit the [project's GitHub page][3], where you can find its source code and other information. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/openauto-emulator-Raspberry-Pi - -作者:[Michal Szwaj][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/michalszwaj -[1]:https://www.android.com/auto/faq/ -[2]:https://play.google.com/store/apps/collection/promotion_3001303_android_auto_all -[3]:https://github.com/f1xpl/openauto -[4]:https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC -[5]:https://en.wikipedia.org/wiki/Pulse-code_modulation -[6]:https://www.khronos.org/openmaxil -[7]:https://www.qt.io/ -[8]:https://github.com/f1xpl/aasdk -[9]:http://libusb.info/ -[10]:http://www.boost.org/ -[11]:https://www.openssl.org/ -[12]:https://cmake.org/ -[13]:https://github.com/gartnera/headunit -[14]:https://github.com/f1xpl/ diff --git a/sources/talk/20180403 3 pitfalls everyone should avoid with hybrid multicloud.md b/sources/talk/20180403 3 pitfalls 
everyone should avoid with hybrid multicloud.md deleted file mode 100644 index b128be62f0..0000000000 --- a/sources/talk/20180403 3 pitfalls everyone should avoid with hybrid multicloud.md +++ /dev/null @@ -1,87 +0,0 @@ -3 pitfalls everyone should avoid with hybrid multicloud -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_darwincloud_520x292_0311LL.png?itok=74DLgd8Q) - -This article was co-written with [Roel Hodzelmans][1]. - -We're all told the cloud is the way to ensure a digital future for our businesses. But which cloud? From cloud to hybrid cloud to hybrid multi-cloud, you need to make choices, and these choices don't preclude the daily work of enhancing your customers' experience or agile delivery of the applications they need. - -This article is the first in a four-part series on avoiding pitfalls in hybrid multi-cloud computing. Let's start by examining multi-cloud, hybrid cloud, and hybrid multi-cloud and what makes them different from one another. - -### Hybrid vs. multi-cloud - -There are many conversations you may be having in your business around moving to the cloud. For example, you may want to take your on-premises computing capacity and turn it into your own private cloud. You may wish to provide developers with a cloud-like experience using the same resources you already have. A more traditional reason for expansion is to use external computing resources to augment those in your own data centers. The latter leads you to the various public cloud providers, as well as to our first definition, multi-cloud. - -#### Multi-cloud - -Multi-cloud means using multiple clouds from multiple providers for multiple tasks. - -![Multi-cloud][3] - -Figure 1. Multi-cloud IT with multiple isolated cloud environments - -Typically, multi-cloud refers to the use of several different public clouds in order to achieve greater flexibility, lower costs, avoid vendor lock-in, or use specific regional cloud providers. 
- -A challenge of the multi-cloud approach is achieving consistent policies, compliance, and management with different providers involved. - -Multi-cloud is mainly a strategy to expand your business while leveraging multi-vendor cloud solutions and spreading the risk of lock-in. Figure 1 shows the isolated nature of cloud services in this model, without any sort of coordination between the services and business applications. Each is managed separately, and applications are isolated to services found in their environments. - -#### Hybrid cloud - -Hybrid cloud solves issues where isolation and coordination are central to the solution. It is a combination of one or more public and private clouds with at least a degree of workload portability, integration, orchestration, and unified management. - -![Hybrid cloud][5] - -Figure 2. Hybrid clouds may be on or off premises, but must have a degree of interoperability - -The key issue here is that there is an element of interoperability, migration potential, and a connection between tasks running in public clouds and on-premises infrastructure, even if it's not always seamless or otherwise fully implemented. - -If your cloud model is missing portability, integration, orchestration, and management, then it's just a bunch of clouds, not a hybrid cloud. - -The cloud environments in Fig. 2 include at least one private and public cloud. They can be off or on premises, but they have some degree of the following: - - * Interoperability - * Application portability - * Data portability - * Common management - - - -As you can probably guess, combining multi-cloud and hybrid cloud results in a hybrid multi-cloud. But what does that look like? - -### Hybrid multi-cloud - -Hybrid multi-cloud pulls together multiple clouds and provides the tools to ensure interoperability between the various services in hybrid and multi-cloud solutions. - -![Hybrid multi-cloud][7] - -Figure 3. 
Hybrid multi-cloud solutions using open technologies - -Bringing these together can be a serious challenge, but the result ensures better use of resources without isolation in their respective clouds. - -Fig. 3 shows an example of hybrid multi-cloud based on open technologies for interoperability, workload portability, and management. - -### Moving forward: Pitfalls of hybrid multi-cloud - -In part two of this series, we'll look at the first of three pitfalls to avoid with hybrid multi-cloud. Namely, why cost is not always the obvious motivator when determining how to transition your business to the cloud. - -This article is based on "[3 pitfalls everyone should avoid with hybrid multi-cloud][8]," a talk the authors will be giving at [Red Hat Summit 2018][9], which will be held May 8-10 in San Francisco. [Register by May 7][9] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/pitfalls-hybrid-multi-cloud - -作者:[Eric D.Schabell][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/eschabell -[1]:https://opensource.com/users/roelh -[3]:https://opensource.com/sites/default/files/u128651/multi-cloud.png (Multi-cloud) -[5]:https://opensource.com/sites/default/files/u128651/hybrid-cloud.png (Hybrid cloud) -[7]:https://opensource.com/sites/default/files/u128651/hybrid-multicloud.png (Hybrid multi-cloud) -[8]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=153892 -[9]:https://www.redhat.com/en/summit/2018 diff --git a/sources/talk/20180404 Is the term DevSecOps necessary.md b/sources/talk/20180404 Is the term DevSecOps necessary.md deleted file mode 100644 index 96b544e7c4..0000000000 
--- a/sources/talk/20180404 Is the term DevSecOps necessary.md +++ /dev/null @@ -1,51 +0,0 @@ -Is the term DevSecOps necessary? -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2) -First came the term "DevOps." - -It has many different aspects. For some, [DevOps][1] is mostly about a culture valuing collaboration, openness, and transparency. Others focus more on key practices and principles such as automating everything, constantly iterating, and instrumenting heavily. And while DevOps isn’t about specific tools, certain platforms and tooling make it a more practical proposition. Think containers and associated open source cloud-native technologies like [Kubernetes][2] and CI/CD pipeline tools like [Jenkins][3]—as well as native Linux capabilities. - -However, one of the earliest articulated concepts around DevOps was the breaking down of the “wall of confusion” specifically between developers and operations teams. This was rooted in the idea that developers didn’t think much about operational concerns and operators didn’t think much about application development. Add the fact that developers want to move quickly and operators care more about (and tend to be measured on) stability than speed, and it’s easy to see why it was difficult to get the two groups on the same page. Hence, DevOps came to symbolize developers and operators working more closely together, or even merging roles to some degree. - -Of course, calls for improved communications and better-integrated workflows were never just about dev and ops. Business owners should be part of conversations as well. And there are the actual users of the software. Indeed, you can write up an almost arbitrarily long list of stakeholders concerned with the functionality, cost, reliability, and other aspects of software and its associated infrastructure. 
Which raises the question that many have asked: “What’s so special about security that we need a DevSecOps term?” - -I’m glad you asked. - -The first reason is simply that the term serves as a useful reminder. If developers and operations were historically two of the most common silos in IT organizations, security was (and often still is) another. Security people are often thought of as conservative gatekeepers for whom “no” often seems the safest response to new software releases and technologies. Security’s job is to protect the company, even if that means putting the brakes on a speedy development process. - -Many aspects of traditional security, and even its vocabulary, can also seem arcane to non-specialists. This has also contributed to the notion that security is something apart from mainstream IT. I often share the following anecdote: A year or two ago I was leading a security discussion at a [DevOpsDays][4] event in London in which we were talking about traditional security roles. One of the participants raised his hand and admitted that he was one of those security gatekeepers. He went on to say that this was the first time in his career that he had ever been to a conference that wasn’t a traditional security conference like RSA. (He also noted that he was going to broaden both his and his team’s horizons more.) - -So perhaps DevSecOps shouldn’t be a necessary term. But explicitly calling it out seems like a good practice at a time when software security threats are escalating. - -The second reason is that the widespread introduction of cloud-native technologies, particularly those built around containers, is closely tied to DevOps practices. These new technologies are both leading to and enabling greater scale and more dynamic infrastructures. Static security policies and checklists no longer suffice. Security must become a continuous activity. And it must be considered at every stage of your application and infrastructure lifecycle. 
- -**Here are a few examples:** - -You need to secure the pipeline and applications. You need to use trusted sources for content so that you know who has signed off on container images and that they’re up-to-date with the most recent patches. Your continuous integration system must integrate automated security testing. You’ll sometimes hear people talking about “shifting security left,” which means earlier in the process so that problems can be dealt with sooner. But it’s actually better to think about embedding security throughout the entire pipeline at each step of the testing, integration, deployment, and ongoing management process. - -You need to secure the underlying infrastructure. This means securing the host Linux kernel from container escapes and securing containers from each other. It means using a container orchestration platform with integrated security features. It means defending the network by using network namespaces to isolate applications from other applications within a cluster and isolate environments (such as dev, test, and production) from each other. - -And it means taking advantage of the broader security ecosystem such as container content scanners and vulnerability management tools. - -In short, it’s DevSecOps because modern application development and container platforms require a new type of Dev and a new type of Ops. But they also require a new type of Sec. Thus, DevSecOps. 
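The pipeline-security examples above can be made concrete with a small CI gate. The following is only an illustrative sketch: the tool choices (cosign for signature verification, Trivy for vulnerability scanning), the registry path, and the `GIT_COMMIT` variable are assumptions for this sketch, not tooling prescribed by the article.

```shell
#!/bin/sh
# Illustrative CI step: block the build unless the image comes from a
# trusted (signed) source and is free of high-severity known CVEs.
# Tool names, key path, and image name are assumptions for this sketch.
set -eu

IMAGE="registry.example.com/myapp:${GIT_COMMIT}"

# Trusted content: verify the image signature against our public key.
cosign verify --key cosign.pub "$IMAGE"

# Automated security testing: a non-zero exit code fails the pipeline.
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"
```

Repeating a gate like this at each stage of testing, integration, deployment, and ongoing management is one way to embed security throughout the entire pipeline rather than only "shifting it left."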
- -**[See our related story, [Security and the SRE: How chaos engineering can play a key role][5].]** - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/devsecops - -作者:[Gordon Haff][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ghaff -[1]:https://opensource.com/resources/devops -[2]:https://kubernetes.io/ -[3]:https://jenkins.io/ -[4]:https://www.devopsdays.org/ -[5]:https://opensource.com/article/18/3/through-looking-glass-security-sre diff --git a/sources/talk/20180405 Rethinking -ownership- across the organization.md b/sources/talk/20180405 Rethinking -ownership- across the organization.md deleted file mode 100644 index d41a3a86dc..0000000000 --- a/sources/talk/20180405 Rethinking -ownership- across the organization.md +++ /dev/null @@ -1,125 +0,0 @@ -Rethinking "ownership" across the organization -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chain.png?itok=sgAjswFf) -Differences in organizational design don't necessarily make some organizations better than others—just better suited to different purposes. Any style of organization must account for its models of ownership (the way tasks get delegated, assumed, executed) and responsibility (the way accountability for those tasks gets distributed and enforced). Conventional organizations and open organizations treat these issues differently, however, and those differences can be jarring for anyone transitioning from one organizational model to another. But transitions are ripe for stumbling over—oops, I mean, learning from. - -Let's do that.
- -### Ownership explained - -In most organizations (and according to typical project management standards), work on projects proceeds in five phases: - - * Initiation: Assess project feasibility, identify deliverables and stakeholders, assess benefits - * Planning (Design): Craft project requirements, scope, and schedule; develop communication and quality plans - * Executing: Manage task execution, implement plans, maintain stakeholder relationships - * Monitoring/Controlling: Manage project performance, risk, and quality of deliverables - * Closing: Sign-off on completion requirements, release resources - - - -The list above is not exhaustive, but I'd like to add one phase that is often overlooked: the "Adoption" phase, frequently needed for strategic projects where a change to the culture or organization is required for "closing" or completion. - - * Adoption: Socializing the work of the project; providing communication, training, or integration into processes and standard workflows. - - - -Examining project phases is one way to contrast the expression of ownership and responsibility in organizations. - -### Two models, contrasted - -In my experience, "ownership" in a traditional software organization works like this. - -A manager or senior technical associate initiates a project with senior stakeholders and, with the authority to champion and guide the project, they bestow the project on an associate at some point during the planning and execution stages. Frequently, but not always, the groundwork or fundamental design of the work has already been defined and approved—sometimes even partially solved. Employees are expected to see the project through execution and monitoring to completion. - -Employees cut their teeth on a "starter project," where they prove their abilities to a management chain (for example, I recall several such starter projects that were already defined by a manager and architect, and I was assigned to help implement them).
Employees doing a good job on a project for which they're responsible get rewarded with additional opportunities, like a coveted assignment, a new project, or increased responsibility. - -An associate acting as "owner" of work is responsible and accountable for that work (if someone, somewhere, doesn't do their job, then the responsible employee either does the necessary work herself or alerts a manager to the problem.) A sense of ownership begins to feel stable over time: Employees generally work on the same projects, and in the same areas for an extended period. For some employees, it means the development of deep expertise. That's because the social network has tighter integration between people and the work they do, so moving around and changing roles and projects is rather difficult. - -This process works differently in an open organization. - -Associates continually define the parameters of responsibility and ownership in an open organization—typically in light of their interests and passions. Associates have more agency to perform all the stages of the project themselves, rather than have pre-defined projects assigned to them. This places additional emphasis on leadership skills in an open organization, because the process is less about one group of people making decisions for others, and more about how an associate manages responsibilities and ownership (whether or not they roughly follow the project phases while being inclusive, adaptable, and community-focused, for example). - -Being responsible for all project phases can make ownership feel more risky for associates in an open organization. Proposing a new project, designing it, and leading its implementation takes initiative and courage—especially when none of this is pre-defined by leadership. It's important to get continuous buy-in, which comes with questions, criticisms, and resistance not only from leaders but also from peers. 
By default, in open organizations this makes associates leaders; they do much the same work that higher-level leaders do in conventional organizations. And incidentally, this is why Jim Whitehurst, in The Open Organization, cautions us about the full power of "transparency" and the trickiness of getting people's real opinions and thoughts whether we like them or not. The risk is not as high in a traditional organization, because in those organizations leaders manage some of it by shielding associates from heady discussions that arise. - -The reward in an open organization is more opportunity—offers of new roles, promotions, raises, etc., much like in a conventional organization. Yet in the case of open organizations, associates have developed reputations of excellence based on their own initiatives, rather than on pre-sanctioned opportunities from leadership. - -### Thinking about adoption - -Any discussion of ownership and responsibility involves addressing the issue of buy-in, because owning a project means we are accountable to our sponsors and users—our stakeholders. We need our stakeholders to buy into our idea and direction, or we need users to adopt an innovation we've created with our stakeholders. Achieving buy-in for ideas and work is important in each type of organization, and it's difficult in both traditional and open systems—but for different reasons. - -Open organizations better allow highly motivated associates, who are ambitious and skilled, to drive their careers. But support for their ideas is required across the organization, rather than from leadership alone. - -Penetrating a traditional organization's closely knit social ties can be difficult, and it takes time. In such "command-and-control" environments, one would think that employees are simply "forced" to do whatever leaders want them to do. In some cases that's true (e.g., a travel reimbursement system).
However, with more innovative programs, this may not be the case; the adoption of a program, tool, or process can be difficult to achieve by fiat, just like in an open organization. And yet these organizations tend to reduce redundancies of work and effort, because "ownership" here involves leaders exerting responsibility over clearly defined "domains" (and because those domains don't change frequently, knowing "who's who"—who's in charge, who to contact with a request or inquiry or idea—can be easier). - -Open organizations better allow highly motivated associates, who are ambitious and skilled, to drive their careers. But support for their ideas is required across the organization, rather than from leadership alone. Points of contact and sources of immediate support can be less obvious, and this means achieving ownership of a project or acquiring new responsibility takes more time. And even then someone's idea may never get adopted. A project's owner can change—and the idea of "ownership" itself is more flexible. Ideas that don't get adopted can even be abandoned, leaving a great idea unimplemented or incomplete. Because any associate can "own" an idea in an open organization, these organizations tend to exhibit more redundancy. (Some people immediately think this means "wasted effort," but I think it can augment the implementation and adoption of innovative solutions. By comparing these organizations, we can also see why Jim Whitehurst calls this kind of culture "chaotic" in The Open Organization). - -### Two models of ownership - -In my experience, I've seen very clear differences between conventional and open organizations when it comes to the issues of ownership and responsibility. 
- -In a traditional organization: - - * I couldn't "own" things as easily - * I felt frustrated, wanting to take initiative and always needing permission - * I could more easily see who was responsible because stakeholder responsibility was more clearly sanctioned and defined - * I could more easily "find" people, because the organizational network was more fixed and stable - * I more clearly saw what needed to happen (because leadership was more involved in telling me). - - - -Over time, I've learned the following about ownership and responsibility in an open organization: - - * People can feel good about what they are doing because the structure rewards behavior that's more self-driven - * Responsibility is less clear, especially in situations where there's no leader - * In cases where open organizations have "shared responsibility," there is the possibility that no one in the group identifies with being responsible; often there is a lack of role clarity ("who should own this?") - * More people participate - * Someone's leadership skills must be stronger because everyone is "on their own"; you are the leader. - - - -### Making it work - -On the subject of ownership, each type of organization can learn from the other. The important thing to remember here: Don't make changes to one open or conventional value without considering all the values in both organizations. - -Sound confusing? Maybe these tips will help. - -If you're a more conventional organization trying to act more openly: - - * Allow associates to take ownership out of passion or interest that aligns with the strategic goals of the organization. This enactment of meritocracy can help them build a reputation for excellence and execution. - * But don't be afraid to sprinkle in a bit of "high-level perspective" in the spirit of transparency; that is, an associate should clearly communicate plans to their leadership, so the initiative doesn't create irrelevant or unneeded projects.
- * Involving an entire community (as when, for example, the associate gathers feedback from multiple stakeholders and user groups) aids buy-in and creates beneficial feedback from the diversity of perspectives, and this helps direct the work. - * Exploring the work with the community [doesn't mean having to come to consensus with thousands of people][1]. Use the [Open Decision Framework][2] to set limits and be transparent about what those limits are so that feedback and participation are organized and boundaries are understood. - - - -If you're already an open organization, then you should remember: - - * Although associates initiate projects from "the bottom up," leadership needs to be involved to provide guidance, input to the vision, and circulate centralized knowledge about ownership and responsibility, creating a synchronicity of engagement that is transparent to the community. - * Ownership creates responsibility, and the definition and degree of these should be something both associates and leaders agree upon, increasing the transparency of expectations and accountability during the project. Don't make this a matter of oversight or babysitting, but rather [a collaboration where both parties give and take][3]—associates initiate, leaders guide; associates own, leaders support. - - - -Leadership education and mentorship, as it pertains to a particular organization, needs to be available to proactive associates, especially since there is often a huge difference between supporting individual contributors and guiding and coordinating a multiplicity of contributions. - -["Owning your own career"][4] can be difficult when "ownership" isn't a concept an organization completely understands. - -[Subscribe to our weekly newsletter][5] to learn more about open organizations.
- -------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/18/4/rethinking-ownership-across-organization - -作者:[Heidi Hess von Ludewig][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/heidi-hess-von-ludewig -[1]:https://opensource.com/open-organization/17/8/achieving-alignment-in-openorg -[2]:https://opensource.com/open-organization/resources/open-decision-framework -[3]:https://opensource.com/open-organization/17/11/what-is-collaboration -[4]:https://opensource.com/open-organization/17/12/drive-open-career-forward -[5]:https://opensource.com/open-organization/resources/newsletter diff --git a/sources/talk/20180410 Microservices Explained.md b/sources/talk/20180410 Microservices Explained.md deleted file mode 100644 index 1d7e946a12..0000000000 --- a/sources/talk/20180410 Microservices Explained.md +++ /dev/null @@ -1,61 +0,0 @@ -Microservices Explained -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-microservices.jpg?itok=GpoWiDeG) -Microservices is not a new term. Like containers, the concept has been around for a while, but it’s become a buzzword recently as many companies embark on their cloud native journey. But, what exactly does the term microservices mean? Who should care about it? In this article, we’ll take a deep dive into the microservices architecture. - -### Evolution of microservices - -Patrick Chanezon, Chief Developer Advocate for Docker, provided a brief history lesson during our conversation: In the late 1990s, developers started to structure their applications into monoliths, where massive apps had all features and functionalities baked into them. Monoliths were easy to write and manage.
Companies could have a team of developers who built their applications based on customer feedback through sales and marketing teams. The entire developer team would work together to build tightly glued pieces as an app that can be run on their own app servers. It was a popular way of writing and delivering web applications. - -There is a flip side to the monolithic coin. Monoliths slow everything and everyone down. It’s not easy to update one service or feature of the application. The entire app needs to be updated and a new version released. It takes time. There is a direct impact on businesses. Organizations could not respond quickly to keep up with new trends and changing market dynamics. Additionally, scalability was challenging. - -Around 2011, SOA (Service Oriented Architecture) became popular, where developers could cram multi-tier web applications as software services inside a VM (virtual machine). It did allow them to add or update services independent of each other. However, scalability still remained a problem. - -“The scale out strategy then was to deploy multiple copies of the virtual machine behind a load balancer. The problems with this model are several. Your services can not scale or be upgraded independently as the VM is your lowest granularity for scale. VMs are bulky as they carry extra weight of an operating system, so you need to be careful about simply deploying multiple copies of VMs for scaling,” said Madhura Maskasky, co-founder and VP of Product at Platform9. - -Some five years ago when Docker hit the scene and containers became popular, SOA faded out in favor of “microservices” architecture. “Containers and microservices fix a lot of these problems. Containers enable deployment of microservices that are focused and independent, as containers are lightweight.
The Microservices paradigm, combined with a powerful framework with native support for the paradigm, enables easy deployment of independent services as one or more containers as well as easy scale out and upgrade of these,” said Maskasky. - -### What are microservices? - -Basically, a microservice architecture is a way of structuring applications. With the rise of containers, people have started to break monoliths into microservices. “The idea is that you are building your application as a set of loosely coupled services that can be updated and scaled separately under the container infrastructure,” said Chanezon. - -“Microservices seem to have evolved from the more strictly defined service-oriented architecture (SOA), which in turn can be seen as an expression of object-oriented programming concepts for networked applications. Some would call it just a rebranding of SOA, but the term “microservices” often implies the use of even smaller functional components than SOA, RESTful APIs exchanging JSON, lighter-weight servers (often containerized), and modern web technologies and protocols,” said Troy Topnik, SUSE Senior Product Manager, Cloud Application Platform. - -Microservices provides a way to scale development and delivery of large, complex applications by breaking them down so that the individual components can evolve independently from each other. - -“Microservices architecture brings more flexibility through the independence of services, enabling organizations to become more agile in how they deliver new business capabilities or respond to changing market conditions. Microservices allows for using the ‘right tool for the right task’, meaning that apps can be developed and delivered by the technology that will be best for the task, rather than being locked into a single technology, runtime or framework,” said Christian Posta, senior principal application platform specialist, Red Hat. - -### Who consumes microservices?
- -“The main consumers of microservices architecture patterns are developers and application architects,” said Topnik. As far as admins and DevOps engineers are concerned, their role is to build and maintain the infrastructure and processes that support microservices. - -“Developers have been building their applications traditionally using various design patterns for efficient scale out, high availability and lifecycle management of their applications. Microservices done along with the right orchestration framework help simplify their lives by providing a lot of these features out of the box. A well-designed application built using microservices will showcase its benefits to the customers by being easy to scale, upgrade, debug, but without exposing the end customer to complex details of the microservices architecture,” said Maskasky. - -### Who needs microservices? - -Everyone. Microservices is the modern approach to writing and deploying applications more efficiently. If an organization cares about being able to write and deploy its services at a faster rate, it should care about microservices. If you want to stay ahead of your competitors, microservices is the fastest route. Security is another major benefit of the microservices architecture, as this approach allows developers to keep up with security and bug fixes, without having to worry about downtime. - -“Application developers have always known that they should build their applications in a modular and flexible way, but now that enough of them are actually doing this, those that don’t risk being left behind by their competitors,” said Topnik. - -If you are building a new application, you should design it as microservices. You never have to hold up a release if one team is late. New functionalities are available when they're ready, and the overall system never breaks.
- -“We see customers using this as an opportunity to also fix other problems around their application deployment -- such as end-to-end security, better observability, deployment and upgrade issues,” said Maskasky. - -Failing to do so means staying stuck in the traditional stack, to which microservices won’t be able to add any value. If you are building new applications, microservices is the way to go. - -Learn more about cloud-native at [KubeCon + CloudNativeCon Europe][1], coming up May 2-4 in Copenhagen, Denmark. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/4/microservices-explained - -作者:[SWAPNIL BHARTIYA][a] -译者:[runningwater](https://github.com/runningwater) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/arnieswap -[1]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/attend/register/ diff --git a/sources/talk/20180412 Management, from coordination to collaboration.md b/sources/talk/20180412 Management, from coordination to collaboration.md deleted file mode 100644 index 1262f88300..0000000000 --- a/sources/talk/20180412 Management, from coordination to collaboration.md +++ /dev/null @@ -1,71 +0,0 @@ -Management, from coordination to collaboration -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab2.png?itok=uMO9zn5U) - -Any organization is fundamentally a pattern of interactions between people. The nature of those interactions—their quality, their frequency, their outcomes—is the most important product an organization can create.
Perhaps counterintuitively, recognizing this fact has never been more important than it is today—a time when digital technologies are reshaping not only how we work but also what we do when we come together. - - -And yet many organizational leaders treat those interactions between people as obstacles or hindrances to avoid or eliminate, rather than as the powerful sources of innovation they really are. - -That's why we're observing that some of the most successful organizations today are those capable of shifting the way they think about the value of the interactions in the workplace. And to do that, they've radically altered their approach to management and leadership. - -### Moving beyond mechanical management - -Simply put, traditionally managed organizations treat unanticipated interactions between stakeholders as potentially destructive forces—and therefore as costs to be mitigated. - -This view has a long, storied history in the field of economics. But it's perhaps nowhere more clear than in the early writing of Nobel Prize-winning economist [Ronald Coase][1]. In 1937, Coase published "[The Nature of the Firm][2]," an essay about the reasons people organized into firms to work on large-scale projects—rather than tackle those projects alone. Coase argued that when the cost of coordinating workers together inside a firm is less than that of similar market transactions outside, people will tend to organize so they can reap the benefits of lower operating costs. - -But at some point, Coase's theory goes, the work of coordinating interactions between so many people inside the firm actually outweighs the benefits of having an organization in the first place. The complexity of those interactions becomes too difficult to handle. Management, then, should serve the function of decreasing this complexity. Its primary goal is coordination, eliminating the costs associated with messy interpersonal interactions that could slow the firm and reduce its efficiency.
As one Fortune 100 CEO recently told me, "Failures happen most often around organizational handoffs." - -This makes sense to people practicing what I've called "[mechanical management][3]," where managing people is the act of keeping them focused on specific, repeatable, specialized tasks. Here, management's key function is optimizing coordination costs—ensuring that every specialized component of the finely-tuned organizational machine doesn't impinge on the others and slow them down. Managers work to avoid failures by coordinating different functions across the organization (accounts payable, research and development, engineering, human resources, sales, and so on) to get them to operate toward a common goal. And managers create value by controlling information flows, intervening only when functions become misaligned. - -Today, when so many of these traditionally well-defined tasks have become automated, value creation is much more a result of novel innovation and problem solving—not finding new ways to drive efficiency from repeatable processes. But numerous studies demonstrate that innovative, problem-solving activity occurs much more regularly when people work in cross-functional teams—not as isolated individuals or groups constrained by single-functional silos. This kind of activity can lead to what some call "accidental integration": the serendipitous innovation that occurs when old elements combine in new and unforeseen ways. - -That's why working collaboratively has now become a necessity that managers need to foster, not eliminate. - -### From coordination to collaboration - -Reframing the value of the firm—from something that coordinated individual transactions to something that produces novel innovations—means rethinking the value of the relations at the core of our organizations. And that begins with reimagining the task of management, which is no longer concerned primarily with minimizing coordination costs but maximizing cooperation opportunities. 
- -Too few of our tried-and-true management practices have this goal. If they're seeking greater innovation, managers need to encourage more interactions between people in different functional areas, not fewer. A cross-functional team may not be as efficient as one composed of people with the same skill sets. But a cross-functional team is more likely to be the one connecting points between elements in your organization that no one had ever thought to connect (the one more likely, in other words, to achieve accidental integration). - -Working collaboratively has now become a necessity that managers need to foster, not eliminate. - -I have three suggestions for leaders interested in making this shift: - -First, define organizations around processes, not functions. We've seen this strategy work in enterprise IT, for example, in the case of [DevOps][4], where teams emerge around end goals (like a mobile application or a website), not singular functions (like developing, testing, and production). In DevOps environments, the same team that writes the code is responsible for maintaining it once it's in production. (We've found that when the same people who write the code are the ones woken up when it fails at 3 a.m., we get better code.) - -Second, define work around the optimal organization rather than the organization around the work. Amazon is a good example of this strategy. Teams usually stick to the "[Two Pizza Rule][5]" when establishing optimal conditions for collaboration. In other words, Amazon leaders have determined that the best-sized team for maximum innovation is about 10 people, or a group they can feed with two pizzas. If the problem gets bigger than that two-pizza team can handle, they split the problem into two simpler problems, dividing the work between multiple teams rather than adding more people to the single team. 
- -And third, to foster creative behavior and really get people cooperating with one another, do whatever you can to cultivate a culture of honest and direct feedback. Be straightforward and, as I wrote in The Open Organization, let the sparks fly; have frank conversations and let the best ideas win. - -### Let it go - -I realize that asking managers to significantly shift the way they think about their roles can lead to fear and skepticism. Some managers define their performance (and their very identities) by the control they exert over information and people. But the more you dictate the specific ways your organization should do something, the more static and brittle that activity becomes. Agility requires letting go—giving up a certain degree of control. - -Front-line managers will see their roles morph from dictating and monitoring to enabling and supporting. Instead of setting individual-oriented goals, they'll need to set group-oriented goals. Instead of developing individual incentives, they'll need to consider group-oriented incentives. - -Because ultimately, their goal should be to [create the context in which their teams can do their best work][6]. - -[Subscribe to our weekly newsletter][7] to learn more about open organizations.
- --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/18/4/management-coordination-collaboration - -作者:[Jim Whitehurst][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/remyd -[1]:https://news.uchicago.edu/article/2013/09/02/ronald-h-coase-founding-scholar-law-and-economics-1910-2013 -[2]:http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.1937.tb00002.x/full -[3]:https://opensource.com/open-organization/18/2/try-learn-modify -[4]:https://enterprisersproject.com/devops -[5]:https://www.fastcompany.com/3037542/productivity-hack-of-the-week-the-two-pizza-approach-to-productive-teamwork -[6]:https://opensource.com/open-organization/16/3/what-it-means-be-open-source-leader -[7]:https://opensource.com/open-organization/resources/newsletter diff --git a/sources/talk/20180416 For project safety back up your people, not just your data.md b/sources/talk/20180416 For project safety back up your people, not just your data.md deleted file mode 100644 index 0dc6d41fa5..0000000000 --- a/sources/talk/20180416 For project safety back up your people, not just your data.md +++ /dev/null @@ -1,79 +0,0 @@ -For project safety back up your people, not just your data -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel) -The [FSF][1] was founded in 1985, Perl in 1987 ([happy 30th birthday, Perl][2]!), and Linux in 1991. The [term open source][3] and the [Open Source Initiative][4] both came into being in 1998 (and [turn 20 years old][5] in 2018). Since then, free and open source software has grown to become the default choice for software development, enabling incredible innovation. 
- -We, the greater open source community, have come of age. Millions of open source projects exist today, and each year the [GitHub Octoverse][6] reports millions of new public repositories. We rely on these projects every day, and many of us could not operate our services or our businesses without them. - -So what happens when the leaders of these projects move on? How can we help ease those transitions while ensuring that the projects thrive? By teaching and encouraging **succession planning**. - -### What is succession planning? - -Succession planning is a popular topic among business executives, boards of directors, and human resources professionals, but it doesn't often come up with maintainers of free and open source projects. Because the concept is common in business contexts, that's where you'll find most resources and advice about establishing a succession plan. As you might expect, most of these articles aren't directly applicable to FOSS, but they do form a springboard from which we can launch our own ideas about succession planning. - -According to [Wikipedia][7]: - -> Succession planning is a process for identifying and developing new leaders who can replace old leaders when they leave, retire, or die. - -In my opinion, this definition doesn't apply very well to free and open source software projects. I primarily object to the use of the term leaders. For the collaborative projects of FOSS, everyone can be some form of leader. Roles other than "project founder" or "benevolent dictator for life" are just as important. Any project role that is measured by bus factor is one that can benefit from succession planning. - -> A project's bus factor is the number of team members who, if hit by a bus, would endanger the smooth operation of the project. The smallest and worst bus factor is 1: when only a single person's loss would put the project in jeopardy. It's a somewhat grim but still very useful concept. 
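The bus factor can even be audited mechanically. Below is a rough, hypothetical sketch (the tasks and names are invented for illustration, not drawn from any real project) of how a maintainer might map each critical task to the people able to perform it, then flag every task that rests on a single person:

```python
# Hypothetical mapping of critical project tasks to the community
# members who can currently perform them (names are illustrative).
tasks = {
    "cutting releases": ["jen"],
    "triaging security reports": ["alex"],
    "maintaining CI": ["jen", "sam"],
    "publishing documentation": ["sam", "priya"],
}

# A task's bus factor is simply how many people can cover it; a value
# of 1 means losing one person endangers that part of the project.
for task, people in sorted(tasks.items()):
    if len(people) == 1:
        print(f"AT RISK (bus factor 1): '{task}' depends solely on {people[0]}")
```

A task flagged this way is exactly where succession planning should start.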
- -I propose that instead of viewing succession planning as a leadership pipeline, free and open source projects should view it as a skills pipeline. What sorts of skills does your project need to continue functioning well, and how can you make sure those skills always exist in your community? - -### Benefits of succession planning - -When I talk to project maintainers about succession planning, they often respond with something like, "We've been pretty successful so far without having to think about this. Why should we start now?" - -Aside from the fact that the phrase, "We've always done it this way" is probably one of the most dangerous in the English language, and hearing (or saying) it should send up red flags in any community, succession planning provides plenty of very real benefits: - - * **Continuity** : When someone leaves, what happens to the tasks they were performing? Succession planning helps ensure those tasks continue uninterrupted and no one is left hanging. - * **Avoiding a power vacuum** : When a person leaves a role with no replacement, it can lead to confusion, delays, and often most damaging, political woes. After all, it's much easier to fix delays than hurt feelings. A succession plan helps alleviate the insecure and unstable time when someone in a vital role moves on. - * **Increased project/organization longevity** : The thinking required for succession planning is the same sort of thinking that contributes to project longevity. Ensuring continuity in leadership, culture, and productivity also helps ensure the project will continue. It will evolve, but it will survive. - * **Reduced workload/pressure on current leaders** : When a single team member performs a critical role in the project, they often feel pressure to be constantly "on." This can lead to burnout and worse, resignations. A succession plan ensures that all important individuals have a backup or successor. 
The knowledge that someone can take over is often enough to reduce the pressure, but it also means that key players can take breaks or vacations without worrying that their role will be neglected in their absence. - * **Talent development** : Members of the FOSS community talk a lot about mentoring these days, and that's great. However, most of the conversation is around mentoring people to contribute code to a project. There are many different ways to contribute to free and open source software projects beyond programming. A robust succession plan recognizes these other forms of contribution and provides mentoring to prepare people to step into critical non-programming roles. - * **Inspiration for new members** : It can be very motivational for new or prospective community members to see that a project uses its succession plan. Not only does it show them that the project is well-organized and considers its own health and welfare as well as that of its members, but it also clearly shows new members how they can grow in the community. An obvious path to critical roles and leadership positions inspires new members to stick around to walk that path. - * **Diversity of thoughts/get out of a rut** : Succession plans provide excellent opportunities to bring in new people and ideas to the critical roles of a project. [Studies show][8] that diverse leadership teams are more effective and the projects they lead are more innovative. Using your project's succession plan to mentor people from different backgrounds and with different perspectives will help strengthen and evolve the project in a healthy way. - * **Enabling meritocracy** : Unfortunately, what often passes for meritocracy in many free and open source projects is thinly veiled hostility toward new contributors and diverse opinions—hostility that's delivered from within an echo chamber. 
Meritocracy without a mentoring program and healthy governance structure is simply an excuse to practice subjective discrimination while hiding behind unexpressed biases. A well-executed succession plan helps teams reach the goal of a true meritocracy. What counts as merit for any given role, and how to reach that level of merit, are openly, honestly, and completely documented. The entire community will be able to see and judge which members are on the path or deserve to take on a particular critical role. - - - -### Why it doesn't happen - -Succession planning isn't a panacea, and it won't solve all problems for all projects, but as described above, it offers a lot of worthwhile benefits to your project. - -Despite that, very few free and open source projects or organizations put much thought into it. I was curious why that might be, so I asked around. I learned that the reasons for not having a succession plan fall into one of five different buckets: - - * **Too busy** : Many people recognize succession planning (or lack thereof) as a problem for their project but just "hadn't ever gotten around to it" because there's "always something more important to work on." I understand and sympathize with this, but I suspect the problem may have more to do with prioritization than with time availability. - * **Don't think of it** : Some people are so busy and preoccupied that they haven't considered, "Hey, what would happen if Jen had to leave the project?" This never occurs to them. After all, Jen's always been there when they need her, right? And that will always be the case, right? - * **Don't want to think of it** : Succession planning shares a trait with estate planning: It's associated with negative feelings like loss and can make people address their own mortality. Some people are uncomfortable with this and would rather not consider it at all than take the time to make the inevitable easier for those they leave behind. 
- * **Attitude of current leaders** : A few of the people with whom I spoke didn't want to recognize that they're replaceable, or to consider that they may one day give up their power and influence on the project. While this was (thankfully) not a common response, it was alarming enough to deserve its own bucket. Failure of someone in a critical role to recognize or admit that they won't be around forever can set a project up for failure in the long run. - * **Don't know where to start** : Many people I interviewed realized that succession planning is something that their project should be doing. They were even willing to carve out the time to tackle this very large task. What they lacked was any guidance on how to start the process of creating a succession plan. - - - -As you can imagine, something as important and people-focused as a succession plan isn't easy to create, and it doesn't happen overnight. Also, there are many different ways to do it. Each project has its own needs and critical roles. One size does not fit all where succession plans are concerned. - -There are, however, some guidelines for how every project could proceed with the succession plan creation process. I'll cover these guidelines in my next article.
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/passing-baton-succession-planning-foss-leadership - -Author: [VM (Vicky) Brasseur][a] -Translator: [译者ID](https://github.com/译者ID) -Proofreader: [校对者ID](https://github.com/校对者ID) -Topic selection: [lujun9972](https://github.com/lujun9972) - -This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/) - -[a]:https://opensource.com/users/vmbrasseur -[1]:http://www.fsf.org -[2]:https://opensource.com/article/17/10/perl-turns-30 -[3]:https://opensource.com/article/18/2/coining-term-open-source-software -[4]:https://opensource.org -[5]:https://opensource.org/node/910 -[6]:https://octoverse.github.com -[7]:https://en.wikipedia.org/wiki/Succession_planning -[8]:https://hbr.org/2016/11/why-diverse-teams-are-smarter diff --git a/sources/talk/20180417 How to develop the FOSS leaders of the future.md b/sources/talk/20180417 How to develop the FOSS leaders of the future.md deleted file mode 100644 index a65dc9dabd..0000000000 --- a/sources/talk/20180417 How to develop the FOSS leaders of the future.md +++ /dev/null @@ -1,93 +0,0 @@ -How to develop the FOSS leaders of the future -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T) -Do you hold a critical role in a free and open source software project? Would you like to make it easier for the next person to step into your shoes, while also giving yourself the freedom to take breaks and avoid burnout? - -Of course you would! But how do you get started? - -Before you do anything, remember that this is a free or open source project. As with all things in FOSS, your succession planning should happen in collaboration with others. The [Principle of Least Astonishment][1] also applies: Don't work on your plan in isolation, then spring it on the entire community.
Work together and publicly, so no one is caught off guard when the cultural or governance changes start happening. - -### Identify and analyze critical roles - -As a project leader, your first step is to identify the critical roles in your community. While it can help to ask each community member what role they perform, it's important to realize that most people perform multiple roles. Make sure you consider every role that each community member plays in the project. - -Once you've identified the roles and determined which ones are critical to your project, the next step is to list all of the duties and responsibilities for each of those critical roles. Be very honest here. List the duties and responsibilities you think each role has, then ask the person who performs that role to list the duties the role actually has. You'll almost certainly find that the second list is longer than the first. - -### Refactor large roles - -During this process, have you discovered any roles that encompass a large number of duties and responsibilities? Large roles are like large methods in your code: They're a sign of a problem, and they need to be refactored to make them easier to maintain. One of the easiest and most effective steps in succession planning for FOSS projects is to split up each large role into two or more smaller roles and distribute these to other community members. With that one step, you've greatly improved the [bus factor][2] for your project. Even better, you've made each one of those new, smaller roles much more accessible and less intimidating for new community members. People are much more likely to volunteer for a role if it's not a massive burden. - -### Limit role tenure - -Another way to make a role more enticing is to limit its tenure. Community members will be more willing to step into roles that aren't open-ended. They can look at their life and work plans and ask themselves, "Can I take on this role for the next eighteen months?"
(or whatever term limit you set). - -Setting term limits also helps those who are currently performing the role. They know when they can set aside those duties and move on to something else, which can help alleviate burnout. Also, setting a term limit creates a pool of people who have performed the role and are qualified to step in if needed, which can also mitigate burnout. - -### Knowledge transfer - -Once you've identified and defined the critical roles in your project, most of what remains is knowledge transfer. Even small projects involve a lot of moving parts and knowledge that needs to be where everyone can see, share, use, and contribute to it. What sort of knowledge should you be collecting? The answer will vary by project, needs, and role, but here are some of the most common (and commonly overlooked) types of information needed to implement a succession plan: - - * **Roles and their duties** : You've spent a lot of time identifying, analyzing, and potentially refactoring roles and their duties. Make sure this information doesn't get lost. - * **Policies and procedures** : None of those duties occur in a vacuum. Each duty must be performed in a particular way (procedures) when particular conditions are met (policies). Take stock of these details for every duty of every role. - * **Resources** : What accounts are associated with the project, or are necessary for it to operate? Who helps you with meetup space, sponsorship, or in-kind services? Such information is vital to project operation but can be easily lost when the responsible community member moves on. - * **Credentials** : Ideally, every external service required by the project will use a login that goes to an email address designated for a specific role (`sre@project.org`) rather than to a personal address. Every role's address should include multiple people on the distribution list to ensure that important messages (such as downtime or bogus "forgot password" requests) aren't missed. 
The credentials for every service should be kept in a secure keystore, with access limited to as few people as possible. - * **Project history** : All community members benefit greatly from learning the history of the project. Collecting project history information can clarify why decisions were made in the past, for example, and reveal otherwise unexpressed requirements and values of the community. Project histories can also help new community members understand "inside jokes," jargon, and other cultural factors. - * **Transition plans** : A succession plan doesn't do much good if project leaders haven't thought through how to transition a role from one person to another. How will you locate and prepare people to take over a critical role? Since the project has already done a lot of thinking and knowledge transfer, transition plans for each role may be easier to put together. - - - -Doing a complete knowledge transfer for all roles in a project can be an enormous undertaking, but the effort is worth it. To avoid being overwhelmed by such a daunting task, approach it one role at a time, finishing each one before you move on to the next. Limiting the scope in this way makes both progress and success much more likely. - -### Document, document, document! - -Succession planning takes time. The community will be making a lot of decisions and collecting a lot of information, so make sure nothing gets lost. It's important to document everything (not just in email threads). Where knowledge is concerned, documentation scales and people do not. Include even the things that you think are obvious—what's obvious to a more seasoned community member may be less so to a newbie, so don't skip steps or information. - -Gather these decisions, processes, policies, and other bits of information into a single place, even if it's just a collection of markdown files in the main project repository. The "how" and "where" of the documentation can be sorted out later.
It's better to capture key information first and spend time [bike-shedding][3] a documentation system later. - -Once you've collected all of this information, you should understand that it's unlikely that anyone will read it. I know, it seems unfair, but that's just how things usually work out. The reason? There is simply too much documentation and too little time. To address this, add an abstract, or summary, at the top of each item. Often that's all a person needs, and if not, the complete document is there for a deep dive. Recognizing and adapting to how most people use documentation increases the likelihood that they will use yours. - -Above all, don't skip the documentation process. Without documentation, succession plans are impossible. - -### New leaders - -If you don't yet perform a critical role but would like to, you can contribute to the succession planning process while apprenticing your way into one of those roles. - -For starters, actively look for opportunities to learn and contribute. Shadow people in critical roles. You'll learn how the role is done, and you can document it to help with the succession planning process. You'll also get the opportunity to see whether it's a role you're interested in pursuing further. - -Asking for mentorship is a great way to get yourself closer to taking on a critical role in the project. Even if you haven't heard that mentoring is available, it's perfectly OK to ask about it. The people already in those roles are usually happy to mentor others, but often are too busy to think about offering mentorship. Asking is a helpful reminder to them that they should be helping to train people to take over their role when they need a break. - -As you perform your own tasks, actively seek out feedback. This will not only improve your skills, but it shows that you're interested in doing a better job for the community. This commitment will pay off when your project needs people to step into critical roles. 
- -Finally, as you communicate with more experienced community members, take note of anecdotes about the history of the project and how it operates. This history is very important, especially for new contributors or people stepping into critical roles. It provides the context necessary for new contributors to understand what things do or don't work and why. As you hear these stories, document them so they can be passed on to those who come after you. - -### Succession planning examples - -While too few FOSS projects are actively considering succession planning, some are doing a great job of trying to reduce their bus factor and prevent maintainer burnout. - -[Exercism][4] isn't just an excellent tool for gaining fluency in programming languages. It's also an [open source project][5] that goes out of its way to help contributors [land their first patch][6]. In 2016, the project reviewed the health of each language track and [discovered that many were woefully maintained][7]. There simply weren't enough people covering each language, so maintainers were burning out. The Exercism community recognized the risk this created and pushed to find new maintainers for as many language tracks as possible. As a result, the project was able to revive several tracks from near-death and develop a structure for inviting people to become maintainers. - -The purpose of the [Vox Pupuli][8] project is to serve as a sort of succession plan for the [Puppet module][9] community. When a maintainer no longer wishes or is able to work on their module, they can bequeath it to the Vox Pupuli community. This community of 30 collaborators shares responsibility for maintaining all the modules it accepts into the project. The large number of collaborators ensures that no single person bears the burden of maintenance while also providing a long and fruitful life for every module in the project. - -These are just two examples of how some FOSS projects are tackling succession planning. 
Share your stories in the comments below. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/succession-planning-how-develop-foss-leaders-future - -Author: [VM (Vicky) Brasseur][a] -Translator: [译者ID](https://github.com/译者ID) -Proofreader: [校对者ID](https://github.com/校对者ID) -Topic selection: [lujun9972](https://github.com/lujun9972) - -This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/) - -[a]:https://opensource.com/users/vmbrasseur -[1]:https://en.wikipedia.org/wiki/Principle_of_least_astonishment -[2]:https://en.wikipedia.org/wiki/Bus_factor -[3]:https://en.wikipedia.org/wiki/Law_of_triviality -[4]:http://exercism.io -[5]:https://github.com/exercism/exercism.io -[6]:https://github.com/exercism/exercism.io/blob/master/CONTRIBUTING.md -[7]:https://tinyletter.com/exercism/letters/exercism-track-health-check-new-maintainers -[8]:https://voxpupuli.org -[9]:https://forge.puppet.com diff --git a/sources/talk/20180418 Is DevOps compatible with part-time community teams.md b/sources/talk/20180418 Is DevOps compatible with part-time community teams.md deleted file mode 100644 index e78b96959f..0000000000 --- a/sources/talk/20180418 Is DevOps compatible with part-time community teams.md +++ /dev/null @@ -1,73 +0,0 @@ -Is DevOps compatible with part-time community teams? -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1) -DevOps seems to be the talk of the IT world of late—and for good reason. DevOps has streamlined the process and production of IT development and operations. However, there is also an upfront cost to embracing a DevOps ideology, in terms of time, effort, knowledge, and financial investment. Larger companies may have the bandwidth, budget, and time to make the necessary changes, but is it feasible for part-time, resource-strapped communities?
- -Part-time communities are teams of like-minded people who take on projects outside of their normal work schedules. The members of these communities are driven by passion and a shared purpose. For instance, one such community is the [ALM | DevOps Rangers][1]. With 100 rangers engaged across the globe, a DevOps solution may seem daunting; nonetheless, they took on the challenge and embraced the ideology. Through their example, we've learned that DevOps is not only feasible but desirable in smaller teams. To read about their transformation, check out [How DevOps eliminates development bottlenecks][2]. - -> “DevOps is the union of people, process, and products to enable continuous delivery of value to our end customers.” - Donovan Brown - -### The cost of DevOps - -As stated above, there is an upfront "cost" to DevOps. The cost manifests itself in many forms, such as the time and collaboration between development, operations, and other stakeholders, planning a smooth-flowing process that delivers continuous value, finding the best DevOps products, and training the team in new technologies, to name a few. This aligns directly with Donovan's definition of DevOps, in fact—a **process** for delivering **continuous value** and the **people** who make that happen. - -Streamlined DevOps takes a lot of planning and training just to create the process, and that doesn't even consider the testing phase. We also can't forget the existing in-flight projects that need to be converted into the new system. While the cost increases the more pervasive the transformation—for instance, if an organization aims to unify its entire development organization under a single process, then that would cost more versus transforming a single pilot or subset of the entire portfolio—these upfront costs must be addressed regardless of their scale. 
There are a lot of resources and products already out there that can be implemented for a smoother transition—but again, we face the time and effort that will be necessary just to research which ones might work best. - -In the case of the ALM | DevOps Rangers, they had to halt all projects for a couple of sprints to set up the initial process. Many organizations would not be able to do that. Even part-time groups might have very good reasons to keep things moving, which only adds to the complexity. In such scenarios, additional cutover planning (and therefore additional cost) is needed, and the overall state of the community is one of flux and change, which adds risk, which—you guessed it—requires more cost to mitigate. - -There is also an ongoing "cost" that teams will face with a DevOps mindset: Simple maintenance of the system, training and transitioning new team members, and keeping up with new, improved technologies are all a part of the process. - -### DevOps for a part-time community - -Whereas larger companies can dedicate a single manager or even a team to the task of overseeing the continuous integration and continuous deployment (CI/CD) pipelines, part-time community teams don't have the bandwidth to give. With such a massive undertaking, we must ask: Is it even worth it for groups with fewer resources to take on DevOps for their community? Or should they abandon the idea of DevOps altogether? - -The answer to that is dependent on a few variables, such as the ability of the teams to be self-managing, the time and effort each member is willing to put into the transformation, and the dedication of the community to the process. - -### Example: Benefits of DevOps in a part-time community - -Luckily, we aren't without examples to demonstrate just how DevOps can benefit a smaller group. Let's take a quick look at the ALM Rangers again.
The results from their transformation help us understand how DevOps changed their community: - -![](https://opensource.com/sites/default/files/images/life-uploads/devops.png) - -As illustrated, there are some huge benefits for part-time community teams. Planning goes from long, arduous design sessions to a quick prototyping and storyboarding process. Builds become automated, reliable, and resilient. Testing and bug detection are proactive instead of reactive, which turns into a happier clientele. Multiple full-time program managers are replaced with self-managing teams with a single part-time manager to oversee projects. Teams become smaller and more efficient, which equates to higher production rates and higher-quality project delivery. With results like these, it's hard to argue against DevOps. - -Still, the upfront and ongoing costs aren't right for every community. The single most important aspect of any DevOps transformation is the mindset of the people involved. Adopting the idea of self-managing teams who work in autonomy instead of the traditional chain-of-command scheme can be a challenge for any group. The members must be willing to work independently without a lot of oversight and take ownership of their features and user experience, but at the same time, work in a setting that is fully transparent to the rest of the community. **The success or failure of a DevOps strategy lies with the team.** - -### Making the DevOps transition in 4 steps - -Another important question to ask: How can a low-bandwidth group make such a massive transition? The good news is that a DevOps transformation doesn’t need to happen all at once. Taken in smaller, more manageable steps, organizations of any size can embrace DevOps. - - 1. Determine why DevOps may be the solution you need. Are your projects bottlenecking? Are they running over budget and over time? Of course, these concerns are common for any community, big or small.
Answering these questions leads us to step two: - 2. Develop the right framework to improve the engineering process. DevOps is all about automation, collaboration, and streamlining. Rather than trying to fit everyone into the same process box, the framework should support the work habits, preferences, and delivery needs of the community. Some broad standards should be established (for example, that all teams use a particular version control system). Beyond that, however, let the teams decide their own best process. - 3. Use products that are already available if they meet your needs. Why reinvent the wheel? - 4. Finally, implement and test the actual DevOps solution. This is, of course, where the actual value of DevOps is realized. There will likely be a few issues and some heartburn, but it will all be worth it in the end because, once established, the products of the community’s work will be nimbler and faster for the users. - - - -### Reuse DevOps solutions - -One benefit to creating effective CI/CD pipelines is the reusability of those pipelines. Although there is no one-size-fits-all solution, anyone can adopt a process. There are several pre-made templates available for you to examine, such as build templates on VSTS, ARM templates to deploy Azure resources, and "cookbook"-style textbooks from technical publishers. Once it identifies a process that works well, a community can also create its own template by defining and establishing standards and making that template easily discoverable by the entire community. For more information on DevOps journeys and tools, check out [this site][3]. - -### Summary - -Overall, the success or failure of DevOps relies on the culture of a community. It doesn't matter if the community is a large, resource-rich enterprise or a small, resource-sparse, part-time group. DevOps will still bring solid benefits. The difference is in the approach for adoption and the scale of that adoption.
There are both upfront and ongoing costs, but the value greatly outweighs those costs. Communities can use any of the powerful tools available today for their pipelines, and they can also leverage reusability, such as templates, to reduce upfront implementation costs. DevOps is most certainly feasible—and even critical—for the success of part-time community teams. - -**[See our related story, [How DevOps eliminates development bottlenecks][4].]** - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/devops-compatible-part-time-community-teams - -Author: [Edward Fry][a] -Topic selection: [lujun9972](https://github.com/lujun9972) -Translator: [译者ID](https://github.com/译者ID) -Proofreader: [校对者ID](https://github.com/校对者ID) - -This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/) - -[a]:https://opensource.com/users/edwardf -[1]:https://github.com/ALM-Rangers -[2]:https://opensource.com/article/17/11/devops-rangers-transformation -[3]:https://www.visualstudio.com/devops/ -[4]:https://opensource.com/article/17/11/devops-rangers-transformation diff --git a/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md b/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md deleted file mode 100644 index 29e4ea2f48..0000000000 --- a/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md +++ /dev/null @@ -1,109 +0,0 @@ -3 tips for organizing your open source project's workflow on GitHub -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) - -Managing an open source project is challenging work, and the challenges grow as a project grows. Eventually, a project may need to meet different requirements and span multiple repositories.
These problems aren't technical, but they are important to solve to scale a technical project. [Business process management][1] methodologies such as agile and [kanban][2] bring a method to the madness. Developers and managers can make realistic decisions for estimating deadlines and team bandwidth with an organized development focus. - -At the [UNICEF Office of Innovation][3], we use GitHub project boards to organize development on the MagicBox project. [MagicBox][4] is a full-stack application and open source platform to serve and visualize data for decision-making in humanitarian crises and emergencies. The project spans multiple GitHub repositories and works with multiple developers. With GitHub project boards, we organize our work across multiple repositories to better understand development focus and team bandwidth. - -Here are three tips from the UNICEF Office of Innovation on how to organize your open source projects with the built-in project boards on GitHub. - -### 1\. Bring development discussion to issues and pull requests - -Transparency is a critical part of an open source community. When mapping out new features or milestones for a project, the community needs to see and understand a decision or why a specific direction was chosen. Filing new GitHub issues for features and milestones is an easy way for someone to follow the project direction. GitHub issues and pull requests are the cards (or building blocks) of project boards. To be successful with GitHub project boards, you need to use issues and pull requests. - - -![GitHub issues for magicbox-maps, MagicBox's front-end application][6] - -GitHub issues for magicbox-maps, MagicBox's front-end application. - -The UNICEF MagicBox team uses GitHub issues to track ongoing development milestones and other tasks to revisit. The team files new GitHub issues for development goals, feature requests, or bugs. These goals or features may come from external stakeholders or the community. 
We also use the issues as a place for discussion on those tasks. This makes it easy to cross-reference in the future and visualize upcoming work on one of our projects. - -Once you begin using GitHub issues and pull requests as a way of discussing and using your project, organizing with project boards becomes easier. - -### 2\. Set up kanban-style project boards - -GitHub issues and pull requests are the first step. After you begin using them, it may become harder to visualize what work is in progress and what work is yet to begin. [GitHub's project boards][7] give you a platform to visualize and organize cards into different columns. - -There are two types of project boards available: - - * **Repository** : Boards for use in a single repository - * **Organization** : Boards for use in a GitHub organization across multiple repositories (but private to organization members) - - - -The choice you make depends on the structure and size of your projects. The UNICEF MagicBox team uses boards for development and documentation at the organization level, and then repository-specific boards for focused work (like our [community management board][8]). - -#### Creating your first board - -Project boards are found on your GitHub organization page or on a specific repository. You will see the Projects tab in the same row as Issues and Pull requests. From the page, you'll see a green button to create a new project. - -There, you can set a name and description for the project. You can also choose templates to set up basic columns and sorting for your board. Currently, the only options are for kanban-style boards. - - -![Creating a new GitHub project board.][10] - -Creating a new GitHub project board. - -After creating the project board, you can make adjustments to it as needed. You can create new columns, [set up automation][11], and add pre-existing GitHub issues and pull requests to the project board. 
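Because project boards are built from issues and pull requests, it can help to script a quick survey of what is open before adding cards in bulk. Below is a minimal sketch, not from the original article, that builds the GitHub REST v3 issues URL and fetches issue titles; it assumes a public repository and unauthenticated access, and note that this endpoint also returns pull requests, which GitHub models as issues.

```python
# Sketch: list a repository's open issues via the GitHub REST v3 API.
# Assumptions: public repo, no auth token, default pagination (first page).
import json
import urllib.request

API_ROOT = "https://api.github.com"

def issues_url(owner: str, repo: str, state: str = "open") -> str:
    """Build the REST v3 URL that lists a repository's issues."""
    return f"{API_ROOT}/repos/{owner}/{repo}/issues?state={state}"

def fetch_issue_titles(owner: str, repo: str):
    """Return the titles of open issues (requires network access)."""
    with urllib.request.urlopen(issues_url(owner, repo)) as resp:
        return [item["title"] for item in json.load(resp)]

print(issues_url("unicef", "magicbox"))
# → https://api.github.com/repos/unicef/magicbox/issues?state=open
```

Passing `state="closed"` (or `"all"`) surveys finished work instead, which is handy when reviewing what a board column should archive.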
- -You may notice new options for the metadata in each GitHub issue and pull request. Inside of an issue or pull request, you can add it to a project board. If you use automation, it will automatically enter a column you configured. - -### 3\. Build project boards into your workflow - -After you set up a project board and populate it with issues and pull requests, you need to integrate it into your workflow. Project boards are effective only when actively used. The UNICEF MagicBox team uses the project boards as a way to track our progress as a team, update external stakeholders on development, and estimate team bandwidth for reaching our milestones. - - -![Tracking progress][13] - -Tracking progress with GitHub project boards. - -If you are an open source project and community, consider using the project boards for development-focused meetings. It also helps remind you and other core contributors to spend five minutes each day updating progress as needed. If you're at a company using GitHub to do open source work, consider using project boards to update other team members and encourage participation inside of GitHub issues and pull requests. - -Once you begin using the project board, yours may look like this: - - -![Development progress board][15] - -Development progress board for all UNICEF MagicBox repositories in organization-wide GitHub project boards. - -### Open alternatives - -GitHub project boards require your project to be on GitHub to take advantage of this functionality. While GitHub is a popular repository for open source projects, it's not an open source platform itself. Fortunately, there are open source alternatives to GitHub with tools to replicate the workflow explained above. [GitLab Issue Boards][16] and [Taiga][17] are good alternatives that offer similar functionality. - -### Go forth and organize! - -With these tools, you can bring a method to the madness of organizing your open source project. 
These three tips for using GitHub project boards encourage transparency in your open source project and make it easier to track progress and milestones in the open. - -Do you use GitHub project boards for your open source project? Have any tips for success that aren't mentioned in the article? Leave a comment below to share how you make sense of your open source projects. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/keep-your-project-organized-git-repo - -作者:[Justin W.Flory][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jflory -[1]:https://en.wikipedia.org/wiki/Business_process_management -[2]:https://en.wikipedia.org/wiki/Kanban_(development) -[3]:http://unicefstories.org/about/ -[4]:http://unicefstories.org/magicbox/ -[5]:/file/393356 -[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-open-issues.png?itok=OcWPX575 (GitHub issues for magicbox-maps, MagicBox's front-end application) -[7]:https://help.github.com/articles/about-project-boards/ -[8]:https://github.com/unicef/magicbox/projects/3?fullscreen=true -[9]:/file/393361 -[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-project-boards-create-board.png?itok=pp7SXH9g (Creating a new GitHub project board.) 
-[11]:https://help.github.com/articles/about-automation-for-project-boards/
-[12]:/file/393351
-[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-issues-metadata.png?itok=xp5auxCQ (Tracking progress)
-[14]:/file/393366
-[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-project-boards-overview.png?itok=QSbOOOkF (Development progress board)
-[16]:https://about.gitlab.com/features/issueboard/
-[17]:https://taiga.io/
diff --git a/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md b/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md
deleted file mode 100644
index 10511c3a7d..0000000000
--- a/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md
+++ /dev/null
@@ -1,39 +0,0 @@
-What You Don’t Know About Linux Open Source Could Be Costing You More Than You Think
-======
-
-If you would like to test out Linux before completely switching to it as your everyday driver, there are a number of ways to do it. To begin with, and perhaps most of all, Linux is open source computer software, and in many cases it outperforms Windows on the same hardware.
-
-If you’ve always wished to try out Linux but were never certain where to begin, have a look at our guide on how to begin with Linux. Linux is not all that different from Windows or Mac OS; it’s basically an operating system, but the leading difference is the fact that it is free for everyone. Employing Linux today isn’t any more challenging than switching from one sort of smartphone platform to another.
-
-You’re most likely already using Linux, whether you are aware of it or not. Linux has a lot of distinct versions to suit nearly any sort of user.
Today, Linux is a small no-brainer. It plays an essential part in keeping our world going and runs much of the underbelly of cloud operations.
-
-Even then, a lot depends on the build of Linux that you’re using. Although the core pieces of the Linux operating system are usually common, there are lots of distributions of Linux, each with its own software choices. While Linux might seem intimidatingly intricate and technical to the ordinary user, contemporary Linux distros are in reality very user-friendly, and it’s no longer the case that you have to have advanced skills to get started using them. Linux was the very first major Internet-centred open-source undertaking. Distributions are beginning to increase the range of patches they push automatically, but several of the security patches continue to be opt-in only.
-
-You are able to remove Linux later if you need to, and in the meantime it supplies a huge library of functionality that can be leveraged to accelerate development. Open source projects such as Linux are incredibly capable because of the contributions that all these individuals have added over time.
-
-### Life After Linux Open Source
-
-The development edition of the manual typically has more documentation, but it may also document new features that aren’t in the released version. Fortunately, Linux is so lightweight that you can simply jump to another version if you don’t like the one you have. It’s extremely hard to modify the compiled version of most applications, and nearly impossible to see exactly how the developer created the different sections of the program.
-
-As for the challenges of a bottoms-up go-to-market: it’s hard to grasp the difference between your organic product (the product your developers use and love) and your company product, which ought to be, effectively, a different product. It’s going to be hard for developers to switch, because developers are now incredibly important and influential in the purchasing process.
-
-When a program is installed, it still has to be configured, and you may discover that the software you bought doesn’t actually do what you want it to do. Open source software is much more common than you believe, and an amazing philosophy to live by. Employing open source software gives an inexpensive method to bootstrap a business, and closed source software is generally more difficult to deal with. So as far as applications go, you’re all set if you are prepared to learn an alternative program or to find a way to make yours run on Linux. Possibly the most famous copyleft software is Linux.
- -Article sponsored by [Vegas Palms online slots][1] - - --------------------------------------------------------------------------------- - -via: https://linuxaria.com/article/what-you-dont-know-about-linux-open-source-could-be-costing-to-more-than-you-think - -作者:[Marc Fisher][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://linuxaria.com -[1]:https://www.vegaspalmscasino.com/casino-games/slots/ diff --git a/sources/talk/20180424 There-s a Server in Every Serverless Platform.md b/sources/talk/20180424 There-s a Server in Every Serverless Platform.md deleted file mode 100644 index 9bc935c06d..0000000000 --- a/sources/talk/20180424 There-s a Server in Every Serverless Platform.md +++ /dev/null @@ -1,87 +0,0 @@ -There’s a Server in Every Serverless Platform -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/servers.jpg?itok=i_gyObMP) -Serverless computing or Function as a Service (FaaS) is a new buzzword created by an industry that loves to coin new terms as market dynamics change and technologies evolve. But what exactly does it mean? What is serverless computing? - -Before getting into the definition, let’s take a brief history lesson from Sirish Raghuram, CEO and co-founder of Platform9, to understand the evolution of serverless computing. - -“In the 90s, we used to build applications and run them on hardware. Then came virtual machines that allowed users to run multiple applications on the same hardware. But you were still running the full-fledged OS for each application. The arrival of containers got rid of OS duplication and process level isolation which made it lightweight and agile,” said Raghuram. 
-
-Serverless, specifically Function as a Service, takes it to the next level: users can now write individual functions and have them built, shipped, and run without any of the complexity of the underlying machinery. There is no need to worry about spinning up containers with Kubernetes. Everything is hidden behind the scenes.
-
-“That’s what is driving a lot of interest in function as a service,” said Raghuram.
-
-### What exactly is serverless?
-
-There is no single definition of the term, but to build some consensus around the idea, the [Cloud Native Computing Foundation (CNCF)][1] Serverless Working Group wrote a [white paper][2] to define serverless computing.
-
-According to the white paper, “Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.”
-
-Ken Owens, a member of the Technical Oversight Committee at CNCF, said that the primary goal of serverless computing is to help users build and run their applications without having to worry about the cost and complexity of servers in terms of provisioning, management, and scaling.
-
-“Serverless is a natural evolution of cloud-native computing. The CNCF is advancing serverless adoption through collaboration and community-driven initiatives that will enable interoperability,” [said][3] Chris Aniszczyk, COO, CNCF.
-
-### It’s not without servers
-
-First things first, don’t get fooled by the term “serverless.” There are still servers in serverless computing. Remember what Raghuram said: all the machinery is hidden; it’s not gone.
-
-The clear benefit here is that developers need not concern themselves with tasks that don’t add any value to their deliverables.
Instead of worrying about managing functions, they can dedicate their time to adding features and building apps that add business value. Time is money, and every minute saved in management goes toward innovation. Developers don’t have to worry about scaling for peaks and valleys; it’s automated. And because cloud providers charge only for the duration that functions run, developers cut costs by not having to pay for idle servers.
-
-But… someone still has to do the work behind the scenes. There are still servers offering FaaS platforms.
-
-In the case of public cloud offerings like Google Cloud Platform, AWS, and Microsoft Azure, these companies manage the servers and charge customers for running those functions. In the case of private clouds or datacenters, where developers don’t have to worry about provisioning or interacting with such servers, there are other teams who do.
-
-The CNCF white paper identifies two groups of professionals that are involved in the serverless movement: developers and providers. We have already talked about developers. But there are also providers that offer serverless platforms; they deal with all the work involved in keeping those servers running.
-
-That’s why many companies, like SUSE, refrain from using the term “serverless” and prefer the term function as a service, because they offer products that run those “serverless” servers. But what kind of functions are these? Is this the ultimate future of app delivery?
-
-### Event-driven computing
-
-Many see serverless computing as an umbrella that offers FaaS among many other potential services. According to CNCF, FaaS provides event-driven computing where functions are triggered by events or HTTP requests. “Developers run and manage application code with functions that are triggered by events or HTTP requests.
Developers deploy small units of code to the FaaS, which are executed as needed as discrete actions, scaling without the need to manage servers or any other underlying infrastructure,” said the white paper.
-
-Does that mean FaaS is the silver bullet that solves all problems for developing and deploying applications? Not really. At least not at the moment. FaaS does solve problems in several use cases, and its scope is expanding. A good use case for FaaS is the set of functions an application needs to run when an event takes place.
-
-Let’s take an example: a user takes a picture with a phone and uploads it to the cloud. Many things happen when the picture is uploaded: it’s scanned (EXIF data is read), a thumbnail is created, the content of the image is analyzed with deep learning/machine learning, and the image’s information is stored in the database. That one event of uploading the picture triggers all those functions. Those functions die once the event is over. That’s what FaaS does. It runs code quickly to perform all those tasks and then disappears.
-
-That’s just one example. Another could be an IoT device where a motion sensor triggers an event that instructs the camera to start recording and send the clip to the designated contact. Your thermostat may trigger the fan when the sensor detects a change in temperature. These are some of the many use cases where function as a service makes more sense than the traditional approach. It also means that not all applications can run as functions as a service, at least at the moment, though that will change as more organizations embrace serverless platforms.
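The photo-upload flow described above can be sketched as a single event handler. This is a hypothetical, provider-neutral sketch, not code from the article: the `(event, context)` signature mirrors common FaaS platforms such as AWS Lambda, and the individual processing steps are illustrative stubs rather than real services.

```python
# Hypothetical FaaS handler for the upload event described in the text.
# Each stub stands in for a real service (EXIF reader, thumbnailer, ML model).

def read_exif(key):      return f"exif({key})"
def make_thumbnail(key): return f"thumb({key})"
def classify_image(key): return f"labels({key})"

def handle_upload(event, context=None):
    """Runs once per upload event; the function instance then goes away."""
    key = event["object_key"]  # e.g., the uploaded picture's storage key
    record = {
        "exif": read_exif(key),
        "thumbnail": make_thumbnail(key),
        "labels": classify_image(key),
    }
    # On a real platform, this record would be written to a database here.
    return {"key": key, "record": record}
```

The point is the shape, not the stubs: the platform invokes `handle_upload` on each event, scales the number of concurrent instances to demand, and bills only for the time the handler runs.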
-
-According to CNCF, serverless computing should be considered if you have these kinds of workloads:
-
-  * Asynchronous, concurrent, easy to parallelize into independent units of work
-
-  * Infrequent or sporadic demand, with large, unpredictable variance in scaling requirements
-
-  * Stateless, ephemeral, without a major need for instantaneous cold start time
-
-  * Highly dynamic in terms of changing business requirements that drive a need for accelerated developer velocity
-
-
-### Why should you care?
-
-Serverless is a very new technology and paradigm. Just as VMs and containers transformed app development and delivery models, FaaS can also bring dramatic changes. We are still in the early days of serverless computing; as the market matures, consensus forms, and new technologies evolve, FaaS may grow beyond the workloads and use cases mentioned here.
-
-What is becoming quite clear is that companies embarking on their cloud-native journey must have serverless computing as part of their strategy. The only way to stay ahead of competitors is by keeping up with the latest technologies and trends.
-
-It’s about time to put the “server” back into serverless.
-
-For more information, check out the CNCF Working Group’s serverless whitepaper [here][2]. And, you can learn more at [KubeCon + CloudNativeCon Europe][4], coming up May 2-4 in Copenhagen, Denmark.
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/4/theres-server-every-serverless-platform - -作者:[SWAPNIL BHARTIYA][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/arnieswap -[1]:https://www.cncf.io/ -[2]:https://github.com/cncf/wg-serverless/blob/master/whitepaper/cncf_serverless_whitepaper_v1.0.pdf -[3]:https://www.cncf.io/blog/2018/02/14/cncf-takes-first-step-towards-serverless-computing/ -[4]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/attend/register/ diff --git a/sources/talk/20180511 Looking at the Lispy side of Perl.md b/sources/talk/20180511 Looking at the Lispy side of Perl.md deleted file mode 100644 index 1fa51b314c..0000000000 --- a/sources/talk/20180511 Looking at the Lispy side of Perl.md +++ /dev/null @@ -1,357 +0,0 @@ -Looking at the Lispy side of Perl -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg) -Some programming languages (e.g., C) have named functions only, whereas others (e.g., Lisp, Java, and Perl) have both named and unnamed functions. A lambda is an unnamed function, with Lisp as the language that popularized the term. Lambdas have various uses, but they are particularly well-suited for data-rich applications. Consider this depiction of a data pipeline, with two processing stages shown: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/data_source.png?itok=OON2cC2R) - -### Lambdas and higher-order functions - -The filter and transform stages can be implemented as higher-order functions—that is, functions that can take a function as an argument. 
Suppose that the depicted pipeline is part of an accounts-receivable application. The filter stage could consist of a function named `filter_data`, whose single argument is another function—for example, a `high_buyers` function that filters out amounts that fall below a threshold. The transform stage might convert amounts in U.S. dollars to equivalent amounts in euros or some other currency, depending on the function plugged in as the argument to the higher-order `transform_data` function. Changing the filter or the transform behavior requires only plugging in a different function argument to the higher order `filter_data` or `transform_data` functions. - -Lambdas serve nicely as arguments to higher-order functions for two reasons. First, lambdas can be crafted on the fly, and even written in place as arguments. Second, lambdas encourage the coding of pure functions, which are functions whose behavior depends solely on the argument(s) passed in; such functions have no side effects and thereby promote safe concurrent programs. - -Perl has a straightforward syntax and semantics for lambdas and higher-order functions, as shown in the following example: - -### A first look at lambdas in Perl - -``` -#!/usr/bin/perl - -use strict; -use warnings; - -## References to lambdas that increment, decrement, and do nothing. -## $_[0] is the argument passed to each lambda. -my $inc = sub { $_[0] + 1 };  ## could use 'return $_[0] + 1' for clarity -my $dec = sub { $_[0] - 1 };  ## ditto -my $nop = sub { $_[0] };      ## ditto - -sub trace { -    my ($val, $func, @rest) = @_; -    print $val, " ", $func, " ", @rest, "\nHit RETURN to continue...\n"; -    <STDIN>; -} - -## Apply an operation to a value. The base case occurs when there are -## no further operations in the list named @rest. 
-sub apply {
-    my ($val, $first, @rest) = @_;
-    trace($val, $first, @rest) if 1;  ## 0 to stop tracing
-
-    return ($val, apply($first->($val), @rest)) if @rest; ## recursive case
-    return ($val, $first->($val));                        ## base case
-}
-
-my $init_val = 0;
-my @ops = (                        ## list of lambda references
-    $inc, $dec, $dec, $inc,
-    $inc, $inc, $inc, $dec,
-    $nop, $dec, $dec, $nop,
-    $nop, $inc, $inc, $nop
-    );
-
-## Execute.
-print join(' ', apply($init_val, @ops)), "\n";
-## Final line of output: 0 1 0 -1 0 1 2 3 2 2 1 0 0 0 1 2 2
-```
-
-The lispy program shown above highlights the basics of Perl lambdas and higher-order functions. Named functions in Perl start with the keyword `sub` followed by a name:
-```
-sub increment { ... }   # named function
-
-```
-
-An unnamed or anonymous function omits the name:
-```
-sub {...}               # lambda, or unnamed function
-
-```
-
-In the lispy example, there are three lambdas, and each has a reference to it for convenience. Here, for review, is the `$inc` reference and the lambda referred to:
-```
-my $inc = sub { $_[0] + 1 };
-
-```
-
-The lambda itself, the code block to the right of the assignment operator `=`, increments its argument `$_[0]` by 1. The lambda’s body is written in Lisp style; that is, without either an explicit `return` or a semicolon after the incrementing expression. In Perl, as in Lisp, the value of the last expression in a function’s body becomes the returned value if there is no explicit `return` statement. In this example, each lambda has only one expression in its body—a simplification that befits the spirit of lambda programming.
-
-The `trace` function in the lispy program helps to clarify how the program works (as I'll illustrate below).
The higher-order function `apply`, a nod to a Lisp function of the same name, takes a numeric value as its first argument and a list of lambda references as its second argument. The `apply` function is called initially, at the bottom of the program, with zero as the first argument and the list named `@ops` as the second argument. This list consists of 16 lambda references from among `$inc` (increment a value), `$dec` (decrement a value), and `$nop` (do nothing). The list could contain the lambdas themselves, but the code is easier to write and to understand with the more concise lambda references.
-
-The logic of the higher-order `apply` function can be clarified as follows:
-
- 1. The argument list passed to `apply` in typical Perl fashion is separated into three pieces:
-```
-my ($val, $first, @rest) = @_; ## break the argument list into three elements
-
-```
-
-The first element `$val` is a numeric value, initially `0`. The second element `$first` is a lambda reference, one of `$inc`, `$dec`, or `$nop`. The third element `@rest` is a list of any remaining lambda references after the first such reference is extracted as `$first`.
-
- 2. If the list `@rest` is not empty after its first element is removed, then `apply` is called recursively. The two arguments to the recursively invoked `apply` are:
-
-  * The value generated by applying lambda operation `$first` to numeric value `$val`. For example, if `$first` is the incrementing lambda to which `$inc` refers, and `$val` is 2, then the new first argument to `apply` would be 3.
-  * The list of remaining lambda references. Eventually, this list becomes empty because each call to `apply` shortens the list by extracting its first element.
-
-
-Here is some output from a sample run of the lispy program, with `%` as the command-line prompt:
-```
-% ./lispy.pl
-
-0 CODE(0x8f6820) CODE(0x8f68c8)CODE(0x8f68c8)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)...
-Hit RETURN to continue...
- -1 CODE(0x8f68c8) CODE(0x8f68c8)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)... -Hit RETURN to continue -``` - -The first output line can be clarified as follows: - - * The `0` is the numeric value passed as an argument in the initial (and thus non-recursive) call to function `apply`. The argument name is `$val` in `apply`. - * The `CODE(0x8f6820)` is a reference to one of the lambdas, in this case the lambda to which `$inc` refers. The second argument is thus the address of some lambda code. The argument name is `$first` in `apply` - * The third piece, the series of `CODE` references, is the list of lambda references beyond the first. The argument name is `@rest` in `apply`. - - - -The second line of output shown above also deserves a look. The numeric value is now `1`, the result of incrementing `0`: the initial lambda is `$inc` and the initial value is `0`. The extracted reference `CODE(0x8f68c8)` is now `$first`, as this reference is the first element in the `@rest` list after `$inc` has been extracted earlier. - -Eventually, the `@rest` list becomes empty, which ends the recursive calls to `apply`. In this case, the function `apply` simply returns a list with two elements: - - 1. The numeric value taken in as an argument (in the sample run, 2). - 2. This argument transformed by the lambda (also 2 because the last lambda reference happens to be `$nop` for do nothing). - - - -The lispy example underscores that Perl supports lambdas without any special fussy syntax: A lambda is just an unnamed code block, perhaps with a reference to it for convenience. Lambdas themselves, or references to them, can be passed straightforwardly as arguments to higher-order functions such as `apply` in the lispy example. Invoking a lambda through a reference is likewise straightforward. 
In the `apply` function, the call is: -``` -$first->($val)    ## $first is a lambda reference, $val a numeric argument passed to the lambda - -``` - -### A richer code example - -The next code example puts a lambda and a higher-order function to practical use. The example implements Conway’s Game of Life, a cellular automaton that can be represented as a matrix of cells. Such a matrix goes through various transformations, each yielding a new generation of cells. The Game of Life is fascinating because even relatively simple initial configurations can lead to quite complex behavior. A quick look at the rules governing cell birth, survival, and death is in order. - -Consider this 5x5 matrix, with a star representing a live cell and a dash representing a dead one: -``` - -----              ## initial configuration - --*-- - --*-- - --*-- - ----- -``` - -The next generation becomes: -``` - -----              ## next generation - ----- - -***- - ---- - ----- -``` - -As life continues, the generations oscillate between these two configurations. - -Here are the rules determining birth, death, and survival for a cell. A given cell has between three neighbors (a corner cell) and eight neighbors (an interior cell): - - * A dead cell with exactly three live neighbors comes to life. - * A live cell with more than three live neighbors dies from over-crowding. - * A live cell with two or three live neighbors survives; hence, a live cell with fewer than two live neighbors dies from loneliness. - - - -In the initial configuration shown above, the top and bottom live cells die because neither has two or three live neighbors. By contrast, the middle live cell in the initial configuration gains two live neighbors, one on either side, in the next generation. - -## Conway’s Game of Life -``` -#!/usr/bin/perl - -### A simple implementation of Conway's game of life. -# Usage: ./gol.pl [input file]  ;; If no file name given, DefaultInfile is used. 
- -use constant Dead  => "-"; -use constant Alive => "*"; -use constant DefaultInfile => 'conway.in'; - -use strict; -use warnings; - -my $dimension = undef; -my @matrix = (); -my $generation = 1; - -sub read_data { -    my $datafile = DefaultInfile; -    $datafile = shift @ARGV if @ARGV; -    die "File $datafile does not exist.\n" if !-f $datafile; -    open(INFILE, "<$datafile"); - -    ## Check 1st line for dimension; -    $dimension = <INFILE>; -    die "1st line of input file $datafile not an integer.\n" if $dimension !~ /\d+/; - -    my $record_count = 0; -    while (<INFILE>) { -        chomp($_); -        last if $record_count++ == $dimension; -        die "$_: bad input record -- incorrect length\n" if length($_) != $dimension; -        my @cells = split(//, $_); -        push @matrix, @cells; -    } -    close(INFILE); -    draw_matrix(); -} - -sub draw_matrix { -    my $n = $dimension * $dimension; -    print "\n\tGeneration $generation\n"; -    for (my $i = 0; $i < $n; $i++) { -        print "\n\t" if ($i % $dimension) == 0; -        print $matrix[$i]; -    } -    print "\n\n"; -    $generation++; -} - -sub has_left_neighbor { -    my ($ind) = @_; -    return ($ind % $dimension) != 0; -} - -sub has_right_neighbor { -    my ($ind) = @_; -    return (($ind + 1) % $dimension) != 0; -} - -sub has_up_neighbor { -    my ($ind) = @_; -    return (int($ind / $dimension)) != 0; -} - -sub has_down_neighbor { -    my ($ind) = @_; -    return (int($ind / $dimension) + 1) != $dimension; -} - -sub has_left_up_neighbor { -    my ($ind) = @_; -    has_left_neighbor($ind) && has_up_neighbor($ind); -} - -sub has_right_up_neighbor { -    my ($ind) = @_; -    has_right_neighbor($ind) && has_up_neighbor($ind); -} - -sub has_left_down_neighbor { -    my ($ind) = @_; -    has_left_neighbor($ind) && has_down_neighbor($ind); -} - -sub has_right_down_neighbor { -    my ($ind) = @_; -    has_right_neighbor($ind) && has_down_neighbor($ind); -} - -sub compute_cell { -    my ($ind) = @_; -    my @neighbors; - -    # 8 possible neighbors -    
push(@neighbors, $ind - 1) if has_left_neighbor($ind); -    push(@neighbors, $ind + 1) if has_right_neighbor($ind); -    push(@neighbors, $ind - $dimension) if has_up_neighbor($ind); -    push(@neighbors, $ind + $dimension) if has_down_neighbor($ind); -    push(@neighbors, $ind - $dimension - 1) if has_left_up_neighbor($ind); -    push(@neighbors, $ind - $dimension + 1) if has_right_up_neighbor($ind); -    push(@neighbors, $ind + $dimension - 1) if has_left_down_neighbor($ind); -    push(@neighbors, $ind + $dimension + 1) if has_right_down_neighbor($ind); - -    my $count = 0; -    foreach my $n (@neighbors) { -        $count++ if $matrix[$n] eq Alive; -    } - -    return Alive if ($matrix[$ind] eq Alive) && (($count == 2) || ($count == 3)); ## survival -    return Alive if ($matrix[$ind] eq Dead)  && ($count == 3);                    ## birth -    return Dead;                                                                  ## death -} - -sub again_or_quit { -    print "RETURN to continue, 'q' to quit.\n"; -    my $flag = <STDIN>; -    chomp($flag); -    return ($flag eq 'q') ? 1 : 0; -} - -sub animate { -    my @new_matrix; -    my $n = $dimension * $dimension - 1; - -    while (1) {                                       ## loop until user signals stop -        @new_matrix = map {compute_cell($_)} (0..$n); ## generate next matrix - -        splice @matrix;                               ## empty current matrix -        push @matrix, @new_matrix;                    ## repopulate matrix -        draw_matrix();                                ## display the current matrix - -        last if again_or_quit();                      ## continue? 
- -        splice @new_matrix;                           ## empty temp matrix -    } -} - -### Execute -read_data();  ## read initial configuration from input file -animate();    ## display and recompute the matrix until user tires -``` - -The gol program (see [Conway’s Game of Life][1]) has almost 140 lines of code, but most of these involve reading the input file, displaying the matrix, and bookkeeping tasks such as determining the number of live neighbors for a given cell. Input files should be configured as follows: -``` - 5 - ----- - --*-- - --*-- - --*-- - ----- -``` - -The first record gives the matrix side, in this case 5 for a 5x5 matrix. The remaining rows are the contents, with stars for live cells and dashes for dead ones. - -The code of primary interest resides in two functions, `animate` and `compute_cell`. The `animate` function constructs the next generation, and this function needs to call `compute_cell` on every cell in order to determine the cell’s new status as either alive or dead. How should the `animate` function be structured? - -The `animate` function has a `while` loop that iterates until the user decides to terminate the program. Within this `while` loop the high-level logic is straightforward: - - 1. Create the next generation by iterating over the matrix cells, calling function `compute_cell` on each cell to determine its new status. At issue is how best to do the iteration. A loop nested inside the `while` loop would do, of course, but nested loops can be clunky. Another way is to use a higher-order function, as clarified shortly. - 2. Replace the current matrix with the new one. - 3. Display the next generation. - 4. Check if the user wants to continue: if so, continue; otherwise, terminate. - - - -Here, for review, is the call to Perl’s higher-order `map` function, with the function’s name again a nod to Lisp. 
This call occurs as the first statement within the `while` loop in `animate`: -``` -while (1) { -    @new_matrix = map {compute_cell($_)} (0..$n); ## generate next matrix -``` - -The `map` function takes two arguments: an unnamed code block (a lambda!), and a list of values passed to this code block one at a time. In this example, the code block calls the `compute_cell` function with one of the matrix indexes, 0 through the matrix size - 1. Although the matrix is displayed as two-dimensional, it is implemented as a one-dimensional list. - -Higher-order functions such as `map` encourage the code brevity for which Perl is famous. My view is that such functions also make code easier to write and to understand, as they dispense with the required but messy details of loops. In any case, lambdas and higher-order functions make up the Lispy side of Perl. - -If you're interested in more detail, I recommend Mark Jason Dominus's book, [Higher-Order Perl][2]. - -------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/looking-lispy-side-perl - -作者:[Marty Kalin][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/mkalindepauledu -[1]:https://trello-attachments.s3.amazonaws.com/575088ec94ca6ac38b49b30e/5ad4daf12f6b6a3ac2318d28/c0700c7379983ddf61f5ab5ab4891f0c/lispyPerl.html#gol (Conway’s Game of Life) -[2]:https://www.elsevier.com/books/higher-order-perl/dominus/978-1-55860-701-9 diff --git a/sources/talk/20180527 Whatever Happened to the Semantic Web.md b/sources/talk/20180527 Whatever Happened to the Semantic Web.md deleted file mode 100644 index 22d48c150a..0000000000 --- a/sources/talk/20180527 Whatever Happened to the Semantic Web.md +++ /dev/null @@ -1,106 +0,0 @@ -Whatever Happened to the 
Semantic Web? -====== -In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article in Scientific American. Berners-Lee, along with two other researchers, Ora Lassila and James Hendler, wanted to give the world a preview of the revolutionary new changes they saw coming to the web. Since its introduction only a decade before, the web had fast become the world’s best means for sharing documents with other people. Now, the authors promised, the web would evolve to encompass not just documents but every kind of data one could imagine. - -They called this new web the Semantic Web. The great promise of the Semantic Web was that it would be readable not just by humans but also by machines. Pages on the web would be meaningful to software programs—they would have semantics—allowing programs to interact with the web the same way that people do. Programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other. According to Berners-Lee, Lassila, and Hendler, a typical day living with the myriad conveniences of the Semantic Web might look something like this: - -> The entertainment system was belting out the Beatles’ “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor’s office: “Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. I’m going to have my agent set up the appointments.” Pete immediately agreed to share the chauffeuring. At the doctor’s office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved the information about Mom’s prescribed treatment within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services. 
It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Pete’s and Lucy’s busy schedules. - -The vision was that the Semantic Web would become a playground for intelligent “agents.” These agents would automate much of the work that the world had only just learned to do on the web. - -![][1] - -For a while, this vision enticed a lot of people. After new technologies such as AJAX led to the rise of what Silicon Valley called Web 2.0, Berners-Lee began referring to the Semantic Web as Web 3.0. Many thought that the Semantic Web was indeed the inevitable next step. A New York Times article published in 2006 quotes a speech Berners-Lee gave at a conference in which he said that the extant web would, twenty years in the future, be seen as only the “embryonic” form of something far greater. A venture capitalist, also quoted in the article, claimed that the Semantic Web would be “profound,” and ultimately “as obvious as the web seems obvious to us today.” - -Of course, the Semantic Web we were promised has yet to be delivered. In 2018, we have “agents” like Siri that can do certain tasks for us. But Siri can only do what it can because engineers at Apple have manually hooked it up to a medley of web services each capable of answering only a narrow category of questions. An important consequence is that, without being large and important enough for Apple to care, you cannot advertise your services directly to Siri from your own website. Unlike the physical therapists that Berners-Lee and his co-authors imagined would be able to hang out their shingles on the web, today we are stuck with giant, centralized repositories of information. Today’s physical therapists must enter information about their practice into Google or Yelp, because those are the only services that the smartphone agents know how to use and the only ones human beings will bother to check. 
The key difference between our current reality and the promised Semantic future is best captured by this throwaway aside in the excerpt above: “…appointment times (supplied by the agents of individual providers through **their** Web sites)…” - -In fact, over the last decade, the web has not only failed to become the Semantic Web but also threatened to recede as an idea altogether. We now hardly ever talk about “the web” and instead talk about “the internet,” which as of 2016 has become such a common term that newspapers no longer capitalize it. (To be fair, they stopped capitalizing “web” too.) Some might still protest that the web and the internet are two different things, but the distinction gets less clear all the time. The web we have today is slowly becoming a glorified app store, just the easiest way among many to download software that communicates with distant servers using closed protocols and schemas, making it functionally identical to the software ecosystem that existed before the web. How did we get here? If the effort to build a Semantic Web had succeeded, would the web have looked different today? Or have there been so many forces working against a decentralized web for so long that the Semantic Web was always going to be stillborn? - -### Semweb Hucksters and Their Metacrap - -To some more practically minded engineers, the Semantic Web was, from the outset, a utopian dream. - -The basic idea behind the Semantic Web was that everyone would use a new set of standards to annotate their webpages with little bits of XML. These little bits of XML would have no effect on the presentation of the webpage, but they could be read by software programs to divine meaning that otherwise would only be available to humans. - -The bits of XML were a way of expressing metadata about the webpage. 
We are all familiar with metadata in the context of a file system: When we look at a file on our computers, we can see when it was created, when it was last updated, and whom it was originally created by. Likewise, webpages on the Semantic Web would be able to tell your browser who authored the page and perhaps even where that person went to school, or where that person is currently employed. In theory, this information would allow Semantic Web browsers to answer queries across a large collection of webpages. In their article for Scientific American, Berners-Lee and his co-authors explain that you could, for example, use the Semantic Web to look up a person you met at a conference whose name you only partially remember. - -Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of “exhaustive, reliable” metadata would be wonderful, he argued, but such a world was “a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities.” Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were overlooking. The essay, titled “Metacrap,” identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks. Even if users were universally diligent and well-intentioned, in order for the metadata to be robust and reliable, users would all have to agree on a single representation for each important concept. Doctorow argued that in some cases a single representation might not be appropriate, desirable, or fair to all users. 
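As a concrete (and entirely invented) illustration of the kind of self-reported annotation at issue, a page author could claim anything at all in a few tags, and nothing on the web verifies the claims:

```
<head>
  <title>An Ordinary Homepage</title>
  <!-- Self-reported, unverified claims about the page and its author -->
  <meta name="author" content="J. Smith">
  <meta name="keywords" content="free, best, cheap flights, celebrity news">
</head>
```

A program reading this markup has no way to tell diligent description from wishful thinking or outright bait.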
- -Indeed, the web had already seen people abusing the HTML `<meta>` tag (introduced at least as early as HTML 4) in an attempt to improve the visibility of their webpages in search results. In a 2004 paper, Ben Munat, then an academic at Evergreen State College, explains how search engines once experimented with using keywords supplied via the `<meta>` tag to index results, but soon discovered that unscrupulous webpage authors were including tags unrelated to the actual content of their webpage. As a result, search engines came to ignore the `<meta>` tag in favor of using complex algorithms to analyze the actual content of a webpage. Munat concludes that a general-purpose Semantic Web is unworkable, and that the focus should be on specific domains within medicine and science. - -Others have also seen the Semantic Web project as tragically flawed, though they have located the flaw elsewhere. Aaron Swartz, the famous programmer and another digital rights activist, wrote in an unfinished book about the Semantic Web published after his death that Doctorow was “attacking a strawman.” Nobody expected that metadata on the web would be thoroughly accurate and reliable, but the Semantic Web, or at least a more realistically scoped version of it, remained possible. The problem, in Swartz’ view, was the “formalizing mindset of mathematics and the institutional structure of academics” that the “semantic Webheads” brought to bear on the challenge. In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize. And the standards that emerged from these “Talmudic debates” were so abstract that few of them ever saw widespread adoption. 
The few that did, like XML, were “uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality.” The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as [has been discussed][2] on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand. - -### Building the Semantic Web - -If the Semantic Web was not an outright impossibility, it was always going to require the contributions of lots of clever people working in concert. - -The long effort to build the Semantic Web has been said to consist of four phases. The first phase, which lasted from 2001 to 2005, was the golden age of Semantic Web activity. Between 2001 and 2005, the W3C issued a slew of new standards laying out the foundational technologies of the Semantic future. - -The most important of these was the Resource Description Framework (RDF). The W3C issued the first version of the RDF standard in 2004, but RDF had been floating around since 1997, when a W3C working group introduced it in a draft specification. RDF was originally conceived of as a tool for modeling metadata and was partly based on earlier attempts by Ramanathan Guha, an Apple engineer, to develop a metadata system for files stored on Apple computers. The Semantic Web working groups at W3C repurposed RDF to represent arbitrary kinds of general knowledge. - -RDF would be the grammar in which Semantic webpages expressed information. The grammar is a simple one: Facts about the world are expressed in RDF as triplets of subject, predicate, and object. Tim Bray, who worked with Ramanathan Guha on an early version of RDF, gives the following example, describing TV shows and movies: - -``` -@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . 
- -@prefix ex: <http://www.example.org/> . - - -ex:vincent_donofrio ex:starred_in ex:law_and_order_ci . - -ex:law_and_order_ci rdf:type ex:tv_show . - -ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix . -``` - -The syntax is not important, especially since RDF can be represented in a number of formats, including XML and JSON. This example is in a format called Turtle, which expresses RDF triplets as straightforward sentences terminated by periods. The three essential sentences, which appear above after the `@prefix` preamble, state three facts: Vincent Donofrio starred in Law and Order, Law and Order is a type of TV Show, and the movie The Thirteenth Floor has a similar plot as The Matrix. (If you don’t know who Vincent Donofrio is and have never seen The Thirteenth Floor, I, too, was watching Nickelodeon and sipping Capri Suns in 1999.) - -Other specifications finalized and drafted during this first era of Semantic Web development describe all the ways in which RDF can be used. RDF in Attributes (RDFa) defines how RDF can be embedded in HTML so that browsers, search engines, and other programs can glean meaning from a webpage. RDF Schema and another standard called OWL allow RDF authors to demarcate the boundary between valid and invalid RDF statements in their RDF documents. RDF Schema and OWL, in other words, are tools for creating what are known as ontologies, explicit specifications of what can and cannot be said within a specific domain. An ontology might include a rule, for example, expressing that no person can be the mother of another person without also being a parent of that person. The hope was that these ontologies would be widely used not only to check the accuracy of RDF found in the wild but also to make inferences about omitted information. - -In 2006, Tim Berners-Lee posted a short article in which he argued that the existing work on Semantic Web standards needed to be supplemented by a concerted effort to make semantic data available on the web. 
Furthermore, once on the web, it was important that semantic data link to other kinds of semantic data, ensuring the rise of a data-based web as interconnected as the existing web. Berners-Lee used the term “linked data” to describe this ideal scenario. Though “linked data” was in one sense just a recapitulation of the original vision for the Semantic Web, it became a term that people could rally around and thus amounted to a rebranding of the Semantic Web project. - -Berners-Lee’s article launched the second phase of the Semantic Web’s development, where the focus shifted from setting standards and building toy examples to creating and popularizing large RDF datasets. Perhaps the most successful of these datasets was [DBpedia][3], a giant repository of RDF triplets extracted from Wikipedia articles. DBpedia, which made heavy use of the Semantic Web standards that had been developed in the first half of the 2000s, was a standout example of what could be accomplished using the W3C’s new formats. Today DBpedia describes 4.58 million entities and is used by organizations like the NY Times, BBC, and IBM, which employed DBpedia as a knowledge source for IBM Watson, the Jeopardy-winning artificial intelligence system. - -![][4] - -The third phase of the Semantic Web’s development involved adapting the W3C’s standards to fit the actual practices and preferences of web developers. By 2008, JSON had begun its meteoric rise to popularity. Whereas XML came packaged with a bunch of associated technologies of indeterminate purpose (XSLT, XPath, XQuery, XLink), JSON was just JSON. It was less verbose and more readable. Manu Sporny, an entrepreneur and member of the W3C, had already started using JSON at his company and wanted to find an easy way for RDFa and JSON to work together. The result would be JSON-LD, which in essence was RDF reimagined for a world that had chosen JSON over XML. Sporny, together with his CTO, Dave Longley, issued a draft specification of JSON-LD in 2010. 
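To give a rough, hypothetical sense of the idea (this rendering is mine, not from the draft specification), the Vincent Donofrio triplet from the Turtle example earlier might look something like this in JSON-LD:

```
{
  "@context": { "ex": "http://www.example.org/" },
  "@id": "ex:vincent_donofrio",
  "ex:starred_in": { "@id": "ex:law_and_order_ci" }
}
```

The `@context` maps a short prefix to a URI, so the document remains ordinary JSON that any web developer could produce while still encoding a subject-predicate-object triplet.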
For the next few years, JSON-LD and an updated RDF specification would be the primary focus of Semantic Web work at the W3C. JSON-LD could be used on its own or it could be embedded within a `