Merge remote-tracking branch 'LCTT/master'

Xingyu Wang 2019-09-17 17:45:13 +08:00
commit 97d40bc315
28 changed files with 5288 additions and 47 deletions


@@ -1,37 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11352-1.html)
[#]: subject: (How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script)
[#]: via: (https://www.2daygeek.com/linux-get-average-cpu-memory-utilization-from-sar-data-report/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How to Get CPU and Memory Usage from SAR Reports Using a Bash Script
======
Most Linux administrators monitor system performance with [SAR reports][1], because SAR collects a week of performance data. However, you can easily extend this to four weeks by changing the `/etc/sysconfig/sysstat` file. Likewise, the period can be extended beyond a month; if it exceeds 28 days, the log files are placed in multiple directories, one per month.

To extend the coverage period to 28 days, make the following change to the `/etc/sysconfig/sysstat` file.

Edit the `sysstat` file and change `HISTORY=7` to `HISTORY=28`.
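After the edit, the relevant line of `/etc/sysconfig/sysstat` should read as follows (showing just this one setting; the rest of the file stays untouched):

```
HISTORY=28
```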
In this article, we have included three Bash scripts that help you easily view the averages from each data file in one place.

We have added many useful shell scripts in the past. If you want to check those out, go to the link below.

* [How to automate day-to-day activities using shell scripts][2]

These scripts are simple and straightforward. For testing purposes, we have included only two performance metrics, namely CPU and memory. You can modify the script to include other performance metrics to suit your needs.
### Script 1: Bash script to get the average CPU utilization from SAR reports
@@ -49,15 +40,10 @@ echo "|Average: CPU %user %nice %system %iowait %steal
echo "+----------------------------------------------------------------------------------+"
for file in `ls -tr /var/log/sa/sa* | grep -v sar`
do
    dat=`sar -f $file | head -n 1 | awk '{print $4}'`
    echo -n $dat
    sar -f $file | grep -i Average | sed "s/Average://"
done
echo "+----------------------------------------------------------------------------------+"
@@ -105,15 +91,10 @@ echo "|Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit
echo "+-------------------------------------------------------------------------------------------------------------------+"
for file in `ls -tr /var/log/sa/sa* | grep -v sar`
do
    dat=`sar -f $file | head -n 1 | awk '{print $4}'`
    echo -n $dat
    sar -r -f $file | grep -i Average | sed "s/Average://"
done
echo "+-------------------------------------------------------------------------------------------------------------------+"
@@ -157,19 +138,12 @@ echo "+-------------------------------------------------------------------------
#!/bin/bash
for file in `ls -tr /var/log/sa/sa* | grep -v sar`
do
    sar -f $file | head -n 1 | awk '{print $4}'
    echo "-----------"
    sar -u -f $file | awk '/Average:/{printf("CPU Average: %.2f%\n"), 100 - $8}'
    sar -r -f $file | awk '/Average:/{printf("Memory Average: %.2f%\n"),(($3-$5-$6)/($2+$3)) * 100 }'
    printf "\n"
done
```
@@ -223,7 +197,7 @@ via: https://www.2daygeek.com/linux-get-average-cpu-memory-utilization-from-sar-
Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)


@@ -0,0 +1,278 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Rise and Demise of RSS (Old Version))
[#]: via: (https://twobithistory.org/2018/09/16/the-rise-and-demise-of-rss.html)
[#]: author: (Two-Bit History https://twobithistory.org)
The Rise and Demise of RSS (Old Version)
======
_A newer version of this post was published on [December 18th, 2018][1]._
There are two stories here. The first is a story about a vision of the web's future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.
In the late 1990s, in the go-go years between Netscape's IPO and the Dot-com crash, everyone could see that the web was going to be an even bigger deal than it already was, even if they didn't know exactly how it was going to get there. One theory was that the web was about to be revolutionized by syndication. The web, originally built to enable a simple transaction between two parties—a client fetching a document from a single host server—would be broken open by new standards that could be used to repackage and redistribute entire websites through a variety of channels. Kevin Werbach, writing for _Release 1.0_, a newsletter influential among investors in the 1990s, predicted that syndication "would evolve into the core model for the Internet economy, allowing businesses and individuals to retain control over their online personae while enjoying the benefits of massive scale and scope."[1][2] He invited his readers to imagine a future in which fencing aficionados, rather than going directly to an "online sporting goods site" or "fencing equipment retailer," could buy a new épée directly through e-commerce widgets embedded into their favorite website about fencing.[2][3] Just like in the television world, where big networks syndicate their shows to smaller local stations, syndication on the web would allow businesses and publications to reach consumers through a multitude of intermediary sites. This would mean, as a corollary, that consumers would gain significant control over where and how they interacted with any given business or publication on the web.
RSS was one of the standards that promised to deliver this syndicated future. To Werbach, RSS was "the leading example of a lightweight syndication protocol."[3][4] Another contemporaneous article called RSS the first protocol to realize the potential of XML.[4][5] It was going to be a way for both users and content aggregators to create their own customized channels out of everything the web had to offer. And yet, two decades later, RSS [appears to be a dying technology][6], now used chiefly by podcasters and programmers with tech blogs. Moreover, among that latter group, RSS is perhaps used as much for its political symbolism as its actual utility. Though of course some people really do have RSS readers, stubbornly adding an RSS feed to your blog, even in 2018, is a reactionary statement. That little tangerine bubble has become a wistful symbol of defiance against a centralized web increasingly controlled by a handful of corporations, a web that hardly resembles the syndicated web of Werbach's imagining.
The future once looked so bright for RSS. What happened? Was its downfall inevitable, or was it precipitated by the bitter infighting that thwarted the development of a single RSS standard?
### Muddied Water
RSS was invented twice. This meant it never had an obvious owner, a state of affairs that spawned endless debate and acrimony. But it also suggests that RSS was an important idea whose time had come.
In 1998, Netscape was struggling to envision a future for itself. Its flagship product, the Netscape Navigator web browser—once preferred by 80% of web users—was quickly losing ground to Internet Explorer. So Netscape decided to compete in a new arena. In May, a team was brought together to start work on what was known internally as “Project 60.”[5][7] Two months later, Netscape announced “My Netscape,” a web portal that would fight it out with other portals like Yahoo, MSN, and Excite.
The following year, in March, Netscape announced an addition to the My Netscape portal called the “My Netscape Network.” My Netscape users could now customize their My Netscape page so that it contained “channels” featuring the most recent headlines from sites around the web. As long as your favorite website published a special file in a format dictated by Netscape, you could add that website to your My Netscape page, typically by clicking an “Add Channel” button that participating websites were supposed to add to their interfaces. A little box containing a list of linked headlines would then appear.
![A My Netscape Network Channel][8]
The special file that participating websites had to publish was an RSS file. In the My Netscape Network announcement, Netscape explained that RSS stood for "RDF Site Summary."[6][9] This was somewhat of a misnomer. RDF, or the Resource Description Framework, is basically a grammar for describing certain properties of arbitrary resources. (See [my article about the Semantic Web][10] if that sounds really exciting to you.) In 1999, a draft specification for RDF was being considered by the W3C. Though RSS was supposed to be based on RDF, the example RSS document Netscape actually released didn't use any RDF tags at all, even if it declared the RDF XML namespace. In a document that accompanied the Netscape RSS specification, Dan Libby, one of the specification's authors, explained that "in this release of MNN, Netscape has intentionally limited the complexity of the RSS format."[7][11] The specification was given the 0.90 version number, the idea being that subsequent versions would bring RSS more in line with the W3C's XML specification and the evolving draft of the RDF specification.
RSS had been cooked up by Libby and another Netscape employee, Ramanathan Guha. Guha previously worked for Apple, where he came up with something called the Meta Content Framework. MCF was a format for representing metadata about anything from web pages to local files. Guha demonstrated its power by developing an application called [HotSauce][12] that visualized relationships between files as a network of nodes suspended in 3D space. After leaving Apple for Netscape, Guha worked with a Netscape consultant named Tim Bray to produce an XML-based version of MCF, which in turn became the foundation for the W3C's RDF draft.[8][13] It's no surprise, then, that Guha and Libby were keen to incorporate RDF into RSS. But Libby later wrote that the original vision for an RDF-based RSS was pared back because of time constraints and the perception that RDF was "too complex for 'the average user.'"[9][14]
While Netscape was trying to win eyeballs in what became known as the “portal wars,” elsewhere on the web a new phenomenon known as “weblogging” was being pioneered.[10][15] One of these pioneers was Dave Winer, CEO of a company called UserLand Software, which developed early content management systems that made blogging accessible to people without deep technical fluency. Winer ran his own blog, [Scripting News][16], which today is one of the oldest blogs on the internet. More than a year before Netscape announced My Netscape Network, on December 15th, 1997, Winer published a post announcing that the blog would now be available in XML as well as HTML.[11][17]
Dave Winer's XML format became known as the Scripting News format. It was supposedly similar to Microsoft's Channel Definition Format (a "push technology" standard submitted to the W3C in March, 1997), but I haven't been able to find a file in the original format to verify that claim.[12][18] Like Netscape's RSS, it structured the content of Winer's blog so that it could be understood by other software applications. When Netscape released RSS 0.90, Winer and UserLand Software began to support both formats. But Winer believed that Netscape's format was "woefully inadequate" and "missing the key thing web writers and readers need."[13][19] It could only represent a list of links, whereas the Scripting News format could represent a series of paragraphs, each containing one or more links.
In June, 1999, two months after Netscape's My Netscape Network announcement, Winer introduced a new version of the Scripting News format, called ScriptingNews 2.0b1. Winer claimed that he decided to move ahead with his own format only after trying but failing to get anyone at Netscape to care about RSS 0.90's deficiencies.[14][20] The new version of the Scripting News format added several items to the `<header>` element that brought the Scripting News format to parity with RSS. But the two formats continued to differ in that the Scripting News format, which Winer nicknamed the "fat" syndication format, could include entire paragraphs and not just links.
Netscape got around to releasing RSS 0.91 the very next month. The updated specification was a major about-face. RSS no longer stood for “RDF Site Summary”; it now stood for “Rich Site Summary.” All the RDF—and there was almost none anyway—was stripped out. Many of the Scripting News tags were incorporated. In the text of the new specification, Libby explained:
> RDF references removed. RSS was originally conceived as a metadata format providing a summary of a website. Two things have become clear: the first is that providers want more of a syndication format than a metadata format. The structure of an RDF file is very precise and must conform to the RDF data model in order to be valid. This is not easily human-understandable and can make it difficult to create useful RDF files. The second is that few tools are available for RDF generation, validation and processing. For these reasons, we have decided to go with a standard XML approach.[15][21]
Winer was enormously pleased with RSS 0.91, calling it “even better than I thought it would be.”[16][22] UserLand Software adopted it as a replacement for the existing ScriptingNews 2.0b1 format. For a while, it seemed that RSS finally had a single authoritative specification.
### The Great Fork
A year later, the RSS 0.91 specification had become woefully inadequate. There were all sorts of things people were trying to do with RSS that the specification did not address. There were other parts of the specification that seemed unnecessarily constraining—each RSS channel could only contain a maximum of 15 items, for example.
By that point, RSS had been adopted by several more organizations. Other than Netscape, which seemed to have lost interest after RSS 0.91, the big players were Dave Winer's UserLand Software; O'Reilly Net, which ran an RSS aggregator called Meerkat; and Moreover.com, which also ran an RSS aggregator focused on news.[17][23] Via mailing list, representatives from these organizations and others regularly discussed how to improve on RSS 0.91. But there were deep disagreements about what those improvements should look like.
The mailing list in which most of the discussion occurred was called the Syndication mailing list. [An archive of the Syndication mailing list][24] is still available. It is an amazing historical resource. It provides a moment-by-moment account of how those deep disagreements eventually led to a political rupture of the RSS community.
On one side of the coming rupture was Winer. Winer was impatient to evolve RSS, but he wanted to change it only in relatively conservative ways. In June, 2000, he published his own RSS 0.91 specification on the UserLand website, meant to be a starting point for further development of RSS. It made no significant changes to the 0.91 specification published by Netscape. Winer claimed in a blog post that accompanied his specification that it was only a “cleanup” documenting how RSS was actually being used in the wild, which was needed because the Netscape specification was no longer being maintained.[18][25] In the same post, he argued that RSS had succeeded so far because it was simple, and that by adding namespaces or RDF back to the format—some had suggested this be done in the Syndication mailing list—it “would become vastly more complex, and IMHO, at the content provider level, would buy us almost nothing for the added complexity.” In a message to the Syndication mailing list sent around the same time, Winer suggested that these issues were important enough that they might lead him to create a fork:
> I'm still pondering how to move RSS forward. I definitely want ICE-like stuff in RSS2, publish and subscribe is at the top of my list, but I am going to fight tooth and nail for simplicity. I love optional elements. I don't want to go down the namespaces and schema road, or try to make it a dialect of RDF. I understand other people want to do this, and therefore I guess we're going to get a fork. I have my own opinion about where the other fork will lead, but I'll keep those to myself for the moment at least.[19][26]
Arrayed against Winer were several other people, including Rael Dornfest of O'Reilly, Ian Davis (responsible for a search startup called Calaba), and a precocious, 14-year-old Aaron Swartz, who all thought that RSS needed namespaces in order to accommodate the many different things everyone wanted to do with it. On another mailing list hosted by O'Reilly, Davis proposed a namespace-based module system, writing that such a system would "make RSS as extensible as we like rather than packing in new features that over-complicate the spec."[20][27] The "namespace camp" believed that RSS would soon be used for much more than the syndication of blog posts, so namespaces, rather than being a complication, were the only way to keep RSS from becoming unmanageable as it supported more and more use cases.
At the root of this disagreement about namespaces was a deeper disagreement about what RSS was even for. Winer had invented his Scripting News format to syndicate the posts he wrote for his blog. Guha and Libby at Netscape had designed RSS and called it "RDF Site Summary" because in their minds it was a way of recreating a site in miniature within Netscape's online portal. Davis, writing to the Syndication mailing list, explained his view that RSS was "originally conceived as a way of building mini sitemaps," and that now he and others wanted to expand RSS "to encompass more types of information than simple news headlines and to cater for the new uses of RSS that have emerged over the last 12 months."[21][28] Winer wrote a prickly reply, stating that his Scripting News format was in fact the original RSS and that it had been meant for a different purpose. Given that the people most involved in the development of RSS disagreed about why RSS had even been created, a fork seems to have been inevitable.
The fork happened after Dornfest announced a proposed RSS 1.0 specification and formed the RSS-DEV Working Group—which would include Davis, Swartz, and several others but not Winer—to get it ready for publication. In the proposed specification, RSS once again stood for "RDF Site Summary," because RDF had been added back in to represent metadata properties of certain RSS elements. The specification acknowledged Winer by name, giving him credit for popularizing RSS through his "evangelism."[22][29] But it also argued that just adding more elements to RSS without providing for extensibility with a module system—that is, what Winer was suggesting—"sacrifices scalability." The specification went on to define a module system for RSS based on XML namespaces.
Winer was furious that the RSS-DEV Working Group had arrogated the "RSS 1.0" name for themselves.[23][30] In another mailing list about decentralization, he described what the RSS-DEV Working Group had done as theft.[24][31] Other members of the Syndication mailing list also felt that the RSS-DEV Working Group should not have used the name "RSS" without unanimous agreement from the community on how to move RSS forward. But the Working Group stuck with the name. Dan Brickley, another member of the RSS-DEV Working Group, defended this decision by arguing that "RSS 1.0 as proposed is solidly grounded in the original RSS vision, which itself had a long heritage going back to MCF (an RDF precursor) and related specs (CDF etc)."[25][32] He essentially felt that the RSS 1.0 effort had a better claim to the RSS name than Winer did, since RDF had originally been a part of RSS. The RSS-DEV Working Group published a final version of their specification in December. That same month, Winer published his own improvement to RSS 0.91, which he called RSS 0.92, on UserLand's website. RSS 0.92 made several small optional improvements to RSS, among which was the addition of the `<enclosure>` tag soon used by podcasters everywhere. RSS had officially forked.
It's not clear to me why a better effort was not made to involve Winer in the RSS-DEV Working Group. He was a prominent contributor to the Syndication mailing list and obviously responsible for much of RSS's popularity, as the members of the Working Group themselves acknowledged. But Tim O'Reilly, founder and CEO of O'Reilly, explained in a UserLand discussion group that Winer more or less refused to participate:
> A group of people involved in RSS got together to start thinking about its future evolution. Dave was part of the group. When the consensus of the group turned in a direction he didn't like, Dave stopped participating, and characterized it as a plot by O'Reilly to take over RSS from him, despite the fact that Rael Dornfest of O'Reilly was only one of about a dozen authors of the proposed RSS 1.0 spec, and that many of those who were part of its development had at least as long a history with RSS as Dave had.[26][33]
To this, Winer said:
> I met with Dale [Dougherty] two weeks before the announcement, and he didn't say anything about it being called RSS 1.0. I spoke on the phone with Rael the Friday before it was announced, again he didn't say that they were calling it RSS 1.0. The first I found out about it was when it was publicly announced.
>
> Let me ask you a straight question. If it turns out that the plan to call the new spec “RSS 1.0” was done in private, without any heads-up or consultation, or for a chance for the Syndication list members to agree or disagree, not just me, what are you going to do?
>
> UserLand did a lot of work to create and popularize and support RSS. We walked away from that, and let your guys have the name. That's the top level. If I want to do any further work in Web syndication, I have to use a different name. Why and how did that happen Tim?[27][34]
I have not been able to find a discussion in the Syndication mailing list about using the RSS 1.0 name prior to the announcement of the RSS 1.0 proposal.
RSS would fork again in 2003, when several developers frustrated with the bickering in the RSS community sought to create an entirely new format. These developers created Atom, a format that did away with RDF but embraced XML namespaces. Atom would eventually be specified by [a proposed IETF standard][35]. After the introduction of Atom, there were three competing versions of RSS: Winer's RSS 0.92 (updated to RSS 2.0 in 2002 and renamed "Really Simple Syndication"), the RSS-DEV Working Group's RSS 1.0, and Atom.
### Decline
The proliferation of competing RSS specifications may have hampered RSS in other ways that I'll discuss shortly. But it did not stop RSS from becoming enormously popular during the 2000s. By 2004, the New York Times had started offering its headlines in RSS and had written an article explaining to the layperson what RSS was and how to use it.[28][36] Google Reader, an RSS aggregator ultimately used by millions, was launched in 2005. By 2013, RSS seemed popular enough that the New York Times, in its obituary for Aaron Swartz, called the technology "ubiquitous."[29][37] For a while, before a third of the planet had signed up for Facebook, RSS was simply how many people stayed abreast of news on the internet.
The New York Times published Swartz's obituary in January, 2013. By that point, though, RSS had actually turned a corner and was well on its way to becoming an obscure technology. Google Reader was shut down in July, 2013, ostensibly because user numbers had been falling "over the years."[30][38] This prompted several articles from various outlets declaring that RSS was dead. But people had been declaring that RSS was dead for years, even before Google Reader's shuttering. Steve Gillmor, writing for TechCrunch in May, 2009, advised that "it's time to get completely off RSS and switch to Twitter" because "RSS just doesn't cut it anymore."[31][39] He pointed out that Twitter was basically a better RSS feed, since it could show you what people thought about an article in addition to the article itself. It allowed you to follow people and not just channels. Gillmor told his readers that it was time to let RSS recede into the background. He ended his article with a verse from Bob Dylan's "Forever Young."
Today, RSS is not dead. But neither is it anywhere near as popular as it once was. Lots of people have offered explanations for why RSS lost its broad appeal. Perhaps the most persuasive explanation is exactly the one offered by Gillmor in 2009. Social networks, just like RSS, provide a feed featuring all the latest news on the internet. Social networks took over from RSS because they were simply better feeds. They also provide more benefits to the companies that own them. Some people have accused Google, for example, of shutting down Google Reader in order to encourage people to use Google+. Google might have been able to monetize Google+ in a way that it could never have monetized Google Reader. Marco Arment, the creator of Instapaper, wrote on his blog in 2013:
> Google Reader is just the latest casualty of the war that Facebook started, seemingly accidentally: the battle to own everything. While Google did technically “own” Reader and could make some use of the huge amount of news and attention data flowing through it, it conflicted with their far more important Google+ strategy: they need everyone reading and sharing everything through Google+ so they can compete with Facebook for ad-targeting data, ad dollars, growth, and relevance.[32][40]
So both users and technology companies realized that they got more out of using social networks than they did out of RSS.
Another theory is that RSS was always too geeky for regular people. Even the New York Times, which seems to have been eager to adopt RSS and promote it to its audience, complained in 2006 that RSS is a "not particularly user friendly" acronym coined by "computer geeks."[33][41] Before the RSS icon was designed in 2004, websites like the New York Times linked to their RSS feeds using little orange boxes labeled "XML," which can only have been intimidating.[34][42] The label was perfectly accurate though, because back then clicking the link would take a hapless user to a page full of XML. [This great tweet][43] captures the essence of this explanation for RSS's demise. Regular people never felt comfortable using RSS; it hadn't really been designed as a consumer-facing technology and involved too many hurdles; people jumped ship as soon as something better came along.
RSS might have been able to overcome some of these limitations if it had been further developed. Maybe RSS could have been extended somehow so that friends subscribed to the same channel could syndicate their thoughts about an article to each other. But whereas a company like Facebook was able to "move fast and break things," the RSS developer community was stuck trying to achieve consensus. The Great RSS Fork only demonstrates how difficult it was to do that. So if we are asking ourselves why RSS is no longer popular, a good first-order explanation is that social networks supplanted it. If we ask ourselves why social networks were able to supplant it, then the answer may be that the people trying to make RSS succeed faced a problem much harder than, say, building Facebook. As Dornfest wrote to the Syndication mailing list at one point, "currently it's the politics far more than the serialization that's far from simple."[35][44]
So today we are left with centralized silos of information. In a way, we _do_ have the syndicated internet that Kevin Werbach foresaw in 1999. After all, _The Onion_ is a publication that relies on syndication through Facebook and Twitter the same way that Seinfeld relied on syndication to rake in millions after the end of its original run. But syndication on the web only happens through one of a very small number of channels, meaning that none of us "retain control over our online personae" the way that Werbach thought we would. One reason this happened is garden-variety corporate rapaciousness—RSS, an open format, didn't give technology companies the control over data and eyeballs that they needed to sell ads, so they did not support it. But the more mundane reason is that centralized silos are just easier to design than common standards. Consensus is difficult to achieve and it takes time, but without consensus spurned developers will go off and create competing standards. The lesson here may be that if we want to see a better, more open web, we have to get better at not screwing each other over.
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][45] on Twitter or subscribe to the [RSS feed][46] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> New post: This week we're traveling back in time in our DeLorean to see what it was like learning to program on early home computers.<https://t.co/qDrwqgIuuy>
>
> — TwoBitHistory (@TwoBitHistory) [September 2, 2018][47]
1. Kevin Werbach, “The Web Goes into Syndication,” Release 1.0, July 22, 1999, 1, accessed September 14, 2018, <http://cdn.oreillystatic.com/radar/r1/07-99.pdf>. [↩︎][48]
2. ibid. [↩︎][49]
3. Werbach, 8. [↩︎][50]
4. Peter Wiggin, “RSS Delivers the XML Promise,” Web Review, October 29, 1999, accessed September 14, 2018, <https://people.apache.org/~jim/NewArchitect/webrevu/1999/10_29/webauthors/10_29_99_2a.html>. [↩︎][51]
5. Ben Hammersley, RSS and Atom (O'Reilly), 8, accessed September 14, 2018, <https://books.google.com/books?id=kwJVAgAAQBAJ>. [↩︎][52]
6. “RSS 0.90 Specification,” RSS Advisory Board, accessed September 14, 2018, <http://www.rssboard.org/rss-0-9-0>. [↩︎][53]
7. “My Netscape Network Future Directions,” RSS Advisory Board, accessed September 14, 2018, <http://www.rssboard.org/mnn-futures>. [↩︎][54]
8. Tim Bray, “The RDF.net Challenge,” Ongoing by Tim Bray, May 21, 2003, accessed September 14, 2018, <https://www.tbray.org/ongoing/When/200x/2003/05/21/RDFNet>. [↩︎][55]
9. Dan Libby, “RSS: Introducing Myself,” August 24, 2000, RSS-DEV Mailing List, accessed September 14, 2018, <https://groups.yahoo.com/neo/groups/rss-dev/conversations/topics/239>. [↩︎][56]
10. Alexandra Krasne, “Browser Wars May Become Portal Wars,” CNN, accessed September 14, 2018, <http://www.cnn.com/TECH/computing/9910/04/portal.war.idg/index.html>. [↩︎][57]
11. Dave Winer, “Scripting News in XML,” Scripting News, December 15, 1997, accessed September 14, 2018, <http://scripting.com/davenet/1997/12/15/scriptingNewsInXML.html>. [↩︎][58]
12. Joseph Reagle, “RSS History,” 2004, accessed September 14, 2018, <https://reagle.org/joseph/2003/rss-history.html>. [↩︎][59]
13. Dave Winer, “A Faceoff with Netscape,” Scripting News, June 16, 1999, accessed September 14, 2018, <http://scripting.com/davenet/1999/06/16/aFaceOffWithNetscape.html>. [↩︎][60]
14. ibid. [↩︎][61]
15. Dan Libby, “RSS 0.91 Specification (Netscape),” RSS Advisory Board, accessed September 14, 2018, <http://www.rssboard.org/rss-0-9-1-netscape>. [↩︎][62]
16. Dave Winer, “Scripting News: 7/28/1999,” Scripting News, July 28, 1999, accessed September 14, 2018, <http://scripting.com/1999/07/28.html>. [↩︎][63]
17. Oliver Willis, “RSS Aggregators?” June 19, 2000, Syndication Mailing List, accessed September 14, 2018, <https://groups.yahoo.com/neo/groups/syndication/conversations/topics/173>. [↩︎][64]
18. Dave Winer, “Scripting News: 07/07/2000,” Scripting News, July 07, 2000, accessed September 14, 2018, <http://essaysfromexodus.scripting.com/backissues/2000/06/07/#rss>. [↩︎][65]
19. Dave Winer, “Re: RSS 0.91 Restarted,” June 9, 2000, Syndication Mailing List, accessed September 14, 2018, <https://groups.yahoo.com/neo/groups/syndication/conversations/topics/132>. [↩︎][66]
20. Leigh Dodds, “RSS Modularization,” XML.com, July 5, 2000, accessed September 14, 2018, <http://www.xml.com/pub/a/2000/07/05/deviant/rss.html>. [↩︎][67]
21. Ian Davis, “Re: [syndication] RSS Modularization Demonstration,” June 28, 2000, Syndication Mailing List, accessed September 14, 2018, <https://groups.yahoo.com/neo/groups/syndication/conversations/topics/188>. [↩︎][68]
22. “RDF Site Summary (RSS) 1.0,” December 09, 2000, accessed September 14, 2018, <http://web.resource.org/rss/1.0/spec>. [↩︎][69]
23. Dave Winer, “Re: [syndication] Re: Thoughts, Questions, and Issues,” August 16, 2000, Syndication Mailing List, accessed September 14, 2018, <https://groups.yahoo.com/neo/groups/syndication/conversations/topics/410>. [↩︎][70]
24. Mark Pilgrim, “History of the RSS Fork,” Dive into Mark, September 5, 2002, accessed September 14, 2018, <http://www.diveintomark.link/2002/history-of-the-rss-fork>. [↩︎][71]
25. Dan Brickley, “RSS-Classic, RSS 1.0 and a Historical Debt,” November 7, 2000, Syndication Mailing List, accessed September 14, 2018, <https://groups.yahoo.com/neo/groups/rss-dev/conversations/topics/1136>. [↩︎][72]
26. Tim O'Reilly, "Re: Asking Tim," UserLand, September 20, 2000, accessed September 14, 2018, <http://static.userland.com/userLandDiscussArchive/msg021537.html>. [↩︎][73]
27. Dave Winer, “Re: Asking Tim,” UserLand, September 20, 2000, accessed September 14, 2018, <http://static.userland.com/userLandDiscussArchive/msg021560.html>. [↩︎][74]
28. John Quain, “BASICS; Fine-Tuning Your Filter for Online Information,” The New York Times, 2004, accessed September 14, 2018, <https://www.nytimes.com/2004/06/03/technology/basics-fine-tuning-your-filter-for-online-information.html>. [↩︎][75]
29. John Schwartz, “Aaron Swartz, Internet Activist, Dies at 26,” The New York Times, January 12, 2013, accessed September 14, 2018, <https://www.nytimes.com/2013/01/13/technology/aaron-swartz-internet-activist-dies-at-26.html>. [↩︎][76]
30. “A Second Spring of Cleaning,” Official Google Blog, March 13, 2013, accessed September 14, 2018, <https://googleblog.blogspot.com/2013/03/a-second-spring-of-cleaning.html>. [↩︎][77]
31. Steve Gillmor, “Rest in Peace, RSS,” TechCrunch, May 5, 2009, accessed September 14, 2018, <https://techcrunch.com/2009/05/05/rest-in-peace-rss/>. [↩︎][78]
32. Marco Arment, “Lockdown,” Marco.org, July 3, 2013, accessed September 14, 2018, <https://marco.org/2013/07/03/lockdown>. [↩︎][79]
33. Bob Tedeschi, "There's a Popular New Code for Deals: RSS," The New York Times, January 29, 2006, accessed September 14, 2018, <https://www.nytimes.com/2006/01/29/travel/theres-a-popular-new-code-for-deals-rss.html>. [↩︎][80]
34. “NYTimes.com RSS Feeds,” The New York Times, accessed September 14, 2018, <https://web.archive.org/web/20050326065348/www.nytimes.com/services/xml/rss/index.html>. [↩︎][81]
35. Rael Dornfest, “RE: Re: [syndication] RE: RFC: Clearing Confusion for RSS, Agreement for Forward Motion,” May 31, 2001, Syndication Mailing List, accessed September 14, 2018, <https://groups.yahoo.com/neo/groups/syndication/conversations/messages/1717>. [↩︎][82]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2018/09/16/the-rise-and-demise-of-rss.html
Author: [Two-Bit History][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/2018/12/18/rss.html
[2]: tmp.F599d8dnXW#fn:3
[3]: tmp.F599d8dnXW#fn:4
[4]: tmp.F599d8dnXW#fn:5
[5]: tmp.F599d8dnXW#fn:6
[6]: https://trends.google.com/trends/explore?date=all&geo=US&q=rss
[7]: tmp.F599d8dnXW#fn:7
[8]: https://twobithistory.org/images/mnn-channel.gif
[9]: tmp.F599d8dnXW#fn:8
[10]: https://twobithistory.org/2018/05/27/semantic-web.html
[11]: tmp.F599d8dnXW#fn:9
[12]: http://web.archive.org/web/19970703020212/http://mcf.research.apple.com:80/hs/screen_shot.html
[13]: tmp.F599d8dnXW#fn:10
[14]: tmp.F599d8dnXW#fn:11
[15]: tmp.F599d8dnXW#fn:12
[16]: http://scripting.com/
[17]: tmp.F599d8dnXW#fn:13
[18]: tmp.F599d8dnXW#fn:14
[19]: tmp.F599d8dnXW#fn:15
[20]: tmp.F599d8dnXW#fn:16
[21]: tmp.F599d8dnXW#fn:17
[22]: tmp.F599d8dnXW#fn:18
[23]: tmp.F599d8dnXW#fn:19
[24]: https://groups.yahoo.com/neo/groups/syndication/info
[25]: tmp.F599d8dnXW#fn:20
[26]: tmp.F599d8dnXW#fn:21
[27]: tmp.F599d8dnXW#fn:22
[28]: tmp.F599d8dnXW#fn:23
[29]: tmp.F599d8dnXW#fn:24
[30]: tmp.F599d8dnXW#fn:25
[31]: tmp.F599d8dnXW#fn:26
[32]: tmp.F599d8dnXW#fn:27
[33]: tmp.F599d8dnXW#fn:28
[34]: tmp.F599d8dnXW#fn:29
[35]: https://tools.ietf.org/html/rfc4287
[36]: tmp.F599d8dnXW#fn:30
[37]: tmp.F599d8dnXW#fn:31
[38]: tmp.F599d8dnXW#fn:32
[39]: tmp.F599d8dnXW#fn:33
[40]: tmp.F599d8dnXW#fn:34
[41]: tmp.F599d8dnXW#fn:35
[42]: tmp.F599d8dnXW#fn:36
[43]: https://twitter.com/mgsiegler/status/311992206716203008
[44]: tmp.F599d8dnXW#fn:37
[45]: https://twitter.com/TwoBitHistory
[46]: https://twobithistory.org/feed.xml
[47]: https://twitter.com/TwoBitHistory/status/1036295112375115778?ref_src=twsrc%5Etfw
[48]: tmp.F599d8dnXW#fnref:3
[49]: tmp.F599d8dnXW#fnref:4
[50]: tmp.F599d8dnXW#fnref:5
[51]: tmp.F599d8dnXW#fnref:6
[52]: tmp.F599d8dnXW#fnref:7
[53]: tmp.F599d8dnXW#fnref:8
[54]: tmp.F599d8dnXW#fnref:9
[55]: tmp.F599d8dnXW#fnref:10
[56]: tmp.F599d8dnXW#fnref:11
[57]: tmp.F599d8dnXW#fnref:12
[58]: tmp.F599d8dnXW#fnref:13
[59]: tmp.F599d8dnXW#fnref:14
[60]: tmp.F599d8dnXW#fnref:15
[61]: tmp.F599d8dnXW#fnref:16
[62]: tmp.F599d8dnXW#fnref:17
[63]: tmp.F599d8dnXW#fnref:18
[64]: tmp.F599d8dnXW#fnref:19
[65]: tmp.F599d8dnXW#fnref:20
[66]: tmp.F599d8dnXW#fnref:21
[67]: tmp.F599d8dnXW#fnref:22
[68]: tmp.F599d8dnXW#fnref:23
[69]: tmp.F599d8dnXW#fnref:24
[70]: tmp.F599d8dnXW#fnref:25
[71]: tmp.F599d8dnXW#fnref:26
[72]: tmp.F599d8dnXW#fnref:27
[73]: tmp.F599d8dnXW#fnref:28
[74]: tmp.F599d8dnXW#fnref:29
[75]: tmp.F599d8dnXW#fnref:30
[76]: tmp.F599d8dnXW#fnref:31
[77]: tmp.F599d8dnXW#fnref:32
[78]: tmp.F599d8dnXW#fnref:33
[79]: tmp.F599d8dnXW#fnref:34
[80]: tmp.F599d8dnXW#fnref:35
[81]: tmp.F599d8dnXW#fnref:36
[82]: tmp.F599d8dnXW#fnref:37


@@ -0,0 +1,133 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Research log: gene signatures and connectivity map)
[#]: via: (https://www.jtolio.com/2015/11/research-log-gene-signatures-and-connectivity-map)
[#]: author: (jtolio.com https://www.jtolio.com/)
Research log: gene signatures and connectivity map
======
Happy Thanksgiving everyone!
### Context
This is the third post in my continuing series on my attempts at research. Previously we talked about:
* [what I'm doing, cell states, and microarrays][1]
* and then [more about microarrays and R][2].
By the end of last week we had discussed how to get a table of normalized gene expression intensities that looks like this:
```
ENSG00000280099_at 0.15484421
ENSG00000280109_at 0.16881395
ENSG00000280178_at -0.19621641
ENSG00000280316_at 0.08622216
ENSG00000280401_at 0.15966256
ENSG00000281205_at -0.02085352
...
```
The reason for doing this is to figure out which genes are related, and perhaps more importantly, what a cell is even doing.
_Summary:_ new post, also, I'm bringing back the short section summaries.
### Cell lines
The first thing to do when trying to figure out what cells are doing is to choose a cell. There's all sorts of cells. Healthy brain cells, cancerous blood cells, bruised skin cells, etc.
For any experiment, you'll need a control to eliminate noise and apply statistical tests for validity. If you don't use a control, the effect you're seeing may not even exist, and so for any experiment with cells, you will need a control cell.
Cells often divide, which means that a cell, once chosen, will duplicate itself for you in the presence of the appropriate resources. Not all cells divide ad nauseam which provides some challenges, but many cells under study luckily do.
So, a _cell line_ is simply a set of cells that have all replicated from a specific chosen initial cell. Any set of cells from a cell line will be as identical as possible (unless you screwed up! geez). They will be the same type of cell with the same traits and behaviors, at least, as much as possible.
_Summary:_ a cell line is a large amount of cells that are as close to being the same as possible.
### Perturbagens
There are many things that might affect what a cell is doing. Drugs, agitation, temperature, disease, cancer, gene splicing, small molecules (maybe you give a cell more iron or calcium or something), hormones, light, Jello, ennui, etc. Given any particular cell line, giving a cell from that cell line one of these _perturbagens_, or, perturbing the cell in a specific way, when compared to a control will say what that cell does differently in the face of that perturbagen.
If you'd like to find out what exactly a certain type of cell does when you give it lemon lime soda, then you choose the right cell line, leave out some control cells and give the rest of the cells soda.
Then, you measure gene expression intensities for both the control cells and the perturbed cells. The _differential expression_ of genes between the perturbed cells and the control cells is likely due to the introduction of the lemon lime soda.
Genes that end up getting expressed _more_ in the presence of the soda are considered _up-regulated_, whereas genes that end up getting expressed _less_ are considered _down-regulated_. The degree to which a gene is up or down regulated constitutes how much of an effect the soda may have had on that gene.
Of course, all of this has such a significant amount of experimental noise that you could find pretty much anything. You'll need to replicate your experiment independently a few times before you publish that lemon lime soda causes increased expression in the [Sonic hedgehog gene][3].
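To make the up/down-regulation bookkeeping concrete, here is a small sketch of my own (not from the post; the gene names, values, and cutoff are all made up) that classifies genes by comparing control and perturbed intensities:

```go
// A sketch: classify genes as up- or down-regulated from control vs.
// perturbed expression intensities. All names and numbers are made up.
package main

import "fmt"

func main() {
	control := map[string]float64{"SHH": 0.10, "GENE_A": 0.55, "GENE_B": -0.20}
	perturbed := map[string]float64{"SHH": 0.90, "GENE_A": 0.12, "GENE_B": -0.18}

	const cutoff = 0.25 // arbitrary effect-size threshold for this sketch
	for gene, c := range control {
		diff := perturbed[gene] - c // differential expression vs. control
		switch {
		case diff >= cutoff:
			fmt.Printf("%-7s up-regulated   (%+.2f)\n", gene, diff)
		case diff <= -cutoff:
			fmt.Printf("%-7s down-regulated (%+.2f)\n", gene, diff)
		default:
			fmt.Printf("%-7s unchanged      (%+.2f)\n", gene, diff)
		}
	}
}
```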
_Summary:_ A perturbagen is something you introduce/do to a cell to change its behavior, such as drugs or throwing it at a wall or something. The wall perturbagen.
### Gene signature
For a given change or perturbagen to a cell, we now have enough to compute lists of up-regulated and down-regulated genes and the magnitude change in expression for each gene.
This gene expression pattern for some subset of important genes (perhaps the most changed in expression) is called a _gene signature_, and gene signatures are very useful. By comparing signatures, you can:
* identify or compare cell states
* find sets of positively or negatively correlated genes
* find similar disease signatures
* find similar drug signatures
* find drug signatures that might counteract opposite disease signatures.
(That last bullet point is essentially where I'm headed with my research.)
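To give a flavor of what "comparing signatures" can mean computationally, here is a sketch (cosine similarity is my stand-in; the real Connectivity Map tooling defines its own connectivity score) where a result near +1 suggests similar signatures and a result near -1 suggests a candidate counteracting signature:

```go
// A sketch of signature comparison; not the cmap algorithm.
package main

import (
	"fmt"
	"math"
)

// cosine returns the similarity of two equal-length signature vectors:
// near +1 for similar signatures, near -1 for opposing ones.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	disease := []float64{0.8, -0.3, 0.5} // hypothetical disease signature
	drug := []float64{-0.7, 0.4, -0.6}   // hypothetical drug signature
	fmt.Printf("similarity: %+.2f\n", cosine(disease, drug))
}
```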
_Summary:_ a gene signature is a short summary of the most important gene expression differences a perturbagen causes in a cell.
### Drugs!
The pharmaceutical industry is constantly on the lookout for new breakthrough drugs that might represent huge windfalls in cash, and drugs don't always work as planned. Many drugs spend years in research and development, only to ultimately find poor efficacy or adoption. Sometimes drugs even become known [much more for their side-effects than their originally intended therapy][4].
The practical upshot is that there are countless FDA-approved drugs that represent decades of work that are simply underused or even unused entirely. These drugs have already cleared many challenging regulatory hurdles, but are simply and quite literally cures looking for a disease.
If even just one of these drugs can be given a new lease on life for some yet-to-be-cured disease, then perhaps we can give some people new leases on life!
_Summary:_ instead of developing new drugs, there are already lots of drugs that aren't being used. Maybe we can find matching diseases!
### The Connectivity Map project
The [Broad Institute's Connectivity Map project][5] isn't particularly new anymore, but it represents a groundbreaking and promising idea - we can dump a bunch of signatures into a database and construct all sorts of new hypotheses we might not even have thought to check before.
To prove out the usefulness of this idea, the Connectivity Map (or cmap) project chose 5 different cell lines (all cancer cells, which are easy to get to replicate!) and a library of FDA approved drugs, and then gave some cells these drugs.
They then constructed a database of all of the signatures they computed for each possible perturbagen they measured. Finally, they constructed a web interface where a user can upload a gene signature and get a result list back of all of the signatures they collected, ordered by the most to least similar. You can totally go sign up and [try it out][5].
This simple tool is surprisingly powerful. It allows you to find similar drugs to a drug you know, but it also allows you to find drugs that might counteract a disease you've created a signature for.
Ultimately, the project led to [a number of successful applications][6]. So useful was it that the Broad Institute has doubled down and created the much larger and more comprehensive [LINCS Project][7] that targets an order of magnitude more cell lines (77) and more perturbagens (42,532, compared to cmap's 6,100). You can sign up and use that one too!
_Summary_: building a system that supports querying signature connections has already proved to be super useful.
### Whew
Alright, I wrote most of this on a plane yesterday but since I should now be spending time with family I'm going to cut it short here.
Stay tuned for next week!
--------------------------------------------------------------------------------
via: https://www.jtolio.com/2015/11/research-log-gene-signatures-and-connectivity-map
Author: [jtolio.com][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://www.jtolio.com/writing/2015/11/research-log-cell-states-and-microarrays/
[2]: https://www.jtolio.com/writing/2015/11/research-log-r-and-more-microarrays/
[3]: https://en.wikipedia.org/wiki/Sonic_hedgehog
[4]: https://en.wikipedia.org/wiki/Sildenafil#History
[5]: https://www.broadinstitute.org/cmap/
[6]: https://www.broadinstitute.org/cmap/publications.jsp
[7]: http://www.lincscloud.org/


@@ -0,0 +1,443 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Go channels are bad and you should feel bad)
[#]: via: (https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-should-feel-bad)
[#]: author: (jtolio.com https://www.jtolio.com/)
Go channels are bad and you should feel bad
======
_Update: If you're coming to this blog post from a compendium titled "Go is not good," I want to make it clear that I am ashamed to be on such a list. Go is absolutely the least worst programming language I've ever used. At the time I wrote this, I wanted to curb a trend I was seeing, namely, overuse of one of the more warty parts of Go. I still think channels could be much better, but overall, Go is wonderful. It's like if your favorite toolbox had [this][1] in it; the tool can have uses (even if it could have had more uses), and it can still be your favorite toolbox!_
_Update 2: I would be remiss if I didn't point out this excellent survey of real issues: [Understanding Real-World Concurrency Bugs In Go][2]. A significant finding of this survey is that… Go channels cause lots of bugs._
I've been using Google's [Go programming language][3] on and off since mid-to-late 2010, and I've had legitimate product code written in Go for [Space Monkey][4] since January 2012 (before Go 1.0!). My initial experience with Go was back when I was researching Hoare's [Communicating Sequential Processes][5] model of concurrency and the [π-calculus][6] under [Matt Might][7]'s [UCombinator research group][8] as part of my ([now redirected][9]) PhD work to better enable multicore development. Go was announced right then (how serendipitous!) and I immediately started kicking tires.
It quickly became a core part of Space Monkey development. Our production systems at Space Monkey currently account for over 425k lines of pure Go (_not_ counting all of our vendored libraries, which would make it just shy of 1.5 million lines), so not the most Go you'll ever see, but for the relatively young language we're heavy users. We've [written about our Go usage][10] before. We've open-sourced some fairly heavily used libraries; many people seem to be fans of our [OpenSSL bindings][11] (which are faster than [crypto/tls][12], but please keep openssl itself up-to-date!), our [error handling library][13], [logging library][14], and [metric collection library/zipkin client][15]. We use Go, we love Go, we think it's the least bad programming language for our needs we've used so far.
Although I don't think I can talk myself out of mentioning my widely avoided [goroutine-local-storage library][16] here either (which even though it's a hack that you shouldn't use, it's a beautiful hack), hopefully my other experience will suffice as valid credentials that I kind of know what I'm talking about before I explain my deliberately inflammatory post title.
![][17]
### Wait, what?
If you ask the proverbial programmer on the street what's so special about Go, she'll most likely tell you that Go is most known for channels and goroutines. Go's theoretical underpinnings are heavily based in Hoare's CSP model, which is itself incredibly fascinating and interesting and I firmly believe has much more to yield than we've appropriated so far.
CSP (and the π-calculus) both use communication as the core synchronization primitive, so it makes sense Go would have channels. Rob Pike has been fascinated with CSP (with good reason) for a [considerable][18] [while][19] [now][20].
But from a pragmatic perspective (which Go prides itself on), Go got channels wrong. Channels as implemented are pretty much a solid anti-pattern in my book at this point. Why? Dear reader, let me count the ways.
#### You probably won't end up using just channels.
Hoare's Communicating Sequential Processes is a computational model where essentially the only synchronization primitive is sending or receiving on a channel. As soon as you use a mutex, semaphore, or condition variable, bam, you're no longer in pure CSP land. Go programmers often tout this model and philosophy through the chanting of the [cached thought][21] "[share memory by communicating][22]."
So let's try and write a small program using just CSP in Go! Let's make a high score receiver. All we will do is keep track of the largest high score value we've seen. That's it.
First, we'll make a `Game` struct.
```
type Game struct {
bestScore int
scores chan int
}
```
`bestScore` isn't going to be protected by a mutex! That's fine, because we'll simply have one goroutine manage its state and receive new scores over a channel.
```
func (g *Game) run() {
for score := range g.scores {
if g.bestScore < score {
g.bestScore = score
}
}
}
```
Okay, now we'll make a helpful constructor to start a game.
```
func NewGame() (g *Game) {
g = &Game{
bestScore: 0,
scores: make(chan int),
}
go g.run()
return g
}
```
Next, let's assume someone has given us a `Player` that can return scores. It might also return an error, cause hey maybe the incoming TCP stream can die or something, or the player quits.
```
type Player interface {
NextScore() (score int, err error)
}
```
To handle the player, we'll assume all errors are fatal and pass received scores down the channel.
```
func (g *Game) HandlePlayer(p Player) error {
for {
score, err := p.NextScore()
if err != nil {
return err
}
g.scores <- score
}
}
```
Yay! Okay, we have a `Game` type that can keep track of the highest score a `Player` receives in a thread-safe way.
You wrap up your development and you're on your way to having customers. You make this game server public and you're incredibly successful! Lots of games are being created with your game server.
Soon, you discover people sometimes leave your game. Lots of games no longer have any players playing, but nothing stopped the game loop. You are getting overwhelmed by dead `(*Game).run` goroutines.
**Challenge:** fix the goroutine leak above without mutexes or panics. For real, scroll up to the above code and come up with a plan for fixing this problem using just channels.
I'll wait.
For what it's worth, it totally can be done with channels only, but observe the simplicity of the following mutex-based solution, which doesn't even have this problem (a channel-only fix is sketched right after it):
```
type Game struct {
mtx sync.Mutex
bestScore int
}
func NewGame() *Game {
return &Game{}
}
func (g *Game) HandlePlayer(p Player) error {
for {
score, err := p.NextScore()
if err != nil {
return err
}
g.mtx.Lock()
if g.bestScore < score {
g.bestScore = score
}
g.mtx.Unlock()
}
}
```
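And for completeness, here is one channel-only shape the fix can take, as promised above (my sketch, not the article's code; it reuses the `Player` interface from earlier). A `done` channel tears down `run`, and sends race against `done` so `HandlePlayer` can never block forever:

```go
type Game struct {
	scores chan int
	done   chan struct{}
}

func NewGame() *Game {
	g := &Game{scores: make(chan int), done: make(chan struct{})}
	go g.run()
	return g
}

func (g *Game) run() {
	bestScore := 0 // run owns the state; no mutex needed
	for {
		select {
		case score := <-g.scores:
			if bestScore < score {
				bestScore = score
			}
		case <-g.done:
			return // teardown: the goroutine no longer leaks
		}
	}
}

// Close tears the game down; every caller has to remember to call it.
func (g *Game) Close() { close(g.done) }

func (g *Game) HandlePlayer(p Player) error {
	for {
		score, err := p.NextScore()
		if err != nil {
			return err
		}
		select {
		case g.scores <- score: // delivered
		case <-g.done: // game closed; don't block forever
			return nil
		}
	}
}
```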
Which one would you rather work on? Don't be deceived into thinking that the channel solution somehow makes this more readable and understandable in more complex cases. Teardown is very hard. This sort of teardown is just a piece of cake with a mutex, but the hardest thing to work out with Go-specific channels only. Also, if anyone replies that channels sending channels is easier to reason about here it will cause me an immediate head-to-desk motion.
Importantly, this particular case might actually be _easily_ solved _with channels_ with some runtime assistance Go doesn't provide! Unfortunately, as it stands, there are simply a surprising number of problems that are solved better with traditional synchronization primitives than with Go's version of CSP. We'll talk about what Go could have done to make this case easier later.
**Exercise:** Still skeptical? Try making both solutions above (channel-only vs. mutex-only) stop asking for scores from `Players` once `bestScore` is 100 or greater. Go ahead and open your text editor. This is a small, toy problem.
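If you want to check your work on the mutex half afterward, here is one possible shape (a sketch; the channel-only half is the harder one and is left to you, as the exercise intends):

```go
// One mutex-side answer: HandlePlayer stops requesting scores once
// bestScore reaches 100.
func (g *Game) HandlePlayer(p Player) error {
	for {
		g.mtx.Lock()
		done := g.bestScore >= 100
		g.mtx.Unlock()
		if done {
			return nil // stop asking this Player for scores
		}
		score, err := p.NextScore()
		if err != nil {
			return err
		}
		g.mtx.Lock()
		if g.bestScore < score {
			g.bestScore = score
		}
		g.mtx.Unlock()
	}
}
```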
The summary here is that you will be using traditional synchronization primitives in addition to channels if you want to do anything real.
#### Channels are slower than implementing it yourself
One of the things I assumed about Go being so heavily based in CSP theory is that there should be some pretty killer scheduler optimizations the runtime can make with channels. Perhaps channels aren't always the most straightforward primitive, but surely they're efficient and fast, right?
![][23]
As [Dustin Hiatt][24] points out on [Tyler Treat's post about Go][25],
> Behind the scenes, channels are using locks to serialize access and provide threadsafety. So by using channels to synchronize access to memory, you are, in fact, using locks; locks wrapped in a threadsafe queue. So how do Go's fancy locks compare to just using mutexes from their standard library `sync` package? The following numbers were obtained by using Go's builtin benchmarking functionality to serially call Put on a single set of their respective types.
```
> BenchmarkSimpleSet-8 3000000 391 ns/op
> BenchmarkSimpleChannelSet-8 1000000 1699 ns/op
>
```
It's a similar story with unbuffered channels, or even the same test under contention instead of run serially.
Perhaps the Go scheduler will improve, but in the meantime, good old mutexes and condition variables are very good, efficient, and fast. If you want performance, you use the tried and true methods.
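If you want to reproduce the flavor of those numbers, here is a minimal benchmark sketch of my own (not Hiatt's original code): `Put` on a map guarded by a mutex versus `Put` serialized through a channel-owning goroutine. Save it as a `_test.go` file and run `go test -bench=.`:

```go
// A minimal re-creation of the comparison above, not Hiatt's code.
package sets

import (
	"sync"
	"testing"
)

// mutexSet guards a map with a plain sync.Mutex.
type mutexSet struct {
	mtx sync.Mutex
	m   map[int]struct{}
}

func (s *mutexSet) Put(v int) {
	s.mtx.Lock()
	s.m[v] = struct{}{}
	s.mtx.Unlock()
}

// channelSet funnels every Put through a single owning goroutine.
type channelSet struct{ ch chan int }

func newChannelSet() *channelSet {
	s := &channelSet{ch: make(chan int)}
	go func() {
		m := map[int]struct{}{}
		for v := range s.ch {
			m[v] = struct{}{}
		}
	}()
	return s
}

func (s *channelSet) Put(v int) { s.ch <- v }

func BenchmarkMutexSet(b *testing.B) {
	s := &mutexSet{m: map[int]struct{}{}}
	for i := 0; i < b.N; i++ {
		s.Put(i)
	}
}

func BenchmarkChannelSet(b *testing.B) {
	s := newChannelSet() // owning goroutine leaks after the run; fine for a sketch
	for i := 0; i < b.N; i++ {
		s.Put(i)
	}
}
```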
#### Channels don't compose well with other concurrency primitives
Alright, so hopefully I have convinced you that you'll at least be interacting with primitives besides channels sometimes. The standard library certainly seems to prefer traditional synchronization primitives over channels.
Well guess what, it's actually somewhat challenging to use channels alongside mutexes and condition variables correctly!
One of the interesting things about channels that makes a lot of sense coming from CSP is that channel sends are synchronous. A channel send and channel receive are intended to be synchronization barriers, and the send and receive should happen at the same virtual time. That's wonderful if you're in well-executed CSP-land.
![][26]
Pragmatically, Go channels also come in a buffered variety. You can allocate a fixed amount of space to account for possible buffering so that sends and receives are disparate events, but the buffer size is capped. Go doesn't provide a way to have arbitrarily sized buffers - you have to allocate the buffer size in advance. _This is fine_, I've seen people argue on the mailing list, _because memory is bounded anyway._
Wat.
This is a bad answer. Theres all sorts of reasons to use an arbitrarily buffered channel. If we knew everything up front, why even have `malloc`?
Not having arbitrarily buffered channels means that a naive send on _any_ channel could block at any time. You want to send on a channel and update some other bookkeeping under a mutex? Careful! Your channel send might block!
```
// ...
s.mtx.Lock()
// ...
s.ch <- val // might block!
s.mtx.Unlock()
// ...
```
This is a recipe for dining philosopher dinner fights. If you take a lock, you should quickly update state and release it and not do anything blocking under the lock if possible.
There is a way to do a non-blocking send on a channel in Go, but its not the default behavior. Assume we have a channel `ch := make(chan int)` and we want to send the value `1` on it without blocking. Here is the minimum amount of typing you have to do to send without blocking:
```
select {
case ch <- 1: // it sent
default: // it didn't
}
```
This isnt what naturally leaps to mind for beginning Go programmers.
The summary is that because many operations on channels block, it takes careful reasoning about philosophers and their dining to successfully use channel operations alongside and under mutex protection, without causing deadlocks.
#### Callbacks are strictly more powerful and don't require unnecessary goroutines
![][27]
Whenever an API uses a channel, or whenever I point out that a channel makes something hard, someone invariably points out that I should just spin up a goroutine to read off the channel and make whatever translation or fix I need as it reads off the channel.
Um, no. What if my code is in a hotpath? There are very few instances that require a channel, and if your API could have been designed with mutexes, semaphores, and callbacks and no additional goroutines (because all event edges are triggered by API events), then using a channel forces me to add another stack allocation to my resource usage. Goroutines are much lighter weight than threads, yes, but lighter weight doesn't mean the lightest weight possible.
As Ive formerly [argued in the comments on an article about using channels][28] (lol the internet), your API can _always_ be more general, _always_ more flexible, and take drastically less resources if you use callbacks instead of channels. “Always” is a scary word, but I mean it here. Theres proof-level stuff going on.
If someone provides a callback-based API to you and you need a channel, you can provide a callback that sends on a channel with little overhead and full flexibility.
If, on the other hand, someone provides a channel-based API to you and you need a callback, you have to spin up a goroutine to read off the channel _and_ you have to hope that no one tries to send more on the channel when you're done reading, or you'll cause blocked goroutine leaks.
For a super simple real-world example, check out the [context interface][29] (which incidentally is an incredibly useful package and what you should be using instead of [goroutine-local storage][16]):
```
type Context interface {
    ...
    // Done returns a channel that closes when this work unit should be canceled.
    Done() <-chan struct{}

    // Err returns a non-nil error when the Done channel is closed.
    Err() error
    ...
}
```
Imagine all you want to do is log the corresponding error when the `Done()` channel fires. What do you have to do? If you dont have a good place youre already selecting on a channel, you have to spin up a goroutine to deal with it:
```
go func() {
    <-ctx.Done()
    logger.Errorf("canceled: %v", ctx.Err())
}()
```
What if `ctx` gets garbage collected without closing the channel `Done()` returned? Whoops! Just leaked a goroutine!
Now imagine we changed `Done`s signature:
```
// Done calls cb when this work unit should be canceled.
Done(cb func())
```
First off, logging is so easy now. Check it out: `ctx.Done(func() { log.Errorf("canceled: %v", ctx.Err()) })`. But lets say you really do need some select behavior. You can just call it like this:
```
ch := make(chan struct{})
ctx.Done(func() { close(ch) })
```
Voila! No expressiveness lost by using a callback instead. `ch` works like the channel `Done()` used to return, and in the logging case we didnt need to spin up a whole new stack. I got to keep my stack traces (if our log package is inclined to use them); I got to avoid another stack allocation and another goroutine to give to the scheduler.
Next time you use a channel, ask yourself if there are goroutines you could eliminate if you used mutexes and condition variables instead. If the answer is yes, your code will be more efficient if you change it. And if you're trying to use channels just to be able to use the `range` keyword over a collection, I'm going to have to ask you to put your keyboard away or just go back to writing Python books.
![more like Zooey De-channel, amirite][30]
#### The channel API is inconsistent and just cray-cray
Closing or sending on a closed channel panics! Why? If you want to close a channel, you need to either synchronize its closed state externally (with mutexes and so forth that dont compose well!) so that other writers dont write to or close a closed channel, or just charge forward and close or write to closed channels and expect youll have to recover any raised panics.
This is such bizarre behavior. Almost every other operation in Go has a way to avoid a panic (type assertions have the `, ok =` pattern, for example), but with channels you just get to deal with it.
Okay, so when a send will fail, channels panic. I guess that makes some kind of sense. But unlike almost everything else with nil values, sending to a nil channel wont panic. Instead, it will block forever! Thats pretty counter-intuitive. That might be useful behavior, just like having a can-opener attached to your weed-whacker might be useful (and found in Skymall), but its certainly unexpected. Unlike interacting with nil maps (which do implicit pointer dereferences), nil interfaces (implicit pointer dereferences), unchecked type assertions, and all sorts of other things, nil channels exhibit actual channel behavior, as if a brand new channel was just instantiated for this operation.
Receives are slightly nicer. What happens when you receive on a closed channel? Well, that works - you get a zero value. Okay that makes sense I guess. Bonus! Receives allow you to do a `, ok =`-style check if the channel was open when you received your value. Thank heavens we get `, ok =` here.
But what happens if you receive from a nil channel? _Also blocks forever!_ Yay! Dont try and use the fact that your channel is nil to keep track of if you closed it!
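To make these rules concrete, here is a minimal runnable sketch (the variable names are mine) demonstrating each quirk in one place:

```
package main

import "fmt"

func main() {
    ch := make(chan int, 1)
    ch <- 1
    close(ch)

    v, ok := <-ch
    fmt.Println(v, ok) // prints "1 true": buffered values still drain after close

    v, ok = <-ch
    fmt.Println(v, ok) // prints "0 false": closed and drained, so zero value

    // sending on (or re-closing) a closed channel panics; the only "check"
    // is recovering the panic after the fact
    func() {
        defer func() { fmt.Println("recovered:", recover()) }()
        ch <- 2 // panics: send on closed channel
    }()

    // a nil channel is different: sends and receives block forever instead
    // of panicking (left commented out so this program terminates)
    // var nilch chan int
    // <-nilch
}
```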
### What are channels good for?
Of course channels are good for some things (they are a generic container after all), and there are certain things you can only do with them (`select`).
#### They are another special-cased generic datastructure
Go programmers are so used to arguments about generics that I can feel the PTSD coming on just by bringing up the word. Im not here to talk about it so wipe the sweat off your brow and lets keep moving.
Whatever your opinion of generics is, Gos maps, slices, and channels are data structures that support generic element types, because theyve been special-cased into the language.
In a language that doesnt allow you to write your own generic containers, _anything_ that allows you to better manage collections of things is valuable. Here, channels are a thread-safe datastructure that supports arbitrary value types.
So thats useful! That can save some boilerplate I suppose.
Im having trouble counting this as a win for channels.
#### Select
The main thing you can do with channels is the `select` statement. Here you can wait on a fixed number of inputs for events. Its kind of like epoll, but you have to know upfront how many sockets youre going to be waiting on.
This is truly a useful language feature. Channels would be a complete wash if not for `select`. But holy smokes, let me tell you about the first time you decide you might need to select on multiple things but you dont know how many and you have to use `reflect.Select`.
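For illustration, here is a small runnable sketch (all the names are mine) of waiting on a fixed set of channels, epoll-style:

```
package main

import (
    "fmt"
    "time"
)

func main() {
    a := make(chan string)
    b := make(chan string)
    go func() { time.Sleep(10 * time.Millisecond); a <- "event on a" }()
    go func() { time.Sleep(20 * time.Millisecond); b <- "event on b" }()

    // select waits on a fixed, known-up-front set of channel operations
    // and runs whichever case becomes ready first
    for i := 0; i < 2; i++ {
        select {
        case msg := <-a:
            fmt.Println(msg)
        case msg := <-b:
            fmt.Println(msg)
        case <-time.After(time.Second):
            fmt.Println("timed out")
            return
        }
    }
}
```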
### How could channels be better?
Its really tough to say what the most tactical thing the Go language team could do for Go 2.0 is (the Go 1.0 compatibility guarantee is good but hand-tying), but that wont stop me from making some suggestions.
#### Select on condition variables!
We could just obviate the need for channels! This is where I propose we get rid of some sacred cows, but let me ask you this: how great would it be if you could select on any custom synchronization primitive? (A: So great.) If we had that, we wouldn't need channels at all.
#### GC could help us?
In the very first example, we could easily solve the high score server cleanup with channels if we were able to use directionally-typed channel garbage collection to help us clean up.
![][31]
As you know, Go has directionally-typed channels. You can have a channel type that only supports reading (`<-chan`) and a channel type that only supports writing (`chan<-`). Great!
Go also has garbage collection. It's clear that certain kinds of bookkeeping are just too onerous and we shouldn't make the programmer deal with them. We clean up unused memory! Garbage collection is useful and neat.
So why not help clean up unused or deadlocked channel reads? Instead of having `make(chan Whatever)` return one bidirectional channel, have it return two single-direction channels (`chanReader, chanWriter := make(chan Type)`).
Lets reconsider the original example:
```
type Game struct {
    bestScore int
    scores    chan<- int
}

func run(bestScore *int, scores <-chan int) {
    // we don't keep a reference to a *Game directly because then we'd be holding
    // onto the send side of the channel.
    for score := range scores {
        if *bestScore < score {
            *bestScore = score
        }
    }
}

func NewGame() (g *Game) {
    // this make(chan) return style is a proposal!
    scoreReader, scoreWriter := make(chan int)
    g = &Game{
        bestScore: 0,
        scores:    scoreWriter,
    }
    go run(&g.bestScore, scoreReader)
    return g
}

func (g *Game) HandlePlayer(p Player) error {
    for {
        score, err := p.NextScore()
        if err != nil {
            return err
        }
        g.scores <- score
    }
}
```
If garbage collection closed a channel when we could prove no more values are ever coming down it, this solution is completely fixed. Yes yes, the comment in `run` is indicative of the existence of a rather large gun aimed at your foot, but at least the problem is easily solvable now, whereas it really wasn't before. Furthermore, a smart compiler could probably make appropriate proofs to reduce the damage from said foot-gun.
#### Other smaller issues
* **Dup channels?** \- If we could use an equivalent of the `dup` syscall on channels, then we could also solve the multiple producer problem quite easily. Each producer could close their own `dup`-ed channel without ruining the other producers. (Today the usual workaround is a `sync.WaitGroup`; see the sketch after this list.)
* **Fix the channel API!** \- Close isnt idempotent? Send on closed channel panics with no way to avoid it? Ugh!
* **Arbitrarily buffered channels** \- If we could make buffered channels with no fixed buffer size limit, then we could make channels that dont block.
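Absent a channel `dup`, the usual workaround today is to funnel all producers through a `sync.WaitGroup` and have exactly one goroutine perform the single close; a minimal sketch (the names are mine):

```
package main

import (
    "fmt"
    "sync"
)

func main() {
    ch := make(chan int)
    var wg sync.WaitGroup

    // several producers send on the same channel; none of them can safely
    // close it on their own
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            ch <- id
        }(i)
    }

    // one goroutine waits for all producers and performs the single close
    go func() {
        wg.Wait()
        close(ch)
    }()

    for v := range ch {
        fmt.Println("got", v)
    }
}
```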
### What do we tell people about Go then?
If you haven't yet, please go take a look at my current favorite programming post: [What Color is Your Function][32]. Without being specifically about Go, this blog post lays out, much more eloquently than I could, exactly why goroutines are Go's best feature (and incidentally one of the ways Go is better than Rust for some applications).
If youre still writing code in a programming language that forces keywords like `yield` on you to get high performance, concurrency, or an event-driven model, you are living in the past, whether or not you or anyone else knows it. Go is so far one of the best entrants Ive seen of languages that implement an M:N threading model thats not 1:1, and dang thats powerful.
So, tell folks about goroutines.
If I had to pick one other leading feature of Go, its interfaces. Statically-typed [duck typing][33] makes extending and working with your own or someone elses project so fun and amazing its probably worth me writing an entirely different set of words about it some other time.
### So…
I keep seeing people charge into Go, eager to use channels to their full potential. Here's my advice to you.
**JUST STAHP IT**
When you're writing APIs and interfaces, as bad as the advice "never" can be, I'm pretty sure there's never a time when channels are better, and every Go API I've used that uses channels is one I've ended up having to fight. I've never thought "oh good, there's a channel here;" it's always instead been some variant of _**WHAT FRESH HELL IS THIS?**_
So, _please, please use channels where appropriate and only where appropriate._
In all of my Go code I work with, I can count on one hand the number of times channels were really the best choice. Sometimes they are. Thats great! Use them then. But otherwise just stop.
![][34]
_Special thanks for the valuable feedback provided by my proof readers Jeff Wendling, [Andrew Harding][35], [George Shank][36], and [Tyler Treat][37]._
If you want to work on Go with us at Space Monkey, please [hit me up][38]!
--------------------------------------------------------------------------------
via: https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-should-feel-bad
作者:[jtolio.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://blog.codinghorror.com/content/images/uploads/2012/06/6a0120a85dcdae970b017742d249d5970d-800wi.jpg
[2]: https://songlh.github.io/paper/go-study.pdf
[3]: https://golang.org/
[4]: http://www.spacemonkey.com/
[5]: https://en.wikipedia.org/wiki/Communicating_sequential_processes
[6]: https://en.wikipedia.org/wiki/%CE%A0-calculus
[7]: http://matt.might.net
[8]: http://www.ucombinator.org/
[9]: https://www.jtolio.com/writing/2015/11/research-log-cell-states-and-microarrays/
[10]: https://www.jtolio.com/writing/2014/04/go-space-monkey/
[11]: https://godoc.org/github.com/spacemonkeygo/openssl
[12]: https://golang.org/pkg/crypto/tls/
[13]: https://godoc.org/github.com/spacemonkeygo/errors
[14]: https://godoc.org/github.com/spacemonkeygo/spacelog
[15]: https://godoc.org/gopkg.in/spacemonkeygo/monitor.v1
[16]: https://github.com/jtolds/gls
[17]: https://www.jtolio.com/images/wat/darth-helmet.jpg
[18]: https://en.wikipedia.org/wiki/Newsqueak
[19]: https://en.wikipedia.org/wiki/Alef_%28programming_language%29
[20]: https://en.wikipedia.org/wiki/Limbo_%28programming_language%29
[21]: https://lesswrong.com/lw/k5/cached_thoughts/
[22]: https://blog.golang.org/share-memory-by-communicating
[23]: https://www.jtolio.com/images/wat/jon-stewart.jpg
[24]: https://twitter.com/HiattDustin
[25]: http://bravenewgeek.com/go-is-unapologetically-flawed-heres-why-we-use-it/
[26]: https://www.jtolio.com/images/wat/obama.jpg
[27]: https://www.jtolio.com/images/wat/yael-grobglas.jpg
[28]: http://www.informit.com/articles/article.aspx?p=2359758#comment-2061767464
[29]: https://godoc.org/golang.org/x/net/context
[30]: https://www.jtolio.com/images/wat/zooey-deschanel.jpg
[31]: https://www.jtolio.com/images/wat/joel-mchale.jpg
[32]: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
[33]: https://en.wikipedia.org/wiki/Duck_typing
[34]: https://www.jtolio.com/images/wat/michael-cera.jpg
[35]: https://github.com/azdagron
[36]: https://twitter.com/taterbase
[37]: http://bravenewgeek.com
[38]: https://www.jtolio.com/contact/


@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Magic GOPATH)
[#]: via: (https://www.jtolio.com/2017/01/magic-gopath)
[#]: author: (jtolio.com https://www.jtolio.com/)
Magic GOPATH
======
_**Update:** With the advent of Go 1.11 and [Go modules][1], this whole post is now useless. Unset your GOPATH entirely and switch to Go modules today!_
Maybe someday Ill start writing about things besides Go again.
Go requires that you set an environment variable for your workspace called your `GOPATH`. The `GOPATH` is one of the most confusing aspects of Go to newcomers and even relatively seasoned developers alike. Its not immediately clear what would be better, but finding a good `GOPATH` value has implications for your source code repository layout, how many separate projects you have on your computer, how default project installation instructions work (via `go get`), and even how you interoperate with other projects and libraries.
Its taken until Go 1.8 to decide to [set a default][2] and that small change was one of [the most talked about code reviews][3] for the 1.8 release cycle.
After [writing about GOPATH himself][4], [Dave Cheney][5] [asked me][6] to write a blog post about what I do.
### My proposal
I set my `GOPATH` to always be the current working directory, unless a parent directory is clearly the `GOPATH`.
Heres the relevant part of my `.bashrc`:
```
# bash command to output calculated GOPATH.
calc_gopath() {
  local dir="$PWD"

  # we're going to walk up from the current directory to the root
  while true; do
    # if there's a '.gopath' file, use its contents as the GOPATH relative to
    # the directory containing it.
    if [ -f "$dir/.gopath" ]; then
      ( cd "$dir";
        # allow us to squash this behavior for cases we want to use vgo
        if [ "$(cat .gopath)" != "" ]; then
          cd "$(cat .gopath)";
          echo "$PWD";
        fi; )
      return
    fi

    # if there's a 'src' directory, the parent of that directory is now the
    # GOPATH
    if [ -d "$dir/src" ]; then
      echo "$dir"
      return
    fi

    # we can't go further, so bail. we'll make the original PWD the GOPATH.
    if [ "$dir" == "/" ]; then
      echo "$PWD"
      return
    fi

    # now we'll consider the parent directory
    dir="$(dirname "$dir")"
  done
}

my_prompt_command() {
  export GOPATH="$(calc_gopath)"

  # you can have other neat things in here. I also set my PS1 based on git
  # state
}

case "$TERM" in
  xterm*|rxvt*)
    # Bash provides an environment variable called PROMPT_COMMAND. The contents
    # of this variable are executed as a regular Bash command just before Bash
    # displays a prompt. Let's only set it if we're in some kind of graphical
    # terminal I guess.
    PROMPT_COMMAND=my_prompt_command
    ;;
  *)
    ;;
esac
```
The benefits are fantastic. If you want to quickly `go get` something and not have it clutter up your workspace, you can do something like:
```
cd $(mktemp -d) && go get github.com/the/thing
```
On the other hand, if youre jumping between multiple projects (whether or not they have the full workspace checked in or are just library packages), the `GOPATH` is set accurately.
More flexibly, if you have a tree where some parent directory is outside of the `GOPATH` but you want to set the `GOPATH` anyways, you can create a `.gopath` file and it will automatically set your `GOPATH` correctly any time your shell is inside that directory.
The whole thing is super nice. I kinda cant imagine doing something else anymore.
### Fin.
--------------------------------------------------------------------------------
via: https://www.jtolio.com/2017/01/magic-gopath
作者:[jtolio.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more
[2]: https://rakyll.org/default-gopath/
[3]: https://go-review.googlesource.com/32019/
[4]: https://dave.cheney.net/2016/12/20/thinking-about-gopath
[5]: https://dave.cheney.net/
[6]: https://twitter.com/davecheney/status/811334240247812097

File diff suppressed because one or more lines are too long


@ -0,0 +1,153 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to not be a white male asshole, by a former offender)
[#]: via: (https://www.jtolio.com/2018/03/how-to-not-be-a-white-male-asshole-by-a-former-offender)
[#]: author: (jtolio.com https://www.jtolio.com/)
How to not be a white male asshole, by a former offender
======
_Huge thanks to Caitlin Jarvis for editing, contributing to, and proofreading this post._
First off, lets start off with some assumptions. You, dear reader, dont intend to cause anyone harm. You have good intentions, see yourself as a good person, and are interested in self improvement. Thats great!
Second, I dont actually know for sure if Im not still a current offender. I might be! Its certainly something Ill never be done working on.
### 1\. You dont know what others are going through
Unfortunately, your good intentions are not enough to make sure the experiences of others are, in fact, good because we live in a world of asymmetric information. If another persons dog just died unbeknownst to you and you start talking excitedly about how great dogs are to try and cheer a sad person up, you may end up causing them to be even sadder. You know things other people dont, and others know things you dont.
So when I say that if you are a white man, there is an invisible world of experiences happening all around you that you are inherently blind to, its because of asymmetric information. You cant know what others are going through because you are not an impartial observer of a system. _You exist within the system._
![][1]
Let me show you what I mean: did you know a recent survey found that _[81 percent of women have experienced sexual harassment of some kind][2]_? Fully 1 out of every 2 women you know has had to deal specifically with _unwanted sexual touching_.
What should have been most amazing about the [#MeToo movement][3] was not how many women reported harassment, but how many men were surprised.
### 2\. You can inadvertently contribute to a racist, sexist, or prejudiced society
I [previously wrote a lot about how small little interactions can add up][4], illustrating that even if you dont intend to subject someone to racism, sexism, or some other prejudice, you might be doing it anyway. Intentions are meaningless when your actions amplify the negative experience of someone else.
An example from [Maisha Johnson in Everyday Feminism][5]:
> Black women deal with people touching our hair a lot. Now you know. Okay, theres more to it than that: Black women deal with people touching our hair a _hell_ of a lot.
>
> If you approach a Black woman saying “I just have to feel your hair,” its pretty safe to assume this isnt the first time shes heard that.
>
> Everyone who asks me if they can touch follows a long line of people othering me including strangers who touch my hair without asking. The psychological impact of having people constantly feel entitled to my personal space has worn me down.
Another example is that men frequently demand proof. Even though it makes sense in general to check your sources for something, the predominant response of men when confronted with claims of sexist treatment is to [ask for evidence][6]. Because this happens so frequently, this action _itself_ contributes to the sexist subjugation of women. The parallel universe women live in is so distinct from the experiences of men that men cant believe their ears, and treat the report of a victim with skepticism.
As you might imagine, this sort of effect is not limited to asking women for evidence or hair touching. Microaggressions are real and everywhere; the accumulation of lots of small things can be enormous.
If youre someone in charge of building things, this can be even more important and an even greater responsibility. If you build an app that is blind to the experiences of people who dont look or act like you, you can significantly amplify negative experiences for others by causing systemic and system-wide issues.
### 3\. The only way to stop contributing is to continually listen to others
If you dont already know what others are going through, and by not knowing what others are going through you may be subjecting them to prejudice even if you dont mean to, what can you do to help others avoid prejudice? You can listen to them! People who are experiencing prejudice _dont want to be experiencing prejudice_ and tend to be vocal about the experience. It is your job to really listen and then turn around and change the way you approach these situations in the future.
### 4\. How do I listen?
To listen to someone, you need to have empathy. You need to actually care about them. You need to process what theyre saying and not treat them with suspicion.
Listening is very different from interjecting and arguing. Listening to others is different from making them do the work to educate you. It is your job to find the experiences of others you havent had and learn from them without demanding a curriculum.
When people say you should just believe marginalized people, [no one is asking you to check your critical thinking at the door][7]. What youre being asked to do is to be aware that your incredulity is a further reminder that you are not experiencing the same thing. Worse - white men acting incredulous is _so unbelievably common_ that it itself is a microaggression. Dont be a sea lion:
![][8]
#### Aside about diversity of experience vs. diversity of thought.
When trying to find others to listen to, who should you find? Recently, a growing number of people have echoed that all thats really required of diversity is different viewpoints, and having diversity of thought is the ultimate goal.
I want to point out that this is not the kind of diversity that will be useful to you. Its easy to have a bunch of different opinions and then reject them when they complicate your life. What you want to be listening to is diversity of _experience_. Some experiences cant be chosen. You can choose to be contrarian, but you cant choose the color of your skin.
### 5\. Where do I listen?
What you need is a way to be a fly on the wall and observe the life experiences of others through their words and perspectives. Being friends and hanging out with people who are different from you is great. Getting out of monocultures is fantastic. Holding your company to diversity and inclusion initiatives is wonderful.
But if you still need more or you live somewhere like Utah?
What if there was a website where people from all walks of life opted in to talking about their day and what theyre feeling and experiencing from their viewpoint in a way you could read? Itd be almost like seeing the world through their eyes.
Yep, this blog post is an unsolicited Twitter ad. Twitter definitely has its share of problems, but after [writing about how I finally figured out Twitter][9], in 2014 I decided to embark on a year-long effort to use Twitter (I wasnt really using it before) to follow mostly women or people of color in my field and just see what the field is like for them on a day to day basis.
Listening to others in this way blew my mind clean open. Suddenly I was aware of this invisible world around me, much of which is still invisible. Now, Im looking for it, and I catch glimpses. I would challenge anyone and everyone to do this. Make sure the content youre consuming is predominantly viewpoints from life experiences you havent had.
If you need a start, here are some links to accounts to fill your Twitter feed up with:
* [200 Women of Color in Tech on Twitter][10]
* [Women Engineers on Twitter][11]
You can also check out [who I follow][12], though I should warn I also follow a lot of political accounts, joke accounts, and my following of someone is not an endorsement.
Its also worth pointing out that no individual can possibly speak for an entire class of people, but if 38 out of 50 women are saying theyre dealing with something, you should listen.
### 6\. Does this work?
Listening to others works, but you dont have to just take my word for it. Here are two specific and recent experience reports of people turning their worldview for the better by listening to others:
* [A professor at the University of New Brunswick][13]
* [A senior design developer at Microsoft][14]
You can see how much of a profound and fast impact this had on me because by early 2015, only a few months into my Twitter experiment, I was worked up enough to write [my unicycle post][4] in response to what I was reading on Twitter.
Having diverse perspectives in a workplace has even been shown to [increase productivity][15] and [increase creativity][16].
### 7\. Dont stop there!
Not everyone is as growth-oriented as you. Just because youre listening now doesnt mean others are hearing the same distribution of experiences.
If this is new to you, its not new to marginalized people. Imagine how tired they must be in trying to convince everyone their experiences are real, valid, and ongoing. Help get the word out! Repeat and retweet what women and minorities say. Give them credit. In meetings at your work, give credit to others for their ideas and amplify their voices.
Did you know that [non-white or female bosses who push diversity are judged negatively by their peers and managers][17] but white male bosses are not? If youre a white male, use your position where others cant.
If you need an example list of things your company can do, [heres a list Susan Fowler wrote after her experience at Uber][18].
Speak up, use your experiences to help others.
### 8\. Am I not prejudiced now?
The asymmetry of experiences we all have means were all inherently prejudiced to some degree and will likely continue to contribute to a prejudiced society. That said, the first step to fixing it is admitting it!
There will always be work to do. You will always need to keep listening, keep learning, and work to improve every day.
--------------------------------------------------------------------------------
via: https://www.jtolio.com/2018/03/how-to-not-be-a-white-male-asshole-by-a-former-offender
作者:[jtolio.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://www.jtolio.com/images/mrmouse.jpg
[2]: https://www.npr.org/sections/thetwo-way/2018/02/21/587671849/a-new-survey-finds-eighty-percent-of-women-have-experienced-sexual-harassment
[3]: https://en.wikipedia.org/wiki/Me_Too_movement
[4]: https://www.jtolio.com/2015/03/what-riding-a-unicycle-can-teach-us-about-microaggressions/
[5]: https://everydayfeminism.com/2015/09/dont-touch-black-womens-hair/
[6]: https://twitter.com/ArielDumas/status/970692180766490630
[7]: https://www.elle.com/culture/career-politics/a13977980/me-too-movement-false-accusations-believe-women/
[8]: https://www.jtolio.com/images/sealion.png
[9]: https://www.jtolio.com/2009/03/i-finally-figured-out-twitter/
[10]: http://peopleofcolorintech.com/articles/a-list-of-200-women-of-color-on-twitter/
[11]: https://github.com/ryanburgess/female-engineers-twitter
[12]: https://twitter.com/jtolds/following
[13]: https://www.theglobeandmail.com/opinion/ill-start-2018-by-recognizing-my-white-privilege/article37472875/
[14]: https://micahgodbolt.com/blog/changing-your-worldview/
[15]: http://edis.ifas.ufl.edu/hr022
[16]: https://faculty.insead.edu/william-maddux/documents/PSPB-learning-paper.pdf
[17]: https://digest.bps.org.uk/2017/07/12/non-white-or-female-bosses-who-push-diversity-are-judged-negatively-by-their-peers-and-managers/
[18]: https://www.susanjfowler.com/blog/2017/5/20/five-things-tech-companies-can-do-better


@ -0,0 +1,215 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Multinomial Logistic Classification)
[#]: via: (https://www.jtolio.com/2018/05/multinomial-logistic-classification)
[#]: author: (jtolio.com https://www.jtolio.com/)
Multinomial Logistic Classification
======
_This article was originally a problem I wrote for a coding competition I hosted, Vivints 2017 Game of Codes (now offline). The goal of this problem was not only to be a fun challenge but also to teach contestants almost everything they needed to know to build a neural network from scratch. I thought it might be neat to revive on my site! If machine learning is still scary sounding and foreign to you, you should feel much more at ease after working through this problem. I left out the details of [back-propagation][1], and a single-layer neural network isnt really a neural network, but in this problem you can learn how to train and run a complete model! Theres lots of maybe scary-looking math but honestly if you can [multiply matrices][2] you should be fine._
In this problem, youre going to build and train a machine learning model… from scratch! Dont be intimidated - it will be much easier than it sounds!
### What is machine learning?
_Machine learning_ is a broad and growing range of topics, but essentially the idea is to teach the computer how to find patterns in large amounts of data, then use those patterns to make predictions. Surprisingly, the techniques that have been developed allow computers to translate languages, drive cars, recognize cats, synthesize voice, understand your music tastes, cure diseases, and even adjust your thermostat!
You might be surprised to learn that since about 2010, the entire artificial intelligence and machine learning community has reorganized around a surprisingly small and common toolbox for all of these problems. So, lets dive in to this toolbox!
### Classification
One of the most fundamental ways of solving problems in machine learning is by recasting problems as _classification_ problems. In other words, if you can describe a problem as data that needs labels, you can use machine learning!
Machine learning will go through a phase of _training_, where data and existing labels are provided to the system. As a motivating example, imagine you have a large collection of photos that either contain hot dogs or don't. Some of your photos have already been labeled as containing a hot dog or not, and for the remaining photos we want to build a system that will automatically label them "hotdog" or "nothotdog." During training, we attempt to build a model of what exactly the essence of each label is. In this case, we will run all of our existing labeled photos through the system so it can learn what makes a hot dog a hot dog.
After training, we run the unseen photos through the model and use the model to generate classifications. If you provide a new photo to your hotdog/nothotdog model, your model should be able to tell you if the photo contains a hot dog, assuming your model had a good training data set and was able to capture the core concept of what a hot dog is.
Many different types of problems can be described as classification problems. As an example, perhaps you want to predict which word comes next in a sequence. Given four input words, a classifier can label those four words as “likely the fourth word follows the last three words” or “not likely.” Alternatively, the classification label for three words could be the most likely word to follow those three.
### How I learned to stop worrying and love multinomial logistic classification
Okay, lets do the simplest thing we can think of to take input data and classify it.
Lets imagine our data that we want to classify is a big list of values. If what we have is a 16 by 16 pixel picture, were going to just put all the pixels in one big row so we have 256 pixel values in a row. So well say \\(\mathbf{x}\\) is a vector in 256 dimensions, and each dimension is the pixel value.
We have two labels, “hotdog” and “nothotdog.” Just like any other machine learning system, our system will never be 100% confident with a classification, so we will need to output confidence probabilities. The output of our system will be a two-dimensional vector, \\(\mathbf{p}\\). \\(p_0\\) will represent the probability that the input should be labeled “hotdog” and \\(p_1\\) will represent the probability that the input should be labeled “nothotdog.”
How do we take a vector in 256 (or \\(\dim(\mathbf{x})\\)) dimensions and make something in just 2 (or \\(\dim(\mathbf{p})\\)) dimensions? Why, [matrix multiplication][2] of course! If you have a matrix with 2 rows and 256 columns, multiplying it by a 256-dimensional vector will result in a 2-dimensional one.
Surprisingly, this is actually really close to the final construction of our classifier, but there are two problems:
1. If one of the input \\(\mathbf{x}\\)s is all zeros, the output will have to be zeros. But we need one of the output dimensions to not be zero!
2. Theres nothing guaranteeing the probabilities in the output will be non-negative and all sum to 1.
The first problem is easy, we add a bias vector \\(\mathbf{b}\\), turning our matrix multiplication into a standard linear equation of the form \\(\mathbf{W}\cdot\mathbf{x}+\mathbf{b}=\mathbf{y}\\).
The second problem can be solved by using the [softmax function][3]. For a given vector \\(\mathbf{v}\\), softmax is defined as:
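\\[ \text{softmax}(\mathbf{v})_i = \frac{e^{v_i}}{\sum_{j=0}^{n-1} e^{v_j}} \\]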
In case the \\(\sum\\) scares you, \\(\sum_{j=0}^{n-1}\\) is basically a math “for loop.” All its saying is that were going to add together everything that comes after it (\\(e^{v_j}\\)) for every \\(j\\) value from 0 to \\(n-1\\).
Softmax is a neat function! The output will be a vector where the largest dimension in the input will be the closest number to 1, no dimensions will be less than zero, and all dimensions sum to 1. Here are some examples:
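For instance, \\(\text{softmax}((1.0, 2.0, 3.0)) \approx (0.09, 0.24, 0.67)\\), \\(\text{softmax}((0.0, 0.0)) = (0.5, 0.5)\\), and \\(\text{softmax}((-100.0, 0.0)) \approx (0.0, 1.0)\\): the biggest input dimension gets the bulk of the probability mass.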
Unbelievably, these are all the building blocks you need for a linear model! Lets put all the blocks together. If you already have \\(\mathbf{W}\cdot\mathbf{x}+\mathbf{b}=\mathbf{y}\\), your prediction \\(\mathbf{p}\\) can be found as \\(\text{softmax}\left(\mathbf{y}\right)\\). More fully, given an input \\(\mathbf{x}\\) and a trained model \\(\left(\mathbf{W},\mathbf{b}\right)\\), your prediction \\(\mathbf{p}\\) is:
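\\[ \mathbf{p} = \text{softmax}\left(\mathbf{W}\cdot\mathbf{x}+\mathbf{b}\right) \\]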
Once again, in this context, \\(p_0\\) is the probability given the model that the input should be labeled “hotdog” and \\(p_1\\) is the probability given the model that the input should be labeled “nothotdog.”
Its kind of amazing that all you need for good success with things even as complex as handwriting recognition is a linear model such as this one.
### Scoring
How do we find \\(\mathbf{W}\\) and \\(\mathbf{b}\\)? It might surprise you but were going to start off by guessing some random numbers and then changing them until we arent predicting things too badly (via a process known as [gradient descent][4]). But what does “too badly” mean?
Recall that we have data that weve already labeled. We already have photos labeled “hotdog” and “nothotdog” in whats called our _training set_. For each photo, were going to take whatever our current model is (\\(\mathbf{W}\\) and \\(\mathbf{b}\\)) and find \\(\mathbf{p}\\). Perhaps for one photo (that really is of a hot dog) our \\(\mathbf{p}\\) looks like this:
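\\[ \mathbf{p} = (0.4, 0.6) \\]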
This isnt great! Our model says that the photo should be labeled “nothotdog” with 60% probability, but it is a hot dog.
We need a bit more terminology. So far, we've only talked about one sample, one label, and one prediction at a time, but obviously we have lots of samples, lots of labels, and lots of predictions, and we want to score how our model does not just on one sample, but on all of our training samples. Assume we have \\(s\\) training samples, each sample has \\(d\\) dimensions, and there are \\(l\\) labels. In the case of our 16 by 16 pixel hot dog photos, \\(d = 256\\) and \\(l = 2\\). We'll refer to sample \\(i\\) as \\(\mathbf{x}^{(i)}\\), our prediction for sample \\(i\\) as \\(\mathbf{p}^{(i)}\\), and the correct label vector for sample \\(i\\) as \\(\mathbf{L}^{(i)}\\). \\(\mathbf{L}^{(i)}\\) is a vector that is all zeros except for the dimension corresponding to the correct label, where that dimension is a 1. In other words, we have \\(\text{softmax}\left(\mathbf{W}\cdot\mathbf{x}^{(i)}+\mathbf{b}\right) = \mathbf{p}^{(i)}\\) and we want \\(\mathbf{p}^{(i)}\\) to be as close to \\(\mathbf{L}^{(i)}\\) as possible, for all \\(s\\) samples.
To score our model, were going to compute something called the _average cross entropy loss_. In general, [loss][5] is used to mean how off the mark a machine learning model is. While there are many ways of calculating loss, were going to use average [cross entropy][6] because it has some nice properties.
Heres the definition of the average cross entropy loss across all samples:
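\\[ \text{loss} = -\frac{1}{s}\sum_{i=0}^{s-1}\sum_{m=0}^{l-1} L_m^{(i)} \ln\left(p_m^{(i)}\right) \\]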
All we need to do is find \\(\mathbf{W}\\) and \\(\mathbf{b}\\) that make this loss smallest. How do we do that?
### Training
As we said before, we will start \\(\mathbf{W}\\) and \\(\mathbf{b}\\) off with random values. For each value, choose a floating-point random number between -1 and 1.
Of course, well need to correct these values given the training data, and we now have enough information to describe how we will back-propagate corrections.
The plan is to process all of the training data enough times that the loss drops to an “acceptable level.” Each time through the training data well collect all of the predictions, and at the end well update \\(\mathbf{W}\\) and \\(\mathbf{b}\\) with the information weve found.
One problem that can occur is that your model might overcorrect after each run. A simple way to limit overcorrection some is to add a “learning rate”, usually designated \\(\alpha\\), which is some small fraction. You get to choose the learning rate! A good default choice for \\(\alpha\\) is 0.1.
At the end of each run through all of the training data, heres how you update \\(\mathbf{W}\\) and \\(\mathbf{b}\\):
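\\[ W_{m,n} \leftarrow W_{m,n} - \frac{\alpha}{s}\sum_{i=0}^{s-1}\left(p_m^{(i)} - L_m^{(i)}\right)x_n^{(i)} \\]

\\[ b_m \leftarrow b_m - \frac{\alpha}{s}\sum_{i=0}^{s-1}\left(p_m^{(i)} - L_m^{(i)}\right) \\]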
Just because this syntax is starting to get out of hand, lets refresh what each symbol means.
* \\(W_{m,n}\\) is the cell in weight matrix \\(\mathbf{W}\\) at row \\(m\\) and column \\(n\\).
* \\(b_m\\) is the \\(m\\)-th dimension in the “bias” vector \\(\mathbf{b}\\).
* \\(\alpha\\) is again your learning rate, 0.1, and \\(s\\) is how many training samples you have.
* \\(x_n^{(i)}\\) is the \\(n\\)-th dimension of sample \\(i\\).
* Likewise, \\(p_m^{(i)}\\) and \\(L_m^{(i)}\\) are the \\(m\\)-th dimensions of our prediction and true labels for sample \\(i\\), respectively. Remember that for each sample \\(i\\), \\(L_m^{(i)}\\) is zero for all but the dimension corresponding to the correct label, where it is 1.
If youre curious how we got these equations, we applied the [chain rule][7] to calculate partial derivatives of the total loss. Its hairy, and this problem description is already too long!
Anyway, once youve updated your \\(\mathbf{W}\\) and \\(\mathbf{b}\\), you start the whole process over!
### When do we stop?
Knowing when to stop is a hard problem. How low your loss goes is a function of your learning rate, how many iterations you run over your training data, and a huge number of other factors. On the flip side, if you train your model so your loss is too low, you run the risk of overfitting your model to your training data, so it wont work well on data it hasnt seen before.
One of the more common ways of deciding when to [stop training][8] is to have a separate validation set of samples we check our success on and stop when we stop improving. But for this problem, to keep things simple what were going to do is just keep track of how our loss changes and stop when the loss stops changing as much.
After the first 10 iterations, your loss will have changed 9 times (there was no change the first time through, since it was the first time). Take the average of those 9 changes and stop training when your loss change is less than a hundredth of the average loss change.
### Tie it all together
Alright! If youve stuck with me this far, youve learned to implement a multinomial logistic classifier using gradient descent, [back-propagation][1], and [one-hot encoding][9]. Good job!
You should now be able to write a program that takes labeled training samples, trains a model, then takes unlabeled test samples and predicts labels for them!
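To tie the math together, here is a minimal Go sketch of the whole pipeline. The function names (`predict`, `epoch`) and the toy data are mine, not part of the problem statement; it follows the choices above (random initialization in [-1, 1), one averaged update per pass through the training data, and \\(\alpha = 0.1\\)):

```
package main

import (
    "fmt"
    "math"
    "math/rand"
)

// predict computes softmax(W·x + b) for one sample x.
// W has l rows and d columns; b has l entries.
func predict(W [][]float64, b, x []float64) []float64 {
    p := make([]float64, len(b))
    sum := 0.0
    for m := range p {
        y := b[m]
        for n, xn := range x {
            y += W[m][n] * xn
        }
        // (a production version would subtract max(y) before exponentiating,
        // for numerical stability)
        p[m] = math.Exp(y)
        sum += p[m]
    }
    for m := range p {
        p[m] /= sum // normalize so the probabilities sum to 1
    }
    return p
}

// epoch runs one pass over all s samples, accumulates the gradient of the
// average cross entropy loss, applies the correction scaled by the learning
// rate alpha, and returns the loss (computed with the pre-update weights).
// label[i] is the index of the correct label for sample xs[i], i.e. the
// one-hot dimension of L^(i) that equals 1.
func epoch(W [][]float64, b []float64, xs [][]float64, label []int, alpha float64) float64 {
    s := float64(len(xs))
    l, d := len(b), len(xs[0])
    gW := make([][]float64, l)
    for m := range gW {
        gW[m] = make([]float64, d)
    }
    gb := make([]float64, l)
    loss := 0.0
    for i, x := range xs {
        p := predict(W, b, x)
        loss -= math.Log(p[label[i]]) / s
        for m := 0; m < l; m++ {
            g := p[m] // this is (p_m - L_m): L_m is 1 only at the correct label
            if m == label[i] {
                g--
            }
            for n := 0; n < d; n++ {
                gW[m][n] += g * x[n]
            }
            gb[m] += g
        }
    }
    for m := 0; m < l; m++ { // apply the averaged update once per pass
        for n := 0; n < d; n++ {
            W[m][n] -= alpha / s * gW[m][n]
        }
        b[m] -= alpha / s * gb[m]
    }
    return loss
}

func main() {
    // toy problem: two 2-dimensional samples, two labels. (a real solution
    // would use the loss-change stopping rule described above instead of a
    // fixed iteration count.)
    xs := [][]float64{{1, 0}, {0, 1}}
    label := []int{0, 1}
    W := [][]float64{
        {rand.Float64()*2 - 1, rand.Float64()*2 - 1},
        {rand.Float64()*2 - 1, rand.Float64()*2 - 1},
    }
    b := []float64{rand.Float64()*2 - 1, rand.Float64()*2 - 1}
    for i := 0; i < 1000; i++ {
        epoch(W, b, xs, label, 0.1)
    }
    fmt.Println(predict(W, b, []float64{1, 0})) // p_0 should now dominate
}
```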
### Your program
As input your program should take vectors of floating-point values, followed by a label. Some of the labels will be question marks. Your program should output the correct label for all of the question marks it sees. The label your program should output will always be one it has seen training examples of.
Your program will pass the tests if it labels 75% or more of the unlabeled data correctly.
### Where to learn more
If you want to learn more or dive deeper into optimizing your solution, you may be interested in the first section of [Udacitys free course on Deep Learning][10], or [Dom Lumas tutorial on building a mini-TensorFlow][11].
### Example
#### Input
```
0.93 -1.52 1.32 0.05 1.72 horse
1.57 -1.74 0.92 -1.33 -0.68 staple
0.18 1.24 -1.53 1.53 0.78 other
1.96 -1.29 -1.50 -0.19 1.47 staple
1.24 0.15 0.73 -0.22 1.15 battery
1.41 -1.56 1.04 1.09 0.66 horse
-0.70 -0.93 -0.18 0.75 0.88 horse
1.12 -1.45 -1.26 -0.43 -0.05 staple
1.89 0.21 -1.45 0.47 0.62 other
-0.60 -1.87 0.82 -0.66 1.86 staple
-0.80 -1.99 1.74 0.65 1.46 horse
-0.03 1.35 0.11 -0.92 -0.04 battery
-0.24 -0.03 0.58 1.32 -1.51 horse
-0.60 -0.70 1.61 0.56 -0.66 horse
1.29 -0.39 -1.57 -0.45 1.63 staple
0.87 1.59 -1.61 -1.79 1.47 battery
1.86 1.92 0.83 -0.34 1.06 battery
-1.09 -0.81 1.47 1.82 0.06 horse
-0.99 -1.00 -1.45 -1.02 -1.06 staple
-0.82 -0.56 0.82 0.79 -1.02 horse
-1.86 0.77 -0.58 0.82 -1.94 other
0.15 1.18 -0.87 0.78 2.00 other
1.18 0.79 1.08 -1.65 -0.73 battery
0.37 1.78 0.01 0.06 -0.50 other
-0.35 0.31 1.18 -1.83 -0.57 battery
0.91 1.14 -1.85 0.39 0.07 other
-1.61 0.28 -0.31 0.93 0.77 other
-0.11 -1.75 -1.66 -1.55 -0.79 staple
0.05 1.03 -0.23 1.49 1.66 other
-1.99 0.43 -0.99 1.72 0.52 other
-0.30 0.40 -0.70 0.51 0.07 other
-0.54 1.92 -1.13 -1.53 1.73 battery
-0.52 0.44 -0.84 -0.11 0.10 battery
-1.00 -1.82 -1.19 -0.67 -1.18 staple
-1.81 0.10 -1.64 -1.47 -1.86 battery
-1.77 0.53 -1.28 0.55 -1.15 other
0.29 -0.28 -0.41 0.70 1.80 horse
-0.91 0.02 1.60 -1.44 -1.89 battery
1.24 -0.42 -1.30 -0.80 -0.54 staple
-1.98 -1.15 0.54 -0.14 -1.24 staple
1.26 -1.02 -1.08 -1.27 1.65 ?
1.97 1.14 0.51 0.96 -0.36 ?
0.99 0.14 -0.97 -1.90 -0.87 ?
1.54 -1.83 1.59 1.98 -0.41 ?
-1.81 0.34 -0.83 0.90 -1.60 ?
```
#### Output
```
staple
other
battery
horse
other
```
--------------------------------------------------------------------------------
via: https://www.jtolio.com/2018/05/multinomial-logistic-classification
作者:[jtolio.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Backpropagation
[2]: https://en.wikipedia.org/wiki/Matrix_multiplication
[3]: https://en.wikipedia.org/wiki/Softmax_function
[4]: https://en.wikipedia.org/wiki/Gradient_descent
[5]: https://en.wikipedia.org/wiki/Loss_function
[6]: https://en.wikipedia.org/wiki/Cross_entropy
[7]: https://en.wikipedia.org/wiki/Chain_rule
[8]: https://en.wikipedia.org/wiki/Early_stopping
[9]: https://en.wikipedia.org/wiki/One-hot
[10]: https://classroom.udacity.com/courses/ud730
[11]: https://nbviewer.jupyter.org/github/domluna/labs/blob/master/Build%20Your%20Own%20TensorFlow.ipynb


@ -0,0 +1,151 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Some notes on running new software in production)
[#]: via: (https://jvns.ca/blog/2018/11/11/understand-the-software-you-use-in-production/)
[#]: author: (Julia Evans https://jvns.ca/)
Some notes on running new software in production
======
Im working on a talk for kubecon in December! One of the points I want to get across is the amount of time/investment it takes to use new software in production without causing really serious incidents, and what thats looked like for us in our use of Kubernetes.
To start out, this post isnt blanket advice. There are lots of times when its totally fine to just use software and not worry about **how** it works exactly. So lets start by talking about when its important to invest.
### when it matters: 99.99%
If you're running a service with a low SLO like 99%, I don't think it matters that much to understand the software you run in production. You can be down for like 7 hours a month! If something goes wrong, just fix it and it's fine.
At 99.99%, it's different. That's about 50 minutes / year of downtime, and if you find out about a serious issue for the first time in production it could easily take you 20 minutes or more to revert the change. That's half your uptime budget for the year!
### when it matters: software that youre using heavily
Also, even if youre running a service with a 99.99% SLO, its impossible to develop a super deep understanding of every single piece of software youre using. For example, a web service might use:
* 100 library dependencies
* the filesystem (so theres linux filesystem code!)
* the network (linux networking code!)
* a database (like postgres)
* a proxy (like nginx/haproxy)
If you're only reading like 2 files from disk, you don't need to do a super deep dive into Linux filesystem internals, you can just read the files from disk.
What I try to do in practice is identify the components which we rely on the most (or have the most unusual use cases for!), and invest time into understanding those. These are usually pretty easy to identify because they're the ones which will cause the most problems :)
### when it matters: new software
Understanding your software especially matters for newer/less mature software projects, because it's more likely to have bugs or just not have matured enough to be used by most people without having to worry. I've spent a bunch of time recently with Kubernetes/Envoy which are both relatively new projects, and neither of those are remotely in the category of "oh, it'll just work, don't worry about it". I've spent many hours debugging weird surprising edge cases with both of them and learning how to configure them in the right way.
### a playbook for understanding your software
The playbook for understanding the software you run in production is pretty simple. Here it is:
1. Start using it in production in a non-critical capacity (by sending a small percentage of traffic to it, on a less critical service, etc)
2. Let that bake for a few weeks.
3. Run into problems.
4. Fix the problems. Go to step 3.
Repeat until you feel like you have a good handle on this softwares failure modes and are comfortable running it in a more critical capacity. Lets talk about that in a little more detail, though:
### what running into bugs looks like
For example, Ive been spending a lot of time with Envoy in the last year. Some of the issues weve seen along the way are: (in no particular order)
* One of the default settings resulted in retry & timeout headers not being respected
* Envoy (as a client) doesnt support TLS session resumption, so servers with a large amount of Envoy clients get DDOSed by TLS handshakes
* Envoy's active healthchecking means that your services get healthchecked by every client. This is mostly okay but (again) services with many clients can get overwhelmed by it.
* Having every client independently healthcheck every server interacts somewhat poorly with services which are under heavy load, and can exacerbate performance issues by removing up-but-slow servers from the load balancer rotation.
* Envoy doesnt retry failed connections by default
* it frequently segfaults when given incorrect configuration
* various issues with it segfaulting because of resource leaks / memory safety issues
* hosts running out of disk space because we didn't rotate Envoy log files often enough
A lot of these aren't bugs; they're just cases where we expected the default configuration to do one thing, and it did another. This happens all the time, and it can result in really serious incidents. Figuring out how to configure a complicated piece of software appropriately takes a lot of time, and you just have to account for that.
And Envoy is great software! The maintainers are incredibly responsive, they fix bugs quickly and its performance is good. Its overall been quite stable and its done well in production. But just because something is great software doesnt mean you wont also run into 10 or 20 relatively serious issues along the way that need to be addressed in one way or another. And its helpful to understand those issues **before** putting the software in a really critical place.
### try to have each incident only once
My view is that running new software in production inevitably results in incidents. The trick:
1. Make sure the incidents arent too serious (by making production a less critical system first)
2. Whenever theres an incident (even if its not that serious!!!), spend the time necessary to understand exactly why it happened and how to make sure it doesnt happen again
My experience so far has been that its actually relatively possible to pull off “have every incident only once”. When we investigate issues and implement remediations, usually that issue **never comes back**. The remediation can either be:
* a configuration change
* reporting a bug upstream and either fixing it ourselves or waiting for a fix
* a workaround ("this software doesn't work with 10,000 clients? ok, we just won't use it in cases where there are that many clients for now!", "oh, a memory leak? let's just restart it every hour")
Knowledge-sharing is really important here too: it's always unfortunate when one person finds an incident in production, fixes it, but doesn't explain the issue to the rest of the team, so somebody else ends up causing the same incident again later because they didn't hear about the original incident.
### Understand what is ok to break and isnt
Another huge part of understanding the software I run in production is understanding which parts are OK to break (aka “if this breaks, it wont result in a production incident”) and which arent. This lets me **focus**: I can put big boxes around some components and decide “ok, if this breaks it doesnt matter, so I wont pay super close attention to it”.
For example, with Kubernetes:
ok to break:
* any stateless control plane component can crash or be cycled out or go down for 5 minutes at any time. If we had 95% uptime for the kubernetes control plane that would probably be fine, it just needs to be working most of the time.
* kubernetes networking (the system where you give every pod an IP address) can break as much as it wants, because we decided not to use it to start
not ok:
* for us, if etcd goes down for 10 minutes, thats ok. If it goes down for 2 hours, its not
* containers not starting or crashing on startup (iam issues, docker not starting containers, bugs in the scheduler, bugs in other controllers) is serious and needs to be looked at immediately
* containers not having access to the resources they need (because of permissions issues, etc)
* pods being terminated unexpectedly by Kubernetes (if you configure kubernetes wrong it can terminate your pods!)
with Envoy, the breakdown is pretty different:
ok to break:
* if the envoy control plane goes down for 5 minutes, thats fine (itll keep working with stale data)
* segfaults on startup due to configuration errors are sort of okay because they manifest so early and theyre unlikely to surprise us (if the segfault doesnt happen the 1st time, it shouldnt happen the 200th time)
not ok:
* Envoy crashes / segfaults are not good: if it crashes, network connections don't happen
* if the control server serves incorrect or incomplete data thats extremely dangerous and can result in serious production incidents. (so downtime is fine, but serving incorrect data is not!)
Neither of these lists are complete at all, but they're examples of what I mean by "understand your software".
### sharing ok to break / not ok lists is useful
I think these “ok to break” / “not ok” lists are really useful to share, because even if theyre not 100% the same for every user, the lessons are pretty hard won. Id be curious to hear about your breakdown of what kinds of failures are ok / not ok for software youre using!
Figuring out all the failure modes of a new piece of software and how they apply to your situation can take months. (this is why when you ask your database team "hey can we just use NEW DATABASE" they look at you in such a pained way). So anything we can do to help other people learn faster is amazing.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/11/11/understand-the-software-you-use-in-production/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An example of how C++ destructors are useful in Envoy)
[#]: via: (https://jvns.ca/blog/2018/11/18/c---destructors---really-useful/)
[#]: author: (Julia Evans https://jvns.ca/)
An example of how C++ destructors are useful in Envoy
======
For a while now I've been working with a C++ project (Envoy), and sometimes I need to contribute to it, so my C++ skills have gone from "nonexistent" to "really minimal". I've learned what an initializer list is and that a method starting with `~` is a destructor. I almost know what an lvalue and an rvalue are, but not quite.
But the other day when writing some C++ code I figured out something exciting about how to use destructors that I hadn't realized! (the tl;dr of this post for people who know C++ is "julia finally understands what RAII is and that it is useful" :))
### what's a destructor?
C++ has objects. When a C++ object goes out of scope, the compiler inserts a call to its destructor. So if you have some code like
```
int do_thing() {
Thing x{}; // this calls the Thing constructor
return 2;
}
```
there will be a call to x's destructor at the end of the `do_thing` function. So the code C++ generates looks something like:
* make new thing
* call the new thing's destructor
* return 2
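here's a tiny runnable example of that (my made-up `Thing`, not anything from a real codebase), where the constructor and destructor print so you can see exactly when the destructor fires:
```
#include <iostream>

// Hypothetical Thing class: it prints from the constructor and destructor
// so we can watch when each one runs.
struct Thing {
    Thing() { std::cout << "constructor\n"; }
    ~Thing() { std::cout << "destructor\n"; }
};

int do_thing() {
    Thing x{};   // constructor runs here
    return 2;    // destructor runs here, as x goes out of scope
}

int main() {
    do_thing();  // prints "constructor", then "destructor"
}
```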
Obviously destructors are way more complicated than this. They need to get called when there are exceptions! And sometimes they get called manually. And for lots of other reasons too. But there are 10 million things to know about C++ and that is not what we're doing today, we are just talking about one thing.
### what happens in a destructor?
A lot of the time memory gets freed, which is how you avoid having memory leaks. But that's not what we're talking about in this post! We are talking about something more interesting.
### the thing we're interested in: Envoy circuit breakers
So I've been working with Envoy a lot. 3 second Envoy refresher: it's an HTTP proxy; your application makes requests to Envoy, which then proxies the request to the servers the application wants to talk to.
One very useful feature Envoy has is this thing called "circuit breakers". Basically the idea is that if your application makes 50 billion connections to a service, that will probably overwhelm the service. So Envoy keeps track of how many TCP connections you've made to a service, and will stop you from making new requests if you hit the limit. There's a default `max_connection` limit.
### how do you track connection count?
To maintain a circuit breaker on the number of TCP connections, you need to keep an accurate count of how many TCP connections are currently open! How do you do that? Well, the way it works is to maintain a `connections` counter and:
* every time a connection is opened, increment the counter
* every time a connection is destroyed (because of a reset / timeout / whatever), decrement the counter
* when creating a new connection, check that the `connections` counter is not over the limit
that's all! And incrementing the counter when creating a new connection is pretty easy. But how do you make sure that the counter gets _decremented_ when the connection is destroyed? Connections can be destroyed in a lot of ways (they can time out! they can be closed by Envoy! they can be closed by the server! maybe something else I haven't thought of could happen!) and it seems very easy to accidentally miss a way of closing them.
### destructors to the rescue
The way Envoy solves this problem is to create a connection object (called `ActiveClient` in the HTTP connection pool) for every connection.
Then it:
* increments the counter in the constructor ([code][1])
* decrements the counter in the destructor ([code][2])
* checks the counter when a new connection is created ([code][3])
The beauty of this is that now you don't need to make sure that the counter gets decremented in all the right places, you just need to organize your code so that the `ActiveClient` object's destructor gets called when the connection has closed.
Where does the `ActiveClient` destructor get called in Envoy? Well, Envoy maintains 2 lists of clients (`ready_clients` and `busy_clients`), and when a connection gets closed, Envoy removes the client from those lists. And when it does that, it doesn't need to do any extra cleanup!! In C++, anytime an object is removed from a list, its destructor is called. So `client.removeFromList(ready_clients_);` takes care of all the cleanup. And there's no chance of forgetting to decrement the counter!! It will definitely always happen, unless you accidentally leave the object on one of these lists, which would be a bug anyway because the connection is closed :)
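Here's a minimal sketch of the pattern (my made-up `ActiveConnection` / `ConnectionCounter` names, not Envoy's actual `ActiveClient` code): the counter is incremented in the constructor and decremented in the destructor, so it stays correct no matter *how* the connection object goes away.
```
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical connection counter with a circuit-breaker-style limit.
struct ConnectionCounter {
    int open = 0;
    int limit = 2;
    bool can_create() const { return open < limit; }  // the circuit breaker check
};

class ActiveConnection {
public:
    explicit ActiveConnection(ConnectionCounter& counter) : counter_(counter) {
        ++counter_.open;                      // connection opened: increment
    }
    ~ActiveConnection() { --counter_.open; }  // destroyed for any reason: decrement
private:
    ConnectionCounter& counter_;
};

int main() {
    ConnectionCounter counter;
    std::vector<std::unique_ptr<ActiveConnection>> clients;
    while (counter.can_create())
        clients.push_back(std::make_unique<ActiveConnection>(counter));
    assert(counter.open == 2);
    clients.pop_back();           // removing from the list runs the destructor...
    assert(counter.open == 1);    // ...so the count is automatically correct
}
```
Notice that `pop_back` is the only "cleanup" code in `main`: the destructor takes care of the counter.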
### RAII
This pattern Envoy is using here is an extremely common C++ programming pattern called "resource acquisition is initialization". I find that name very confusing but that's what it's called. Basically the way it works is:
* identify a resource (like “connection”) where a lot of things need to happen when the connection is initialized / finished
* make a class for that connection
* put all the initialization / finishing code in the constructor / destructor
* make sure the object's destructor method gets called when appropriate! (by removing it from a vector / having it go out of scope)
Previously I knew about using this pattern for kind of obvious things (make sure all the memory gets freed in the destructor, or make sure file descriptors get closed). But I didn't realize it was also useful for cases that are slightly less obviously a resource, like "decrement a counter".
The reason this pattern works is because the C++ compiler/standard library does a bunch of work to make sure that destructors get called when you're done with an object: the compiler inserts destructor calls at the end of each block of code and after exceptions, and many standard library collections make sure destructors are called when you remove an object from a collection.
### RAII gives you prompt, deterministic, and hard-to-screw-up cleanup of resources
The exciting thing here is that this programming pattern gives you a way to schedule cleaning up resources that's:
* easy to ensure always happens (when the object goes away, it always happens, even if there was an exception!)
* prompt & deterministic (it happens right away and it's guaranteed to happen!)
### what languages have RAII?
C++ and Rust have RAII. Probably other languages too. Java, Python, Go, and garbage collected languages in general do not. In a garbage collected language you can often set up destructors to be run when the object is GC'd. But often (like in this case, with the connection count) you want things to be cleaned up **right away** when the object is no longer in use, not some indeterminate period later whenever GC happens to run.
Python context managers are a related idea, you could do something like:
```
with conn_pool.connection() as conn:
    do_stuff(conn)  # cleanup runs when the block exits, even on an exception
```
### that's all for now!
Hopefully this explanation of RAII is interesting and mostly correct. Thanks to Kamal for clarifying some RAII things for me!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/11/18/c---destructors---really-useful/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://github.com/envoyproxy/envoy/blob/200b0e41641be46471c2ce3d230aae395fda7ded/source/common/http/http1/conn_pool.cc#L301
[2]: https://github.com/envoyproxy/envoy/blob/200b0e41641be46471c2ce3d230aae395fda7ded/source/common/http/http1/conn_pool.cc#L315
[3]: https://github.com/envoyproxy/envoy/blob/200b0e41641be46471c2ce3d230aae395fda7ded/source/common/http/http1/conn_pool.cc#L97

View File

@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How do you document a tech project with comics?)
[#]: via: (https://jvns.ca/blog/2018/12/09/how-do-you-document-a-tech-project-with-comics/)
[#]: author: (Julia Evans https://jvns.ca/)
How do you document a tech project with comics?
======
Every so often I get email from people saying basically "hey julia! we have an open source project! we'd like to use comics / zines / art to document our project! Can we hire you?".
spoiler: the answer is "no, you can't hire me". I don't do commissions. But I do think this is a cool idea and I've often wished I had something more useful to say to people than "no", so if you're interested in this, here are some ideas about how to accomplish it!
### zine != drawing
First, a terminology distinction. One weird thing I've noticed is that people frequently refer to individual tech drawings as "zines". I think this is due to me communicating poorly somehow, but drawings are not zines! A zine is a **printed booklet**, like a small maga**zine**. You wouldn't call a photo of a model in Vogue a magazine! The magazine has like a million pages! An individual drawing is a drawing/comic/graphic/whatever. Just clarifying this because I think it causes a bit of unnecessary confusion.
### comics without good information are useless
Usually when folks ask me "hey, could we make a comic explaining X", it doesn't seem like they have a clear idea of what information exactly they want to get across, they just have a vague idea that maybe it would be cool to draw some comics. This makes sense: figuring out what information would be useful to tell people is very hard!! It's 80% of what I spend my time on when making comics.
You should think about comics the same way as any kind of documentation: start with the information you want to convey, who your target audience is, and how you want to distribute it (twitter? on your website? in person?), and figure out how to illustrate it after :). The information is the main thing, not the art!
Once you have a clear story about what you want to get across, you can start trying to think about how to represent it using illustrations!
### focus on concepts that don't change
Drawing comics is a much bigger investment than writing documentation (it takes me like 5x longer to convey the same information in a comic than in writing). So use it wisely! Because it's not that easy to edit, if you're going to make something a comic you want to focus on concepts that are very unlikely to change. So talk about the core ideas in your project instead of the exact command line arguments it takes!
Here are a couple of options for how you could use comics/illustrations to document your project!
### option 1: a single graphic
One format you might want to try is a single, small graphic explaining what your project is about and why folks might be interested in it. For example: [this zulip comic][1]
This is a short thing, you could post it on Twitter or print it as a pamphlet to give out. The information content here would probably be basically what's on your project homepage, but presented in a more fun/exciting way :)
You can put a pretty small amount of information in a single comic. With that Zulip comic, the things I picked out were:
* zulip is sort of like slack, but it has threads
* it's easy to keep track of threads even if the conversation takes place over several days
* you can much more easily selectively catch up with Zulip
* zulip is open source
* there's an open zulip server you can try out
That's not a lot of information! It's 50 words :). So to do this effectively you need to distill your project down to 50 words in a way that's still useful. It's not easy!
### option 2: many comics
Another approach you can take is to make a more in-depth comic / illustration, like [google's guide to kubernetes][2] or [the children's illustrated guide to kubernetes][3].
To do this, you need a much stronger concept than "uh, I want to explain our project": you want to have a clear target audience in mind! For example, if I were drawing a set of Docker comics, I'd probably focus on folks who want to use Docker in production. So I'd want to discuss:
* publishing your containers to a public/private registry
* some best practices for tagging your containers
* how to make sure your hosts dont run out of disk space from downloading too many containers
* how to use layers to save on disk space / download less stuff
* whether it's reasonable to run the same containers in production & in dev
That's totally different from the set of comics I'd write for folks who just want to use Docker to develop locally!
### option 3: a printed zine
The main thing that differentiates this from "many comics" is that zines are printed! Because of that, for this to make sense you need to have a place to give out the printed copies! Maybe you're going to present your project at a major conference? Maybe you give workshops about your project and want to give out the zine to folks in the workshop as notes? Maybe you want to mail it to people?
### how to hire someone to help you
There are basically 3 ways to hire someone:
1. Hire someone who both understands (or can quickly learn) the technology you want to document and can illustrate well. These folks are tricky to find and probably expensive (I certainly wouldn't do a project like this for less than $10,000 even if I did do commissions), just because programmers can usually charge a pretty high consulting rate. I'd guess that the main failure mode here is that it might be impossible/very hard to find someone, and it might be expensive.
2. Collaborate with an illustrator to draw it for you. The main failure mode here is that if you don't give the illustrator clear explanations of your tech to work with, you... won't end up with a clear and useful explanation. From what I've seen, **most folks underinvest in writing clear explanations for their illustrators**: I've seen a few really adorable tech comics that I don't find useful or clear at all. I'd love to see more people do a better job of this. What's the point of having an adorable illustration if it doesn't teach anyone anything? :)
3. Draw it yourself :). This is what I do, obviously. stick figures are okay!
Most people seem to use method #2. I'm not actually aware of any tech folks who have done commissioned comics (though I'm sure it's happened!). I think method #2 is a great option and I'd love to see more folks do it. Paying illustrators is really fun!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/12/09/how-do-you-document-a-tech-project-with-comics/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/b0rk/status/986444234365521920
[2]: https://cloud.google.com/kubernetes-engine/kubernetes-comic/
[3]: https://thenewstack.io/kubernetes-gets-childrens-book/

View File

@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (New talk: High Reliability Infrastructure Migrations)
[#]: via: (https://jvns.ca/blog/2018/12/15/new-talk--high-reliability-infrastructure-migrations/)
[#]: author: (Julia Evans https://jvns.ca/)
New talk: High Reliability Infrastructure Migrations
======
On Tuesday I gave a talk at KubeCon called [High Reliability Infrastructure Migrations][1]. The abstract was:
> For companies with high availability requirements (99.99% uptime or higher), running new software in production comes with a lot of risks. But it's possible to make significant infrastructure changes while maintaining the availability your customers expect! I'll give you a toolbox for derisking migrations and making infrastructure changes with confidence, with examples from our Kubernetes & Envoy experience at Stripe.
### video
#### slides
Here are the slides:
since everyone always asks, I drew them in the Notability app on an iPad. I do this because it's faster than trying to use regular slides software and I can make better slides.
### a few notes
Here are a few links & notes about things I mentioned in the talk.
#### skycfg: write functions, not YAML
I talked about how my team is working on non-YAML interfaces for configuring Kubernetes. The demo is at [skycfg.fun][2], and it's [on GitHub here][3]. It's based on [Starlark][4], a configuration language that's a subset of Python.
My coworker [John][5] has promised that he'll write a blog post about it at some point, and I'm hoping that's coming soon :)
#### no haunted forests
I mentioned a deploy system rewrite we did. John has a great blog post about when rewrites are a good idea and how he approached that rewrite called [no haunted forests][6].
#### ignore most kubernetes ecosystem software
One small point that I made in the talk was that on my team we ignore almost all software in the Kubernetes ecosystem so that we can focus on a few core pieces (Kubernetes & Envoy, plus some small things like kiam). I wanted to mention this because I think often in Kubernetes land it can seem like everyone is using Cool New Things (helm! istio! knative! eep!). I'm sure those projects are great, but I find it much simpler to stay focused on the basics, and I wanted people to know that it's okay to do that if that's what works for your company.
I think the reality is that a lot of folks are actually still trying to work out how to use this new software in a reliable and secure way.
#### other talks
I haven't watched other Kubecon talks yet, but here are 2 links:
I heard good things about [this keynote from melanie cebula about kubernetes at airbnb][7], and I'm excited to see [this talk about kubernetes security][8]. The [slides from that security talk look useful][9].
Also I'm very excited to see Kelsey Hightower's keynote as always, but that recording isn't up yet. If you have other Kubecon talks to recommend I'd love to know what they are.
#### my first work talk I'm happy with
I usually give talks about debugging tools, or side projects, or how I approach my job at a high level, not about the actual work that I do at my job. What I talked about in this talk is basically what I've been learning how to do at work for the last ~2 years. Figuring out how to make big infrastructure changes safely took me a long time (and I'm not done!), and so I hope this talk helps other folks do the same thing.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/12/15/new-talk--high-reliability-infrastructure-migrations/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/watch?v=obB2IvCv-K0
[2]: http://skycfg.fun
[3]: https://github.com/stripe/skycfg
[4]: https://github.com/bazelbuild/starlark
[5]: https://john-millikin.com/
[6]: https://john-millikin.com/sre-school/no-haunted-forests
[7]: https://www.youtube.com/watch?v=ytu3aUCwlSg&index=127&t=0s&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU
[8]: https://www.youtube.com/watch?v=a03te8xEjUg&index=65&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU&t=0s
[9]: https://schd.ws/hosted_files/kccna18/1c/KubeCon%20NA%20-%20This%20year%2C%20it%27s%20about%20security%20-%2020181211.pdf

View File

@ -0,0 +1,173 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (2018: Year in review)
[#]: via: (https://jvns.ca/blog/2018/12/23/2018--year-in-review/)
[#]: author: (Julia Evans https://jvns.ca/)
2018: Year in review
======
I wrote these in [2015][1] and [2016][2] and [2017][3] and it's always interesting to look back at them, so here's a summary of what went on in my side projects in 2018.
### ruby profiler!
At the beginning of this year I wrote [rbspy][4] (docs: <https://rbspy.github.io/>). It inspired a Python version called [py-spy][5] and a PHP profiler called [phpspy][6], both of which are excellent. I think py-spy in particular is [probably _better_][7] than rbspy which makes me really happy.
Writing a program that does something innovative (`top` for your Ruby program's functions!) and inspiring other people to make amazing new tools is something I'm really proud of.
### started a side business!
A very surprising thing that happened in 2018 is that I started a business! This is the website: <https://wizardzines.com/>, and I sell programming zines.
It's been astonishingly successful (it definitely made me enough money that I could have lived on just the revenue from the business this year), and I'm really grateful to everyone who's supported that work. I hope the zines have helped you. I always thought that it was impossible to make anywhere near as much money teaching people useful things as I can as a software developer, and now I think that's not true. I don't think that I'd _want_ to make that switch (I like working as a programmer!), but now I actually think that if I was serious about it and was interested in working on my business skills, I could probably make it work.
I don't really know what's next, but I plan to write at least one zine next year. I learned a few things about business this year, mainly from:
* [stephanie hurlburt's twitter][8]
* [amy hoy][9]
* the book [growing a business by paul hawken][10]
* seeing what joel hooks is doing with [egghead.io][11]
* a little from [indie hackers][12]
I used to think that sales / marketing had to be gross, but reading some of these business books made me think that it's actually possible to run a business by being honest & just building good things.
### work!
this is mostly about side projects, but a few things about work:
* I still have the same manager ([jay][13]). He's been really great to work with. The [help! i have a manager!][14] zine is secretly largely things I learned from working with him.
* my team made some big networking infrastructure changes and it went pretty well. I learned a lot about proxies/TLS and a little bit about C++.
* I mentored another intern, and the intern I mentored last year joined us full time!
When I go back to work I'm going to switch to working on something COMPLETELY DIFFERENT (writing code that sends messages to banks!) for 3 months. It's a lot closer to the company's core business, and I think it'll be neat to learn more about how financial infrastructure works.
I struggled a bit with understanding/defining my job this year. I wrote [What's a senior engineer's job?][15] about that, but I have not yet reached enlightenment.
### talks!
I gave 4 talks in 2018:
* [So you want to be a wizard][16] at StarCon
* [Building a Ruby profiler][17] at the Recurse Center's localhost series
* [Build Impossible Programs][18] in May at Deconstruct.
* [High Reliability Infrastructure Migrations][19] at Kubecon. I'm pretty happy about this talk because I've wanted to give a good talk about what I do at work for a long time and I think I finally succeeded. Previously when I gave talks about my work I think I fell into the trap of just describing what we do ("we do X Y Z" … "okay, so what?"). With this one, I think I was able to actually say things that were useful to other people.
In past years I've mostly given talks which can mostly be summarized "here are some cool tools" and "here is how to learn hard things". This year I changed focus to giving talks about the actual work I do: there were two talks about building a Ruby profiler, and one about what I do at work (I spend a lot of time on infrastructure migrations!)
I'm not sure whether I'll give any talks in 2019. I travelled more than I wanted to in 2018, and to stay sane I ended up having to cancel on a talk I was planning to give with relatively short notice, which wasn't good.
### podcasts!
I also experimented a bit with a new format: the podcast! These were basically all really fun! They don't take that long (about 2 hours total?).
* [Software Engineering Daily][20], on rbspy and how to use a profiler
* [FLOSS weekly][21], again about rbspy. They told me I'm the guest that asked _them_ the most questions, which I took as a compliment :)
* [CodeNewbie][22] on computer networking &amp; how the Internet works
* [Hanselminutes with Scott Hanselman][23] on writing zines / teaching / learning
* [egghead.io][24], on making zines &amp; running a business
what I learned about doing podcasts:
* It's really important to give the hosts a list of good questions to ask, and to be prepared to give good answers to those questions! I'm not a super polished podcast guest.
* you need a good microphone. At least one of these people told me I actually couldn't be on their podcast unless I had a good enough microphone, so I bought a [medium fancy microphone][25]. It wasn't too expensive and it's nice to have a better quality microphone! Maybe I will use it more to record audio/video at some point!
### !!Con
I co-organized [!!Con][26] for the 4th time: I ran sponsorships. It's always such a delight and the speakers are so great.
!!Con is expanding [to the west coast in 2019][27]. I'm not directly involved with that but it's going to be amazing.
### blog posts!
I apparently wrote 54 blog posts in 2018. A couple of my favourites are [What's a senior engineer's job?][15], [How to teach yourself hard things][28], and [batch editing files with ed][29].
There were basically 4 themes in blogging for 2018:
* progress on the rbspy project while I was working on it ([this category][30])
* computer networking / infrastructure engineering (basically all I did at work this year was networking, though I didn't write about it as much as I might have)
* musings about zines / business / developer education, for instance [why sell zines?][31] and [who pays to educate developers?][32]
* a few of the usual "how do you learn things" / "how do you succeed at your job" posts as I figure things out about that, for instance [working remotely, 4 years in][33]
### a tiny inclusion project: a guide to performance reviews
[Last year][3], in addition to my actual job, I did a couple of projects at work towards helping make sure the performance/promotion process works well for folks: I collaborated with the amazing [karla][34] on the idea of a "brag document", and redid our engineering levels.
This year, in the same vein, I wrote a document called the "Unofficial guide to the performance reviews". A lot of folks said it helped them, but it's probably too early to celebrate. I think explaining to folks how the performance review process actually works and how to approach it is really valuable, and I might try to publish a more general version here at some point.
I like that I work at a place where it's possible/encouraged to do projects like this. I spend a relatively small amount of time on them (maybe I spent 15 hours on this one?) but it feels good to be able to make tiny steps towards building a better workplace from time to time. It's really hard to judge the results though!
### conclusions?
some things that worked in 2018:
* setting [boundaries][15] around what my job is
* doing open source work while being paid for it
* starting a side business
* doing small inclusion projects at work
* writing zines is very time consuming but I feel happy about the time I spent on that
* blogging is always great
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/12/23/2018--year-in-review/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2015/12/26/2015-year-in-review/
[2]: https://jvns.ca/blog/2016/12/21/2016--year-in-review/
[3]: https://jvns.ca/blog/2017/12/31/2017--year-in-review/
[4]: https://github.com/rbspy/rbspy
[5]: https://github.com/benfred/py-spy
[6]: https://github.com/adsr/phpspy/
[7]: https://jvns.ca/blog/2018/09/08/an-awesome-new-python-profiler--py-spy-/
[8]: https://twitter.com/sehurlburt
[9]: https://stackingthebricks.com/
[10]: https://www.amazon.com/Growing-Business-Paul-Hawken/dp/0671671642
[11]: https://egghead.io/
[12]: https://www.indiehackers.com/
[13]: https://twitter.com/jshirley
[14]: https://wizardzines.com/zines/manager/
[15]: https://jvns.ca/blog/senior-engineer/
[16]: https://www.youtube.com/watch?v=FBMC9bm-KuU
[17]: https://jvns.ca/blog/2018/04/16/rbspy-talk/
[18]: https://www.deconstructconf.com/2018/julia-evans-build-impossible-programs
[19]: https://www.youtube.com/watch?v=obB2IvCv-K0
[20]: https://softwareengineeringdaily.com/2018/06/05/profilers-with-julia-evans/
[21]: https://twit.tv/shows/floss-weekly/episodes/487
[22]: https://www.codenewbie.org/podcast/how-does-the-internet-work
[23]: https://hanselminutes.com/643/learning-how-to-be-a-wizard-programmer-with-julia-evans
[24]: https://player.fm/series/eggheadio-developer-chats-1728019/exploring-concepts-and-teaching-using-focused-zines-with-julia-evans
[25]: https://www.amazon.com/gp/product/B000EOPQ7E/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B000EOPQ7E&linkCode=as2&tag=diabeticbooks&linkId=ZBZBIVR4EB7V6JFL
[26]: http://bangbangcon.com
[27]: http://bangbangcon.com/west/
[28]: https://jvns.ca/blog/2018/09/01/learning-skills-you-can-practice/
[29]: https://jvns.ca/blog/2018/05/11/batch-editing-files-with-ed/
[30]: https://jvns.ca/categories/ruby-profiler/
[31]: https://jvns.ca/blog/2018/09/23/why-sell-zines/
[32]: https://jvns.ca/blog/2018/09/01/who-pays-to-educate-developers-/
[33]: https://jvns.ca/blog/2018/02/18/working-remotely--4-years-in/
[34]: https://karla.io/

View File

@ -0,0 +1,178 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Some nonparametric statistics math)
[#]: via: (https://jvns.ca/blog/2018/12/29/some-initial-nonparametric-statistics-notes/)
[#]: author: (Julia Evans https://jvns.ca/)
Some nonparametric statistics math
======
I'm trying to understand nonparametric statistics a little more formally. This post may not be that intelligible, because I'm still pretty confused about nonparametric statistics, there is a lot of math, and I make no attempt to explain any of the math notation. I'm working towards being able to explain this stuff in a much more accessible way, but first I would like to understand some of the math!
There's some MathJax in this post so the math may or may not render in an RSS reader.
Some questions I'm interested in:
* what is nonparametric statistics exactly?
* what guarantees can we make? are there formulas we can use?
* why do methods like the bootstrap method work?
since these notes are from reading a math book and math books are extremely dense, this is basically going to be "I read 7 pages of this math book and here are some points I'm confused about"
### what's nonparametric statistics?
Today I'm looking at "all of nonparametric statistics" by Larry Wasserman. He defines nonparametric inference as:
> a set of modern statistical methods that aim to keep the number of underlying assumptions as weak as possible
Basically my interpretation of this is that instead of assuming that your data comes from a specific family of distributions (like the normal distribution) and then trying to estimate the parameters of that distribution, you don't make many assumptions about the distribution ("this is just some data!!"). Not having to make assumptions is nice!
There aren't **no** assumptions though: he says
> we assume that the distribution $F$ lies in some set $\mathfrak{F}$ called a **statistical model**. For example, when estimating a density $f$, we might assume that $$ f \in \mathfrak{F} = \left\\{ g : \int(g^{\prime\prime}(x))^2dx \leq c^2 \right\\}$$ which is the set of densities that are not “too wiggly”.
I don't have much intuition for the condition $\int(g^{\prime\prime}(x))^2dx \leq c^2$. I calculated that integral for [the normal distribution on wolfram alpha][1] and got 4, which is a good start. (4 is not infinity!)
some questions I still have about this definition:
* what's an example of a probability density function that _doesn't_ satisfy that $\int(g^{\prime\prime}(x))^2dx \leq c^2$ condition? (probably something with an infinite number of tiny wiggles, and I don't think any distribution I'm interested in in practice would have an infinite number of tiny wiggles?)
* why does the density function being "too wiggly" cause problems for nonparametric inference? very unclear as yet.
### we still have to assume independence
One assumption we **won't** get away from is that the samples in the data we're dealing with are independent. Often data in the real world actually isn't really independent, but I think what people do a lot of the time is make a good effort at something approaching independence, and then close their eyes and pretend it is?
### estimating the density function
Okay! Here's a useful section! Let's say that I have 100,000 data points from a distribution. I can draw a histogram like this of those data points:
![][2]
If I have 100,000 data points, it's pretty likely that that histogram is pretty close to the actual distribution. But this is math, so we should be able to make that statement precise, right?
For example, suppose that 5% of the points in my sample are more than 100. Is the probability that a point is greater than 100 **actually** 0.05? The book gives a nice formula for this:
$$ \mathbb{P}(|\widehat{P}_n(A) - P(A)| > \epsilon ) \leq 2e^{-2n\epsilon^2} $$
(by ["Hoeffding's inequality"][3], which I'd never heard of before). Fun aside about that inequality: here's a nice jupyter notebook by henry wallace using it to [identify the most common Boggle words][4].
here, in our example:
* n is 100,000 (the number of data points we have)
* $A$ is the set of points more than 100
* $\widehat{P}_n(A)$ is the empirical probability that a point is more than 100 (0.05)
* $P(A)$ is the actual probability
* $\epsilon$ is how close we require the empirical probability to be to the truth
So, what's the probability that the **real** probability is _not_ between 0.04 and 0.06? With $\epsilon = 0.01$, it's at most $2e^{-2 \times 100{,}000 \times (0.01)^2} \approx 4 \times 10^{-9}$ ish (according to wolfram alpha)
here is a table of how sure we can be:
* 100,000 data points: 4e-9 (TOTALLY CERTAIN that 4% - 6% of points are more than 100)
* 10,000 data points: 0.27 (27% probability that we're wrong! that's… not bad?)
* 1,000 data points: 1.6 (we know the probability we're wrong is less than… 160%? that's not good!)
* 100 data points: lol
so basically, in this case, using this formula: 100,000 data points is AMAZING, 10,000 data points is pretty good, and 1,000 is much less useful. If we have 1000 data points and we see that 5% of them are more than 100, we DEFINITELY CANNOT CONCLUDE that 4% to 6% of points are more than 100. But (using the same formula) we can use $\epsilon = 0.04$ and conclude that with 92% probability 1% to 9% of points are more than 100. So we can still learn some stuff from 1000 data points!
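To spell out that last bit of arithmetic (my calculation, using the same Hoeffding formula as above):
$$ 2e^{-2 \times 1000 \times (0.04)^2} = 2e^{-3.2} \approx 0.08 $$
so the bound says we're wrong with probability at most 8%, which is where the 92% comes from.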
This intuitively feels pretty reasonable to me: it makes sense that if you have NO IDEA what your distribution is, then with 100,000 points you'd be able to make quite strong inferences, and with 1000 you can do a lot less!
### more data points are exponentially better?
One thing that I think is really cool about this "estimating the density function" formula is that how sure you can be of your inferences scales **exponentially** with the size of your dataset (this is the $e^{-n\epsilon^2}$). And also exponentially with the square of how sure you want to be (so wanting to be sure within 0.01 is VERY DIFFERENT than within 0.04). So 100,000 data points isn't 10x better than 10,000 data points, it's actually like 10000000000000x better.
Is that true in other places? If so, that seems like a super useful intuition! I still feel pretty uncertain about this, but having some basic intuition about "how much more useful is 10,000 data points than 1,000 data points?" feels like a really good thing.
### some math about the bootstrap
The next chapter is about the bootstrap! Basically the way the bootstrap works is:
1. you want to estimate some statistic (like the median) of your distribution
2. the bootstrap lets you get an estimate and also the variance of that estimate
3. you do this by repeatedly sampling with replacement from your data and then calculating the statistic you want (like the median) on your samples
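Here's a minimal sketch of those three steps (my code, not from the book; the dataset and the 1000 resamples are made up for illustration), bootstrapping the median:
```
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Illustrative sketch only: bootstrap estimate of the median and its variance.
double median(std::vector<double> v) {
    // upper median for even-sized samples, which is fine for a sketch
    std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
    return v[v.size() / 2];
}

int main() {
    std::vector<double> data = {1.2, 3.4, 2.2, 5.1, 4.4, 2.9, 3.8, 1.7};
    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, data.size() - 1);

    const int B = 1000;  // number of bootstrap resamples
    std::vector<double> medians;
    medians.reserve(B);
    for (int b = 0; b < B; ++b) {
        std::vector<double> resample(data.size());
        for (double& x : resample) x = data[pick(rng)];  // sample with replacement
        medians.push_back(median(resample));
    }

    // the spread of the bootstrap medians estimates the variance of the median
    double mean = 0;
    for (double m : medians) mean += m;
    mean /= B;
    double var = 0;
    for (double m : medians) var += (m - mean) * (m - mean);
    var /= (B - 1);
    std::cout << "bootstrap median: " << mean << ", variance: " << var << "\n";
}
```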
I'm not going to go any deeper into how to implement the bootstrap method, because it's explained in a lot of places on the internet. Let's talk about the math!
I think in order to say anything meaningful about bootstrap estimates I need to learn a new term: a **consistent estimator**.
### What's a consistent estimator?
Wikipedia says:
> In statistics, a **consistent estimator** or **asymptotically consistent estimator** is an estimator — a rule for computing estimates of a parameter $\theta_0$ — having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to $\theta_0$.
This includes some terms where I forget what they mean (what's "converges in probability" again?). But this seems like a very good thing! If I'm estimating some parameter (like the median), I would DEFINITELY LIKE IT TO BE TRUE that if I do it with an infinite amount of data, then my estimate works. An estimator that is not consistent does not sound very useful!
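(For reference, "converges in probability" means: for every $\epsilon > 0$,
$$ \lim_{n \rightarrow \infty} \mathbb{P}\left(|\widehat{\theta}_n - \theta_0| > \epsilon\right) = 0 $$
i.e. the chance of the estimate being off by more than any fixed amount goes to zero as the dataset grows.)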
### why/when are bootstrap estimators consistent?
spoiler: I have no idea. The book says the following:
> Consistency of the bootstrap can now be expressed as follows.
>
> **3.19 Theorem**. Suppose that $\mathbb{E}(X_1^2) < \infty$. Let $T_n = g(\overline{X}_n)$ where $g$ is continuously differentiable at $\mu = \mathbb{E}(X_1)$ and $g^{\prime}(\mu) \neq 0$. Then,
>
> $$ \sup_u | \mathbb{P}_{\widehat{F}_n} \left( \sqrt{n} (T( \widehat{F}_n^*) - T( \widehat{F}_n)) \leq u \right) - \mathbb{P}_{\widehat{F}} \left( \sqrt{n} (T( \widehat{F}_n) - T( \widehat{F})) \leq u \right) | \rightarrow^\text{a.s.} 0 $$
>
> **3.21 Theorem**. Suppose that $T(F)$ is Hadamard differentiable with respect to $d(F,G)= \sup_x|F(x)-G(x)|$ and that $0 < \int L^2_F(x) dF(x) < \infty$. Then,
>
> $$ \sup_u | \mathbb{P}_{\widehat{F}_n} \left( \sqrt{n} (T( \widehat{F}_n^*) - T( \widehat{F}_n)) \leq u \right) - \mathbb{P}_{\widehat{F}} \left( \sqrt{n} (T( \widehat{F}_n) - T( \widehat{F})) \leq u \right) | \rightarrow^\text{P} 0 $$
things I understand about these theorems:
* the two formulas they're concluding are the same, except I think one is about convergence "almost surely" and one about "convergence in probability". I don't remember what either of those mean.
* I think for our purposes of doing Regular Boring Things we can replace "Hadamard differentiable" with "differentiable"
* I think they don't actually show the consistency of the bootstrap, they're actually about consistency of the bootstrap confidence interval estimate (which is a different thing)
I don't really understand how they're related to consistency, and in particular the $\sup_u$ thing is weird: if you're looking at $\mathbb{P}(\text{something} < u)$, wouldn't you want to minimize $u$ and not maximize it? Maybe it's a typo and it should be $\inf_u$?
it concludes:
> there is a tendency to treat the bootstrap as a panacea for all problems. But the bootstrap requires regularity conditions to yield valid answers. It should not be applied blindly.
### this book does not seem to explain why the bootstrap is consistent
In the appendix (3.7) it gives a sketch of a proof for showing that estimating the **median** using the bootstrap is consistent. I don't think this book actually gives a proof anywhere that bootstrap estimates in general are consistent, which was pretty surprising to me. It gives a bunch of references to papers. Though I guess bootstrap confidence intervals are the most important thing?
### that's all for now
This is all extremely stream of consciousness and I only spent 2 hours trying to work through this, but some things I think I learned in the last couple hours are:
1. maybe having more data is exponentially better? (is this true??)
2. “consistency” of an estimator is a thing, not all estimators are consistent
3. understanding when/why nonparametric bootstrap estimators are consistent in general might be very hard (the proof that the bootstrap median estimator is consistent already seems very complicated!)
4. bootstrap confidence intervals are not the same thing as bootstrap estimators. Maybe I'll learn the difference next!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/12/29/some-initial-nonparametric-statistics-notes/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://www.wolframalpha.com/input/?i=integrate+(d%2Fdx(d%2Fdx(exp(-x%5E2))))%5E2++dx+from+x%3D-infinity+to+infinity
[2]: https://jvns.ca/images/nonpar-histogram.png
[3]: https://en.wikipedia.org/wiki/Hoeffding%27s_inequality
[4]: https://nbviewer.jupyter.org/github/henrywallace/games/blob/master/boggle/boggle.ipynb#Estimating-Word-Probabilities

View File

@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A few early marketing thoughts)
[#]: via: (https://jvns.ca/blog/2019/01/29/marketing-thoughts/)
[#]: author: (Julia Evans https://jvns.ca/)
A few early marketing thoughts
======
At some point last month I said I might write more about business, so here are some very early marketing thoughts for my zine business (<https://wizardzines.com>!). The question I'm trying to make some progress on in this post is: "how to do marketing in a way that feels good?"
### what's the point of marketing?
Okay! What's marketing? What's the point? I think the ideal way marketing works is:
1. you somehow tell a person about a thing
2. you explain somehow why the thing will be useful to them / why it is good
3. they buy it and they like the thing because its what they expected
(or, when you explain it they see that they don't want it and don't buy it, which is good too!!)
So basically as far as I can tell good marketing is just explaining what the thing is and why it is good in a clear way.
### what internet marketing techniques do people use?
I've been thinking a bit about the internet marketing techniques I've seen people using on me recently. Here are a few examples:
1. word of mouth (“have you seen this cool new thing?!”)
2. twitter / instagram marketing (build a twitter/instagram account)
3. email marketing (“build a mailing list with a bajillion people on it and sell to them”)
4. email marketing (“tell your existing users about features that they already have that they might want to use”)
5. social proof marketing (“jane from georgia bought a sweater”), eg fomo.com
6. cart notifications ("you left this sweater in your cart??! did you mean to buy it? maybe you should buy it!")
7. content marketing (which is fine but whenever people refer to my writing as content I get grumpy :))
### you need _some_ way to tell people about your stuff
Something that is definitely true about marketing is that you need some way to tell new people about the thing you are doing. So for me, when I'm thinking about running a business, it's less about "should i do marketing" and more like "well, obviously i have to do marketing, how do i do it in a way that i feel good about?"
### what's up with email marketing?
I feel like every single piece of internet marketing advice I read says "you need a mailing list". This is advice that I haven't really taken to heart. Technically I have 2 mailing lists:
1. the RSS feed for this blog, which sends out new blog posts to a mailing list for folks who don't use RSS (which 3000 of you get)
2. <https://wizardzines.com>'s list, for comics / new zine announcements (780 people subscribe to that! thank you!)
but definitely neither of them is a Machine For Making Sales, and I've put almost no effort in that direction yet.
here are a few things I've noticed about marketing mailing lists:
* most marketing mailing lists are boring, but some marketing mailing lists are actually interesting! For example I kind of like [amy hoy][1]'s emails.
* Someone told me recently that they have 200,000 people on their mailing list (?!!), which made the "a mailing list is a machine for making money" concept make a lot more sense to me. I wonder if people who make a lot of money from their mailing lists all have huge 10k+ person mailing lists like this?
### what works for me: twitter
Right now for my zines business I'd guess maybe 70% of my sales come from Twitter. The main thing I do is tweet pages from zines I'm working on (for example: yesterday's [comic about ss][2]). The comics are usually good and fun, so invariably they get tons of retweets, which means that I end up with lots of followers, which means that when I later put up the zine for sale lots of people will buy it.
And of course people don't _have_ to buy the zines; I post most of what ends up in my zines on twitter for free, so it feels like a nice way to do it. Everybody wins, I think.
(side note: when I started getting tons of new followers from my comics I was actually super worried that it would make my experience of Twitter way worse. That hasn't happened! the new followers all seem totally reasonable and I still get a lot of really interesting twitter replies, which is wonderful ❤)
I don't try to hack/optimize this really: I just post comics when I make them and I try to make them good.
### a small Twitter innovation: putting my website on the comics
Here's one small marketing change that I made that I think makes sense!
In the past, I didn't put anything about how to buy my comics on the comics I posted on Twitter, just my Twitter username. Like this:
![][3]
After a while, I realized people were asking me all the time "hey, can I buy a book/collection? where do these come from? how do I get more?"! I think a marketing secret is "people actually want to buy things that are good, so it is useful to tell people where they can buy things that are good".
So just recently I've started adding my website and a note about my current project to the comics I post on Twitter. It doesn't say much: just "❤ these comics? buy a collection! wizardzines.com" and "page 11 of my upcoming bite size networking zine". Here's what it looks like:
![][4]
I feel like this strikes a pretty good balance between "julia, you need to tell people what you're doing, otherwise how are they supposed to buy things from you" and "omg, too many sales pitches everywhere". I've only started doing this recently, so we'll see how it goes.
### should I work on a mailing list?
It seems like the same thing that works on twitter would work by email if I wanted to put in the time (email people comics! when a zine comes out, email them about the zine and they can buy it if they want!).
One thing I LOVE about Twitter though is that people always reply to the comics I post with their own tips and tricks that they love and I often learn something new. I feel like email would be nowhere near as fun :)
But I still think this is a pretty good idea: keeping up with twitter can be time consuming and I bet a lot of people would like to get occasional email with programming drawings. (would you?)
One thing I'm not sure about is that a lot of marketing mailing lists seem to use somewhat aggressive techniques to get new emails (a lot of popups on a website, or adding everyone who signs up to their service / buys a thing to a marketing list), and while I'm basically fine with that (unsubscribing is easy!), I'm not sure it's what I'd want to do, and maybe less aggressive techniques will work just as well? We'll see.
### should I track conversion rates?
A piece of marketing advice I assume people give a lot is "be data driven, figure out what things convert the best, etc". I basically don't do this at all. gumroad used to tell me that most of my sales came from Twitter, which was good to know, but right now I have basically no idea how it works.
Doing a bunch of work to track conversion rates feels bad to me: it seems like it would be really easy to go down a dumb rabbit hole of "oh, let's try to increase conversion by 5%" instead of just focusing on making really good and cool things.
My guess is that what will work best for me for a while is to have some data that tells me in broad strokes how the business works (like “about 70% of sales come from twitter”) and just leave it at that.
### should I do advertising?
I had a conversation with Kamal about this post that went:
* julia: “hmm, maybe I should talk about ads?”
* julia: “wait, are ads marketing?”
* kamal: “yes ads are marketing”
So, ads! I don't know anything about advertising except that you can advertise on Facebook or Twitter or Google. Some non-ethical questions I have about advertising:
* how do you choose what keywords to advertise on?
* are there actually cheap keywords? like, is "file descriptors" cheap?
* how much do you need to pay per click? (for some weird linux keywords, google estimated 20 cents a click?)
* can you use ads effectively for something that costs $10?
This seems nontrivial to learn about and I don't think I'm going to try soon.
### other marketing things
a few other things I've thought about:
* I learned about "social proof marketing" sites like fomo.com yesterday, which make popups on your site like "someone bought COOL THING 3 hours ago". This seems like it has some utility (people are actually buying things from me all the time, maybe that's useful to share somehow?) but those popups feel a bit cheap to me and I don't really think it's something I'd want to do right now.
* similarly, a lot of sites like to inject popups like "HELLO PLEASE SIGN UP FOR OUR MAILING LIST". Similar thoughts. I've been putting an email signup link in the footer, which seems like a good balance between discoverable and annoying. As an example of a popup which isn't too intrusive, though: nate berkopec has [one on his site][5] which feels really reasonable! (scroll to the bottom to see it)
Maybe marketing is all about “make your things discoverable without being annoying”? :)
### that's all!
Hopefully some of this was interesting! Obviously the most important thing in all of this is to make cool things that are useful to people, but I think cool useful writing does not actually sell itself!
If you have thoughts about what kinds of marketing have worked well for you / you've felt good about, I would love to hear them!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/01/29/marketing-thoughts/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://stackingthebricks.com/
[2]: https://twitter.com/b0rk/status/1090058524137345025
[3]: https://jvns.ca/images/kill.jpeg
[4]: https://jvns.ca/images/ss.jpeg
[5]: https://www.speedshop.co/2019/01/10/three-activerecord-mistakes.html

View File

@ -0,0 +1,144 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (!!Con 2019: submit a talk!)
[#]: via: (https://jvns.ca/blog/2019/02/16/--con-2019--submit-a-talk-/)
[#]: author: (Julia Evans https://jvns.ca/)
!!Con 2019: submit a talk!
======
As some of you might know, for the last 5 years I've been one of the organizers for a conference called [!!Con][1]. This year it's going to be held on **May 11-12 in NYC**.
The submission deadline is **Sunday, March 3** and you can [submit a talk here][2].
(we also expanded to the west coast this year: [!!Con West][3] is next week!! I'm not on the !!Con West team since I live on the east coast, but they're doing amazing work, I have a ticket, and I'm so excited for there to be more !!Con in the world)
### !!Con is about the joy, excitement, and surprise of computing
Computers are AMAZING. You can make programs that seem like magic, computer science has all kinds of fun and surprising tidbits, there are all kinds of ways to make really cool art with computers, the systems that we use every day (like DNS!) are often super fascinating, and sometimes our computers do REALLY STRANGE THINGS and it's very fun to figure out why.
!!Con is about getting together for 2 days to share what we all love about computing. The only rule of !!Con talks is that the talk has to have an exclamation mark in the title :)
We originally considered calling !!Con ExclamationMarkCon but that was too unwieldy so we went with !!Con :).
### !!Con is inclusive
The other big thing about !!Con is that we think computing should include everyone. To make !!Con a space where everyone can participate, we
* have open captioning for all talks (so that people who can't hear well can read the text of the talk as it's happening). This turns out to be great for LOTS of people: if you just weren't paying attention for a second, you can look at the live transcript to see what you missed!
* pay our speakers & pay for speaker travel
* have a code of conduct (of course)
* use the RC [social rules][4]
* make sure our washrooms work for people of all genders
* let people specify on their badges if they dont want photos taken of them
* do a lot of active outreach to make sure our set of speakers is diverse
### past !!Con talks
I think maybe the easiest way to explain !!Con if you haven't been is through the talk titles! Here are a few arbitrarily chosen talks from past !!Cons:
* [Four Fake Filesystems!][5]
* [Islamic Geometry: Hankins Polygons in Contact Algorithm!!!][6]
* [Don't know about you, but I'm feeling like SHA-2!: Checksumming with Taylor Swift][7]
* [MissingNo., my favourite Pokémon!][8]
* [Music! Programming! Arduino! (Or: Building Electronic Musical Interfaces to Create Awesome)][9]
* [How I Code and Use a Computer at 1,000 WPM!!][10]
* [The emoji that Killed Chrome!!][11]
* [We built a map to aggregate real-time flood data in under two days!][12]
* [PUSH THE BUTTON! 🔴 Designing a fun game where the only input is a BIG RED BUTTON! 🔴 !!!][13]
* [Serious programming with jq?! A practical and purely functional programming language!][14]
* [I wrote to a dead address in a deleted PDF and now I know where all the airplanes are!!][15]
* [Making Mushrooms Glow!][16]
* [HDR Photography in Microsoft Excel?!][17]
* [DHCP: ITS MOSTLY YELLING!!][18]
* [Lossy text compression, for some reason?!][19]
* [Plants are Recursive!!: Using L-Systems to Generate Realistic Weeds][20]
If you want to see more (or get an idea of what !!Con talk descriptions usually look like), here's every past year of the conference:
* 2018: [talk descriptions][21] and [recordings][22]
* 2017: [talk descriptions][23] and [recordings][24]
* 2016: [talk descriptions][25] and [recordings][26]
* 2015: [talk descriptions][27] and [recordings][28]
* 2014: [talk descriptions][29] and [recordings][30]
### this year you can also submit a play / song / performance!
One difference from previous !!Cons is that if you want to submit a non-talk-talk to !!Con this year (like a play!), you can! I'm very excited to see what people come up with. For more on that, see [Expanding the !!Con aesthetic][31].
### all talks are reviewed anonymously
One big choice that we've made is to review all talks anonymously. This means that we'll review your talk the same way whether you've never given a talk before or you're an internationally recognized public speaker. I love this because many of our best talks are from first-time speakers or people who I'd never heard of before, and I think anonymous review makes it easier to find great people who aren't well known.
### writing a good outline is important
We cant rely on someones reputation to determine if theyll give a good talk, but we do need a way to see that people have a plan for how to present their material in an engaging way. So we ask everyone to give a somewhat detailed outline explaining how theyll spend their 10 minutes. Some people do it minute-by-minute and some people just say “Ill explain X, then Y, then Z, then W”.
Lindsey Kuper wrote some good advice about writing a clear !!Con outline, including some examples of really good outlines, [which you can see here][32].
### Were looking for sponsors
!!Con is pay-what-you-can (if you cant afford a $300 conference ticket, were the conference for you!). Because of that, we rely on our incredible sponsors (companies who want to build an inclusive future for tech with us!) to help make up the difference so that we can pay our speakers for their amazing work, pay for speaker travel, have open captioning, and everything else that makes !!Con the amazing conference it is.
If you love !!Con, a huge way you can help support the conference is to ask your company to sponsor us! Heres our [sponsorship page][33] and you can email me at [[email protected]][34] if youre interested.
### hope to see you there ❤
Ive met so many fantastic people through !!Con, and it brings me a lot of joy every year. The thing that makes !!Con great is all the amazing people who come to share what theyre excited about every year, and I hope youll be one of them.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/02/16/--con-2019--submit-a-talk-/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: http://bangbangcon.com
[2]: http://bangbangcon.com/give-a-talk.html
[3]: http://bangbangcon.com/west/
[4]: https://www.recurse.com/social-rules
[5]: https://youtube.com/watch?v=pfHpDDXJQVg
[6]: https://youtube.com/watch?v=ld4gpQnaziU
[7]: https://youtube.com/watch?v=1QgamEwwPro
[8]: https://youtube.com/watch?v=yX7tDROZUt8
[9]: https://youtube.com/watch?v=67Y-wH0FJFg
[10]: https://youtube.com/watch?v=G1r55efei5c
[11]: https://youtube.com/watch?v=UE-fJjMasec
[12]: https://youtube.com/watch?v=hfatYo2J8gY
[13]: https://youtube.com/watch?v=KqEc2Ek4GzA
[14]: https://youtube.com/watch?v=PS_9pyIASvQ
[15]: https://youtube.com/watch?v=FhVob_sRqQk
[16]: https://youtube.com/watch?v=T75FvUDirNM
[17]: https://youtube.com/watch?v=bkQJdaGGVM8
[18]: https://youtube.com/watch?v=enRY9jd0IJw
[19]: https://youtube.com/watch?v=meovx9OqWJc
[20]: https://youtube.com/watch?v=0eXg4B1feOY
[21]: http://bangbangcon.com/2018/speakers.html
[22]: http://bangbangcon.com/2018/recordings.html
[23]: http://bangbangcon.com/2017/speakers.html
[24]: http://bangbangcon.com/2017/recordings.html
[25]: http://bangbangcon.com/2016/speakers.html
[26]: http://bangbangcon.com/2016/recordings.html
[27]: http://bangbangcon.com/2015/speakers.html
[28]: http://bangbangcon.com/2015/recordings.html
[29]: http://bangbangcon.com/2014/speakers.html
[30]: http://bangbangcon.com/2014/recordings.html
[31]: https://organicdonut.com/2019/01/expanding-the-con-aesthetic/
[32]: http://composition.al/blog/2017/06/30/how-to-write-a-timeline-for-a-bangbangcon-talk-proposal/
[33]: http://bangbangcon.com/sponsors
[34]: https://jvns.ca/cdn-cgi/l/email-protection

@@ -0,0 +1,155 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Organizing this blog into categories)
[#]: via: (https://jvns.ca/blog/2019/02/17/organizing-this-blog-into-categories/)
[#]: author: (Julia Evans https://jvns.ca/)
Organizing this blog into categories
======
Today I organized the front page of this blog ([jvns.ca][1]) into CATEGORIES! Now it is actually possible to make some sense of what is on here!! There are 28 categories (computer networking! learning! “how things work”! career stuff! many more!). I am so excited about this.
How it works: Every post is in only 1 category. Obviously the categories aren't “perfect” (there is a “how things work” category and a “kubernetes” category and a “networking” category, and so for a “how container networking works in kubernetes” post I need to just pick one) but I think it's really nice and I'm hoping that it'll make the blog easier for folks to navigate.
If youre interested in more of the story of how Im thinking about this: Ive been a little dissatisfied for a long time with how this blog is organized. Heres where I started, in 2013, with a pretty classic blog layout (this is Octopress, which was a Jekyll Wordpress-lookalike theme that was cool back then and which served me very well for a long time):
![][2]
### problem with “show the 5 most recent posts”: you dont know what the persons writing is about!
This is a super common way to organize a blog: on the homepage of your blog, you display maybe the 5 most recent posts, and then maybe have a “previous” link.
The thing I find tricky about this (as a blog reader) is that
1. its hard to hunt through their back catalog to find cool things theyve written
2. its SO HARD to get an overall sense for the body of a persons work by reading 1 blog post at a time
### next attempt: show every post in chronological order
My next attempt at blog organization was to show every post on the homepage in chronological order. This was inspired by [Dan Luus blog][3], which takes a super minimal approach. I switched to this (according to the internet archive) sometime in early 2016. Heres what it looked like (with some CSS issues :))
![][4]
The reason I like this “show every post in chronological order” approach more is that when I discover a new blog, I like to obsessively binge read through the whole thing to see all the cool stuff the person has written. [Rachel by the bay][5] also organizes her writing this way, and when I found her blog I was like OMG WOW THIS IS AMAZING I MUST READ ALL OF THIS NOW and being able to look through all the entries quickly and start reading ones that caught my eye was SO FUN.
[Will Larsons blog][6] also has a “list of all posts” page which I find useful because its a good blog, and sometimes I want to refer back to something he wrote months ago and cant remember what it was called, and being able to scan through all the titles makes it easier to do that.
I was pretty happy with this and thats how its been for the last 3 years.
### problem: a chronological list of 390 posts still kind of sucks
As of today, I have 390 posts here (360,000 words! that's, like, four 300-page books! eep!). This is objectively a lot of writing and I would like people new to the blog to be able to navigate it and actually have some idea what's going on.
And this blog is not actually just a totally disorganized group of words! I have a lot of specific interests: Ive written probably 30 posts about computer networking, 15ish on ML/statistics, 20ish career posts, etc. And when I write a new Kubernetes post or whatever, its usually at least sort of related to some ongoing train of thought I have about Kubernetes. And its totally obvious to _me_ what other posts that post is related to, but obviously to a new person its not at all clear what the trains of thought are in this blog.
### solution for now: assign every post 1 (just 1) category
My new plan is to assign every post a single category. I got this idea from [Itamar Turner-Traurings site][7].
Here are the initial categories:
* Cool computer tools / features / ideas
* Computer networking
* How a computer thing works
* Kubernetes / containers
* Zines / comics
* On writing comics / zines
* Conferences
* Organizing conferences
* Businesses / marketing
* Statistics / machine learning / data analysis
* Year in review
* Infrastructure / operations engineering
* Career / work
* Working with others / communication
* Remote work
* Talks transcripts / podcasts
* On blogging / speaking
* On learning
* Rust
* Linux debugging / tracing tools
* Debugging stories
* Fan posts about awesome work by other people
* Inclusion
* rbspy
* Performance
* Open source
* Linux systems stuff
* Recurse Center (my daily posts during my RC batch)
I guess you can tell this is a systems-y blog because there are 8 different systems-y categories (kubernetes, infrastructure, linux debugging tools, rust, debugging stories, performance, linux systems stuff, and how a computer thing works) :).
But it was nice to see that I also have this huge career / work category! And that category is pretty meaningful to me, it includes a lot of things that I struggled with and were hard for me to learn. And I get to put all my machine learning posts together, which is an area I worked in for 3 years and am still super interested in and every so often learn a new thing about!
### How I assign the categories: a big text file
I came up with a scheme for assigning the categories that I thought was really fun! I knew immediately that coming up with categories in advance would be impossible (how was I supposed to know that “fan posts about awesome work by other people” was a substantial category?)
So instead, I took kind of a Marie Kondo approach: I wrote a script to just dump all the titles of every blog post into a text file, and then I just used vim to organize them roughly into similar sections. Seeing everything in one place (a la marie kondo) really helped me see the patterns and figure out what some categories were.
[Here's the final result of that text file][8]. I think having a lightweight way of organizing the posts all in one file made a huge difference and that it would have been impossible for me to see the patterns otherwise.
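The actual script is linked in the next section; as a minimal sketch of the same idea (assuming Hugo-style posts with a `title: "..."` line in their front matter, which is my assumption, not the real script's logic), it could be as small as:

```
# dump every post title into one text file so they can be shuffled around in vim
# (the content/blog path and front matter format here are assumptions)
grep -h '^title:' content/blog/*.md | sed -e 's/^title: *//' -e 's/^"//' -e 's/"$//' > titles.txt
```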
### How I implemented it: a hugo taxonomy
Once I had that big text file, I wrote [a janky python script][9] to assign the categories in that text file to the actual posts.
I use Hugo for this blog, so I also needed to tell Hugo about the categories. This blog already technically has tags (though they're woefully underused; I didn't want to delete them), and it turns out that in Hugo you can define arbitrary taxonomies. So I defined a new taxonomy for these sections (right now it's called, unimaginatively, `juliasections`).
The details of how I did this are pretty boring but [heres the hugo template that makes it display on the homepage][10]. I used this [Hugo documentation page on taxonomies a lot][11].
### organizing my site is cool! reverse chronology maybe isnt the best possible thing!
Amy Hoy has this interesting article called [how the blog broke the web][12] about how the rise of blog software made people adopt a site format that maybe didnt serve what they were writing the best.
I dont personally feel that mad about the blog / reverse chronology organization: I like blogging! I think it was nice for the first 6 years or whatever to be able to just write things that I think are cool without thinking about where they “fit”. Its worked really well for me.
But today, 360,000 words in, I think it makes sense to add a little more structure :).
### what it looks like now!
Heres what the new front page organization looks like! These are the blogging / learning / rust sections! I think its cool how you can see the evolution of some of my thinking (I sure have written a lot of posts about asking questions :)).
![][13]
### I ❤ the personal website
This is also part of why I love having a personal website that I can organize any way I want: for both of my main sites ([jvns.ca][1] and now [wizardzines.com][14]) I have total control over how they appear! And I can evolve them over time at my own pace if I decide something a little different will work better for me. Ive gone from a jekyll blog to octopress to a custom-designed octopress blog to Hugo and made a ton of little changes over time. Its so nice.
I think it's fun that these 3 screenshots are each 3 years apart: what I wanted in 2013 is not the same as 2016 is not the same as 2019! This is okay!
And I really love seeing how other people choose to organize their personal sites! Please keep making cool different personal sites.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/02/17/organizing-this-blog-into-categories/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca
[2]: https://jvns.ca/images/website-2013.png
[3]: https://danluu.com
[4]: https://jvns.ca/images/website-2016.png
[5]: https://rachelbythebay.com/w/
[6]: https://lethain.com/all-posts/
[7]: https://codewithoutrules.com/worklife/
[8]: https://github.com/jvns/jvns.ca/blob/2f7b2723994628a5348069dd87b3df68c2f0285c/scripts/titles.txt
[9]: https://github.com/jvns/jvns.ca/blob/2f7b2723994628a5348069dd87b3df68c2f0285c/scripts/parse_titles.py
[10]: https://github.com/jvns/jvns.ca/blob/25d239a3ba36c1bae1d055d2b7d50a4f1d0489ef/themes/orange/layouts/index.html#L39-L59
[11]: https://gohugo.io/templates/taxonomy-templates/
[12]: https://stackingthebricks.com/how-blogs-broke-the-web/
[13]: https://jvns.ca/images/website-2019.png
[14]: https://wizardzines.com

@@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (New zine: Bite Size Networking!)
[#]: via: (https://jvns.ca/blog/2019/03/15/new-zine--bite-size-networking-/)
[#]: author: (Julia Evans https://jvns.ca/)
New zine: Bite Size Networking!
======
Last week I released a new zine: Bite Size Networking! Its the third zine in the “bite size” series:
1. [Bite Size Linux][1]
2. [Bite Size Command Line][2]
3. [Bite Size Networking][3]
You can get it for $10 at <https://wizardzines.com/zines/bite-size-networking/>! (or $150/$250/$600 for the corporate rate).
Heres the cover and table of contents!
[![][4]][5] <https://jvns.ca/images/bite-size-networking-toc.png>
A few people have asked for a 3-pack with all 3 “bite size” zines, which is coming soon!
### why this zine?
In the last few years I've been doing a lot of networking at work, and along the way I've gone from “uh, what even is tcpdump” to “yes I can just type in `sudo tcpdump -c 200 -n port 443 -i lo`” without even thinking twice about it. As usual this zine is the resource I wish I had 4 years ago. There are so many things it took me a long time to figure out how to do, like:
* inspect SSL certificates
* make DNS queries
* figure out what server is using that port
* find out whether the firewall is causing you problems or not
* capture / search network traffic on a machine
and as often happens with computers none of them are really that hard!! But the man pages for the tools you need to do these things are Very Long and as usual don't differentiate between “everybody always uses this option and you 10000% need to know it” and “you will never use this option it does not matter”. So I spent a long time staring sadly at the tcpdump man page.
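For a flavor of what those tasks look like on the command line (these are my own illustrative commands, not pages from the zine; the hostname and interface are placeholders):

```
# make a DNS query for a domain
dig example.com

# inspect an SSL certificate's validity dates
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -dates

# figure out which process is listening on a port
sudo lsof -i :443

# capture 200 packets of traffic on port 443 (the command quoted above)
sudo tcpdump -c 200 -n port 443 -i lo
```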
the pitch for this zine is:
> Its Thursday afternoon and your users are reporting SSL errors in production and you dont know why. Or a HTTP header isnt being set correctly and its breaking the site. Or you just got a notification that your sites SSL certificate is expiring in 2 days. Or you need to update DNS to point to a new server. Or a server suddenly isnt able to connect to a service. And networking maybe isnt your full time job, but you still need to get the problem fixed.
Kamal (my partner) proofreads all my zines and we hit an exciting milestone with this one: this is the first zine where he was like “wow, I really did not know a lot of the stuff in this zine”. This is of course because Ive spent a lot more time than him debugging weird networking things, and when you practice something you get better at it :)
### a couple of example pages
Here are a couple of example pages, to give you an idea of whats in the zine:
![][6] ![][7]
### next thing to get better at: getting feedback!
One thing I've realized is that while I get a ton of help from people while writing these zines (I read probably a thousand tweets from people suggesting ideas for things to include in the zine), I don't get as much feedback from people about the final product as I'd like!
I often hear positive things (“I love them!”, “thank you so much!”, “this helped me in my job!”) but Id really love to hear more about which bits specifically helped the most and what didnt make as much sense or what you would have liked to see more of. So Ill probably be asking a few questions about that to people who buy this zine!
### selling zines is going well
When I made the switch about a year ago from “every zine I release is free” to “the old zines are free but all the new ones are not free” it felt scary! Its been startlingly totally fine and a very positive thing. Sales have been really good, people take the work more seriously, I can spend more time on them, and I think the quality has gone up.
And Ive been doing occasional [giveaways][8] for people who cant afford a $10 zine, which feels like a nice way to handle “some people legitimately cant afford $10 and I would like to get them information too”.
### whats next?
Im not sure yet! A few options:
* kubernetes
* more about linux concepts (bite size linux part II)
* how to do statistics using simulations
* something else!
Well see what I feel most inspired by :)
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/03/15/new-zine--bite-size-networking-/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://wizardzines.com/zines/bite-size-linux/
[2]: https://wizardzines.com/zines/bite-size-command-line/
[3]: https://wizardzines.com/zines/bite-size-networking/
[4]: https://jvns.ca/images/bite-size-networking-cover.png
[5]: https://gum.co/bite-size-networking
[6]: https://jvns.ca/images/ngrep.png
[7]: https://jvns.ca/images/ping.png
[8]: https://twitter.com/b0rk/status/1104368319816220674

@@ -0,0 +1,134 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why are monoidal categories interesting?)
[#]: via: (https://jvns.ca/blog/2019/03/26/what-are-monoidal-categories/)
[#]: author: (Julia Evans https://jvns.ca/)
Why are monoidal categories interesting?
======
Hello! Someone on Twitter asked a question about tensor categories recently and I remembered “oh, I know something about that!! These are a cool thing!“. Monoidal categories are also called “tensor categories” and I think that term feels a little more concrete: one of the biggest examples of a tensor category is the category of vector spaces with the tensor product as the way you combine vectors / functions. “Monoidal” means “has an associative binary operation with an identity”, and with vector spaces the tensor product is the “associative binary operation” its referring to. So Im going to mostly use “tensor categories” in this post instead.
So heres a quick stab at explaining why tensor categories are cool. Im going to make a lot of oversimplifications which I figure is better than trying to explain category theory from the ground up. Im not a category theorist (though I spent 2 years in grad school doing a bunch of category theory) and I will almost certainly say wrong things about category theory.
In this post Im going to try to talk about [Seven Sketches in Compositionality: An Invitation to Applied Category Theory][1] using mostly plain English.
### tensor categories arent monads
If you have been around functional programming for a bit, you might see the word “monoid” and “categories” and wonder “oh, is julia writing about monads, like in Haskell”? I am not!!
There is a sentence “monads are a monoid in the category of endofunctors” which includes both the word “monoid” and “category” but that is not what I am talking about at all. Were not going to talk about types or Haskell or monads or anything.
### tensor categories are about proving (or defining) things with pictures
Here's what I think is a really nice example from the [“Seven Sketches in Compositionality”](https://arxiv.org/pdf/1803.05316.pdf) PDF (on page 47):
![][2]
The idea here is that you have 3 inequalities
1. `t <= v + w`
2. `w + u <= x + z`
3. `v + x <= y`,
and you want to prove that `t + u <= y + z`.
You can do this algebraically pretty easily.
But in this diagram theyve done something really different! Theyve sort of drawn the inequalities as boxes with lines coming out of them for each variable, and then you can see that you end up with a `t` and a `u` on the left and a `y` and a `z` on the right, and so maybe that means that `t + u <= y + z`.
The first time I saw something like this in a math class I felt like what? what is happening? you cant just draw PICTURES to prove things?!! And of course you cant _just_ draw pictures to prove things.
Whats actually happening in pictures like this is that when you put 2 things next to each other in the picture (like `t` and `u`), that actually represents the “tensor product” of `t` and `u`. In this case the “tensor product” is defined to be addition. And the tensor product (addition in this case) has some special properties
1. its associative
2. if `a <= b` and `c <= d` then `a + c <= b + d`
so saying that this picture proves that `t + u <= y + z` **actually** means that you can read a proof off the diagram in a straightforward way:
```
t + u
<= (v + w) + u
= v + (w + u)
<= v + (x + z)
= (v + x) + z
<= y + z
```
So all the things that “look like they would work” according to the picture actually do work in practice because our tensor product thing is associative and because addition works nicely with the `<=` relationship. The book explains all this in a lot more detail.
### draw vector spaces with “string diagrams”
Proving this simple inequality is kind of boring though! We want to do something more interesting, so lets talk about vector spaces! Heres a diagram that includes some vector spaces (U1, U2, V1, V2) and some functions (f,g) between them.
![][3]
Again, here what it means to have U1 stacked on top of U2 is that were taking a tensor product of U1 and U2. And the tensor product is associative, so theres no ambiguity if we stack 3 or 4 vector spaces together!
This is all explained in a lot more detail in this nice blog post called [introduction to string diagrams][4] (which I took that picture from).
### define the trace of a matrix with a picture
So far this is pretty boring! But in a [follow up blog post][5], they talk about something more outrageous: you can (using vector space duality) take the lines in one of these diagrams and move them **backwards** and make loops. So that lets us define the trace of a function `f : V -> V` like this:
![][6]
This is a really outrageous thing! Weve said, hey, we have a function and we want to get a number in return right? Okay, lets just… draw a circle around it so that there are no lines left coming out of it, and then that will be a number! That seems a lot more natural and prettier than the usual way of defining the trace of a matrix (“sum up the numbers on the diagonal”)!
When I first saw this I thought it was super cool that just drawing a circle is actually a legitimate way of defining a mathematical concept!
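In ordinary linear-algebra terms (standard background, not something spelled out in the string-diagrams posts themselves), the loop lands exactly on the familiar trace: picking a basis `e_i` of `V` with dual basis `e_i^*`, composing the duality maps with `f` gives

```
\operatorname{tr}(f) \;=\; \sum_i e_i^{*}\bigl(f(e_i)\bigr) \;=\; \sum_i f_{ii}
```

so the “draw a circle around it” definition and the “sum up the numbers on the diagonal” definition agree.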
### how are tensor category diagrams different from regular category theory diagrams?
If you see “tensor categories let you prove things with pictures” you might think “well, the whole point of category theory is to prove things with pictures, so what?”. I think there are a few things that are different in tensor category diagrams:
1. with string diagrams, the lines are objects and the boxes are functions which is the opposite of how usual category theory diagrams are
2. putting things next to each other in the diagram has a specific meaning (“take the tensor product of those 2 things”), whereas in usual category theory diagrams it doesn't. Being able to combine things in this way is powerful!
3. half circles have a specific meaning (“take the dual”)
4. you can use specific elements of a space (e.g. a vector space) in a diagram, which you usually wouldn't do in a category theory diagram (the objects would be the whole vector space, not one element of that vector space)
### what does this have to do with programming?
Even though this is usually a programming blog I don't know whether this particular thing really has anything to do with programming, I just remembered I thought it was cool. I wrote my [master's thesis][7] (which I will link to even though it's not very readable) on topological quantum computing which involves a bunch of monoidal categories.
Some of the diagrams in this post are sort of why I got interested in that area in the first place I thought it was really cool that you could formally define / prove things with pictures. And useful things, like the trace of a matrix!
### edit: some ways this might be related to programming
Someone pointed me to a couple of twitter threads (coincidentally from this week!!) that relate tensor categories & diagrammatic methods to programming:
1. [this thread from @KenScambler][8] (“My best kept secret* is that string & wiring diagrams, plucked straight out of applied category theory, are _fabulous_ for software and system design.”)
2. [this other thread by him of 31 interesting related things to this topic][9]
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/03/26/what-are-monoidal-categories/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://arxiv.org/pdf/1803.05316.pdf
[2]: https://jvns.ca/images/monoidal-preorder.png
[3]: https://jvns.ca/images/tensor-vector.png
[4]: https://qchu.wordpress.com/2012/11/05/introduction-to-string-diagrams/
[5]: https://qchu.wordpress.com/2012/11/06/string-diagrams-duality-and-trace/
[6]: https://jvns.ca/images/trace.png
[7]: https://github.com/jvns/masters-thesis/raw/master/thesis.pdf
[8]: https://twitter.com/KenScambler/status/1108738366529400832
[9]: https://twitter.com/KenScambler/status/1109474342822244353

@@ -0,0 +1,184 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What does debugging a program look like?)
[#]: via: (https://jvns.ca/blog/2019/06/23/a-few-debugging-resources/)
[#]: author: (Julia Evans https://jvns.ca/)
What does debugging a program look like?
======
I was debugging with a friend whos a relatively new programmer yesterday, and showed them a few debugging tips. Then I was thinking about how to teach debugging this morning, and [mentioned on Twitter][1] that Id never seen a really good guide to debugging your code. (there are a ton of really great replies by Anne Ogborn to that tweet if you are interested in debugging tips)
As usual, I got a lot of helpful answers and now I have a few ideas about how to teach debugging skills / describe the process of debugging.
### a couple of debugging resources
I was hoping for more links to debugging books/guides, but here are the 2 recommendations I got:
**“Debugging” by David Agans**: Several people recommended the book [Debugging][2], which looks like a nice and fairly short book that explains a debugging strategy. I haven't read it yet (though I ordered it to see if I should be recommending it) and the rules laid out in the book (“understand the system”, “make it fail”, “quit thinking and look”, “divide and conquer”, “change one thing at a time”, “keep an audit trail”, “check the plug”, “get a fresh view”, and “if you didn't fix it, it ain't fixed”) seem extremely reasonable :). He also has a charming [debugging poster][3].
**“How to debug” by John Regehr**: [How to Debug][4] is a very good blog post based on Regehrs experience teaching a university embedded systems course. Lots of good advice. He also has a [blog post reviewing 4 books about debugging][5], including Agans book.
### reproduce your bug (but how do you do that?)
The rest of this post is going to be an attempt to aggregate different ideas about debugging people tweeted at me.
Somewhat obviously, everybody agrees that being able to consistently reproduce a bug is important if you want to figure out whats going on. I have an intuitive sense for how to do this but Im not sure how to **explain** how to go from “I saw this bug twice” to “I can consistently reproduce this bug on demand on my laptop”, and I wonder whether the techniques you use to do this depend on the domain (backend web dev, frontend, mobile, games, C++ programs, embedded etc).
### reproduce your bug _quickly_
Everybody also agrees that it's extremely useful to be able to reproduce the bug quickly (if it takes you 3 minutes to check whether every change helped, iterating is VERY SLOW).
A few suggested approaches:
* for something that requires clicking on a bunch of things in a browser to reproduce, recording what you clicked on with [Selenium][6] and getting Selenium to replay the UI interactions (suggested [here][7])
* writing a unit test that reproduces the bug (if you can). bonus: you can add this to your test suite later if it makes sense
* writing a script / finding a command line incantation that does it (like `curl MY_APP.local/whatever`; there's a tiny sketch of this below)
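As an illustration of that last bullet (a sketch of mine; the URL is the same placeholder as in the bullet, not a real service), a reproduction script can be as small as:

```
#!/bin/bash
# reproduce.sh: one command to re-trigger the bug and print the result,
# so checking whether a change helped takes seconds instead of minutes
curl -s -o /dev/null -w "status: %{http_code}\n" "http://MY_APP.local/whatever"
```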
### accept that its probably your codes fault
Sometimes I see a problem and Im like “oh, library X has a bug”, “oh, its DNS”, “oh, SOME OTHER THING THAT IS NOT MY CODE is broken”. And sometimes its not my code! But in general between an established library and my code that I wrote last month, usually its my code that I wrote last month thats the problem :).
### start doing experiments
@act_gardner gave a [nice, short explanation of what you have to do after you reproduce your bug][8]
> I try to encourage people to first fully understand the bug - Whats happening? What do you expect to happen? When does it happen? When does it not happen? Then apply their mental model of the system to guess at what could be breaking and come up with experiments.
>
> Experiments could be changing or removing code, making API calls from a REPL, trying new inputs, poking at memory values with a debugger or print statements.
I think the loop here may be:
* make guess about one aspect about what might be happening (“this variable is set to X where it should be Y”, “the server is being sent the wrong request”, “this code is never running at all”)
* do experiment to check that guess
* repeat until you understand whats going on
### change one thing at a time
Everybody definitely agrees that it is important to change one thing at a time when doing an experiment to verify an assumption.
### check your assumptions
A lot of debugging is realizing that something you were **sure** was true (“wait this request is going to the new server, right, not the old one???”) is actually… not true. I made an attempt to [list some common incorrect assumptions][9]. Here are some examples:
* this variable is set to X (“that filename is definitely right”)
* that variables value cant possibly have changed between X and Y
* this code was doing the right thing before
* this function does X
* Im editing the right file
* there can't be any typos in that line I wrote, it is just 1 line of code
* the documentation is correct
* the code Im looking at is being executed at some point
* these two pieces of code execute sequentially and not in parallel
* the code does the same thing when compiled in debug / release mode (or with -O2 and without, or…)
* the compiler is not buggy (though this is last on purpose, the compiler is only very rarely to blame :))
### weird methods to get information
There are a lot of normal ways to do experiments to check your assumptions / guesses about what the code is doing (print out variable values, use a debugger, etc). Sometimes, though, youre in a more difficult environment where you cant print things out and dont have access to a debugger (or its inconvenient to do those things, maybe because there are too many events). Some ways to cope:
* [adding sounds on mobile][10]: “In the mobile world, I live on this advice. Xcode can play a sound when you hit a breakpoint (and continue without stopping). I place them certain places in the code, and listen for buzzing Tink to indicate tight loops or Morse/Pop pairs to catch unbalanced events” (also [this tweet][11])
* theres a very cool talk about [using XCode to play sound for iOS debugging here][12]
* [adding LEDs][13]: “When I did embedded dev ages ago on grids of transputers, we wired up an LED to an unused pin on each chip. It was surprisingly effective for diagnosing parallelism issues.”
* [string][14]: “My networks prof told me about a hack he saw at Xerox in the early days of Ethernet: a tap in the coax with an amp and motor and piece of string. The busier the network was, the faster the string twirled.”
* [peep][15] is a “network auralizer” that translates whats happening on your system into sounds. I spent 10 minutes trying to get it to compile and failed so far but it looks very fun and I want to try it!!
The point here is that information is the most important thing and you need to do whatevers necessary to get information.
### write your code so its easier to debug
Another point a few people brought up is that you can improve your program to make it easier to debug. tef has a nice post about this: [Write code that's easy to delete, and easy to debug too][16]. I thought this was very true:
> Debuggable code isnt necessarily clean, and code thats littered with checks or error handling rarely makes for pleasant reading.
I think one interpretation of “easy to debug” is “every single time there's an error, the program reports to you exactly what happened in an easy to understand way”. Whenever my program has a problem and says something like “error: failure to connect to SOME_IP port 443: connection timeout” I'm like THANK YOU THAT IS THE KIND OF THING I WANTED TO KNOW and I can check if I need to fix a firewall thing or if I got the wrong IP for some reason or what.
One simple example of this recently: I was making a request to a server I wrote and the response I got was “upstream connect error or disconnect/reset before headers”. This is an nginx error which basically in this case boiled down to “your program crashed before it sent anything in response to the request”. Figuring out the cause of the crash was pretty easy, but having better error handling (returning an error instead of crashing) would have saved me a little time because instead of having to go check the cause of the crash, I could have just read the error message and figured out what was going on right away.
### error messages are better than silently failing
To get closer to the dream of “every single time theres an error, the program reports to you exactly what happened in an easy to understand way” you also need to be disciplined about immediately returning an error message instead of silently writing incorrect data / passing a nonsense value to another function which will do WHO KNOWS WHAT with it and cause you a gigantic headache. This means adding code like this:
```
if UNEXPECTED_THING:
    # fail loudly instead of silently passing bad data along
    raise ValueError("oh no THING happened")
```
This isn't easy to get right (it's not always obvious where you should be raising errors!) but it really helps a lot.
### failure: print out a stack of errors, not just one error.
Related to returning helpful errors that make it easy to debug: Rust has a really incredible error handling library [called failure][17] which basically lets you return a chain of errors instead of just one error, so you can print out a stack of errors like:
```
"error starting server process" caused by
"error initializing logging backend" caused by
"connection failure: timeout connecting to 1.2.3.4 port 1234".
```
This is SO MUCH MORE useful than just `connection failure: timeout connecting to 1.2.3.4 port 1234` by itself because it tells you the significance of 1.2.3.4 (its something to do with the logging backend!). And I think its also more useful than `connection failure: timeout connecting to 1.2.3.4 port 1234` with a stack trace, because it summarizes at a high level the parts that went wrong instead of making you read all the lines in the stack trace (some of which might not be relevant!).
tools like this in other languages:
* Go: the idiom to do this seems to be to just concatenate your stack of errors together as a big string so you get “error: thing one: error: thing two : error: thing three” which works okay but is definitely a lot less structured than `failure`s system
* Java: I hear you can give exceptions causes but havent used that myself
* Python 3: you can use `raise ... from` (schematically, `raise NewException("what failed") from original_error`) which sets the `__cause__` attribute on the exception and then your exceptions will be separated by `The above exception was the direct cause of the following exception:..`
If you know how to do this in other languages Id be interested to hear!
### understand what the error messages mean
One sub debugging skill that I take for granted a lot of the time is understanding what error messages mean! I came across this nice graphic explaining [common Python errors and what they mean][18], which breaks down things like `NameError`, `IOError`, etc.
I think a reason interpreting error messages is hard is that understanding a new error message might mean learning a new concept: `NameError` can mean “Your code uses a variable outside the scope where it's defined”, but to really understand that you need to understand what variable scope is! I ran into this a lot when learning Rust: the Rust compiler would be like “you have a weird lifetime error” and I'd be like “ugh ok Rust I get it I will go actually learn about how lifetimes work now!”.
And a lot of the time error messages are caused by a problem very different from the text of the message, like how “upstream connect error or disconnect/reset before headers” might mean “julia, your server crashed!”. The skill of understanding what error messages mean is often not transferable when you switch to a new area (if I started writing a lot of React or something tomorrow, I would probably have no idea what any of the error messages meant!). So this definitely isn't just an issue for beginner programmers.
### thats all for now!
I feel like the big thing I'm missing when talking about debugging skills is a stronger understanding of where people get stuck with debugging: it's easy to say “well, you need to reproduce the problem, then make a more minimal reproduction, then start coming up with guesses and verifying them, and improve your mental model of the system, and then figure it out, then fix the problem and hopefully write a test to make it not come back”, but where are people actually getting stuck in practice? What are the hardest parts? I have some sense of what the hardest parts usually are for me but I'm still not sure what the hardest parts usually are for someone newer to debugging their code.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/06/23/a-few-debugging-resources/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/b0rk/status/1142825259546140673
[2]: http://debuggingrules.com/
[3]: http://debuggingrules.com/?page_id=40
[4]: https://blog.regehr.org/archives/199
[5]: https://blog.regehr.org/archives/849
[6]: https://www.seleniumhq.org/
[7]: https://twitter.com/AnnieTheObscure/status/1142843984642899968
[8]: https://twitter.com/act_gardner/status/1142838587437830144
[9]: https://twitter.com/b0rk/status/1142812831420768257
[10]: https://twitter.com/cocoaphony/status/1142847665690030080
[11]: https://twitter.com/AnnieTheObscure/status/1142842421954244608
[12]: https://qnoid.com/2013/06/08/Sound-Debugging.html
[13]: https://twitter.com/wombatnation/status/1142887843963867136
[14]: https://twitter.com/irvingreid/status/1142887472441040896
[15]: http://peep.sourceforge.net/intro.html
[16]: https://programmingisterrible.com/post/173883533613/code-to-debug
[17]: https://github.com/rust-lang-nursery/failure
[18]: https://pythonforbiologists.com/29-common-beginner-errors-on-one-page/

@@ -0,0 +1,256 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get your work recognized: write a brag document)
[#]: via: (https://jvns.ca/blog/brag-documents/)
[#]: author: (Julia Evans https://jvns.ca/)
Get your work recognized: write a brag document
======
There's this idea that, if you do great work at your job, people will (or should!) automatically recognize that work and reward you for it with promotions / increased pay. In practice, it's often more complicated than that: some kinds of important work are more visible/memorable than others. It's frustrating to have done something really important and later realize that you didn't get rewarded for it just because the people making the decision didn't understand or remember what you did. So I want to talk about a tactic that I and lots of people I work with have used!
This blog post isnt just about being promoted or getting raises though. The ideas here have actually been more useful to me to help me reflect on themes in my work, whats important to me, what Im learning, and what Id like to be doing differently. But theyve definitely helped with promotions!
You can also [skip to the brag document template at the end][1].
### you dont remember everything you did
One thing I'm always struck by when it comes to performance review time is a feeling of “wait, what _did_ I do in the last 6 months?”. This is a kind of demoralizing feeling and it's usually not based in reality, more in “I forgot what cool stuff I actually did”.
I invariably end up having to spend a bunch of time looking through my pull requests, tickets, launch emails, design documents, and more. I always end up finding small (and sometimes not-so-small) things that I completely forgot I did, like:
* mentored an intern 5 months ago
* did a small-but-important security project
* spent a few weeks helping get an important migration over the line
* helped X put together this design doc
* etcetera!
### your manager doesnt remember everything you did
And if you dont remember everything important you did, your manager (no matter how great they are!) probably doesnt either. And they need to explain to other people why you should be promoted or given an evaluation like “exceeds expectations” (“Xs work is so awesome!!!!” doesnt fly).
So if your manager is going to effectively advocate for you, they need help.
### heres the tactic: write a document listing your accomplishments
The tactic is pretty simple! Instead of trying to remember everything you did with your brain, maintain a “brag document” that lists everything so you can refer to it when you get to performance review season! This is a pretty common tactic: when I started doing this I mentioned it to more experienced people and they were like “oh yeah, I've been doing that for a long time, it really helps”.
Where I work we call this a “brag document” but Ive heard other names for the same concept like “hype document” or “list of stuff I did” :).
Theres a basic template for a brag document at the end of this post.
### share your brag document with your manager
When I first wrote a brag document I was kind of nervous about sharing it with my manager. It felt weird to be like “hey, uh, look at all the awesome stuff I did this year, I wrote a long document listing everything”. But my manager was really thankful for it I think his perspective was “this makes my job way easier, now I can look at the document when writing your perf review instead of trying to remember what happened”.
Giving them a document that explains your accomplishments will really help your manager advocate for you in discussions about your performance and come to any meetings they need to have prepared.
Brag documents also **really** help with manager transitions: if you get a new manager 3 months before an important performance review that you want to do well on, giving them a brag document outlining your most important work & its impact will help them understand what you've been doing even though they may not have been aware of any of your work before.
### share it with your peer reviewers
Similarly, if your company does peer feedback as part of the promotion/perf process, share your brag document with your peer reviewers!! Every time someone shares their doc with me I find it SO HELPFUL with writing their review, for much the same reasons it's helpful to share it with your manager: it reminds me of all the amazing things they did, and when they list their goals in their brag document it also helps me see what areas they might be most interested in feedback on.
On some teams at work its a team norm to share a brag document with peer reviewers to make it easier for them.
### explain the big picture
In addition to just listing accomplishments, in your brag document you can write the narrative explaining the big picture of your work. Have you been really focused on security? On building your product skills & having really good relationships with your users? On building a strong culture of code review on the team?
In my brag document, I like to do this by making a section for areas that Ive been focused on (like “security”) and listing all the work Ive done in that area there. This is especially good if youre working on something fuzzy like “building a stronger culture of code review” where all the individual actions you do towards that might be relatively small and there isnt a big shiny ship.
### use your brag document to notice patterns
In the past Ive found the brag document useful not just to hype my accomplishments, but also to reflect on the work Ive done. Some questions its helped me with:
* What work do I feel most proud of?
* Are there themes in these projects I should be thinking about? Whats the big picture of what Im working on? (am I working a lot on security? localization?).
* What do I wish I was doing more / less of?
* Which of my projects had the effect I wanted, and which didnt? Why might that have been?
* What could have gone better with project X? What might I want to do differently next time?
### you can write it all at once or update it every 2 weeks
Many people have told me that it works best for them if they take a few minutes to update their brag document every 2 weeks. For me it actually works better to do a single marathon session every 6 months or every year where I look through everything I did and reflect on it all at once. Try out different approaches and see what works for you!
### dont forget to include the fuzzy work
A lot of us work on fuzzy projects that can feel hard to quantify, like:
* improving code quality on the team / making code reviews a little more in depth
* making on call easier
* building a more fair interview process / performance review system
* refactoring / driving down technical debt
A lot of people will leave this kind of work out because they dont know how to explain why its important. But I think this kind of work is especially important to put into your brag document because its the most likely to fall under the radar! One way to approach this is to, for each goal:
1. explain your goal for the work (why do you think its important to refactor X piece of code?)
2. list some things youve done towards that goal
3. list any effects youve seen of the work, even if theyre a little indirect
If you tell your coworkers this kind of work is important to you and tell them what youve been doing, maybe they can also give you ideas about how to do it more effectively or make the effects of that work more obvious!
### encourage each other to celebrate accomplishments
One nice side effect of having a shared idea that it's normal/good to maintain a brag document at work is that I sometimes see people encouraging each other to record & celebrate their accomplishments (“hey, you should put that in your brag doc, that was really good!”). It can be hard to see the value of your work sometimes, especially when you're working on something hard, and an outside perspective from a friend or colleague can really help you see why what you're doing is important.
Brag documents are good when you use them on your own to advocate for yourself, but I think theyre better as a collaborative effort to recognize where people are excelling.
Next, I want to talk about a couple of structures that weve used to help people recognize their accomplishments.
### the brag workshop: help people list their accomplishments
The way this “brag document” practice started in the first place is that my coworker [Karla][2] and I wanted to help other women in engineering advocate for themselves more in the performance review process. The idea is that some people undersell their accomplishments more than they should, so we wanted to encourage those people to “brag” a little bit and write down what they did that was important.
We did this by running a “brag workshop” just before performance review season. The format of the workshop is like this:
**Part 1: write the document: 1-2 hours**. Everybody sits down with their laptop, starts looking through their pull requests, tickets they resolved, design docs, etc, and puts together a list of important things they did in the last 6 months.
**Part 2: pair up and make the impact of your work clearer: 1 hour**. The goal of this part is to pair up, review each other's documents, and identify places where people haven't bragged “enough”: maybe they worked on a project that was extremely critical to the company but didn't highlight how important it was, maybe they improved test performance but didn't say that they made the tests 3 times faster and that it improved everyone's developer experience. It's easy to accidentally write “I shipped $feature” and miss the follow up (“… which caused $thing to happen”). Another person reading through your document can help you catch the places where you need to clarify the impact.
### biweekly brag document writing session
Another approach to helping people remember their accomplishments: my friend Dave gets some friends together every couple of weeks or so for everyone to update their brag documents. Its a nice way for people to talk about work that theyre happy about &amp; celebrate it a little bit, and updating your brag document as you go can be easier than trying to remember everything you did all at once at the end of the year.
These dont have to be people in the same company or even in the same city that group meets over video chat and has people from many different companies doing this together from Portland, Toronto, New York, and Montreal.
In general, especially if you're someone who really cares about your work, I think it's really positive to share your goals & accomplishments (and the things that haven't gone so well too!) with your friends and coworkers. It makes it feel less like you're working alone and more like everyone is supporting each other in helping them accomplish what they want.
### thanks
Thanks to Karla Burnett who I worked with on spreading this idea at work, to Dave Vasilevsky for running brag doc writing sessions, to Will Larson who encouraged me to start one [of these][3] in the first place, to my manager Jay Shirley for always being encouraging &amp; showing me that this is a useful way to work with a manager, and to Allie, Dan, Laura, Julian, Kamal, Stanley, and Vaibhav for reading a draft of this.
Id also recommend the blog post [Hype Yourself! Youre Worth It!][4] by Aashni Shah which talks about a similar approach.
## Appendix: brag document template
Heres a template for a brag document! Usually I make one brag document per year. (“Julias 2017 brag document”). I think its okay to make it quite long / comprehensive 5-10 pages or more for a year of work doesnt seem like too much to me, especially if youre including some graphs/charts / screenshots to show the effects of what you did.
One thing I want to emphasize, for people who dont like to brag, is **you dont have to try to make your work sound better than it is**. Just make it sound **exactly as good as it is**! For example “was the primary contributor to X new feature thats now used by 60% of our customers and has gotten Y positive feedback”.
### Goals for this year:
* List your major goals here! Sharing your goals with your manager & coworkers is really nice because it helps them see how they can support you in accomplishing those goals!
### Goals for next year
* If its getting towards the end of the year, maybe start writing down what you think your goals for next year might be.
### Projects
For each one, go through:
* What your contributions were (did you come up with the design? Which components did you build? Was there some useful insight like “wait, we can cut scope and do what we want by doing way less work” that you came up with?)
* The impact of the project who was it for? Are there numbers you can attach to it? (saved X dollars? shipped new feature that has helped sell Y big deals? Improved performance by X%? Used by X internal users every day?). Did it support some important non-numeric company goal (required to pass an audit? helped retain an important user?)
Remember: don't forget to explain what the results of your work actually were! It's often important to go back a few months later and fill in what actually happened after you launched the project.
### Collaboration & mentorship
Examples of things in this category:
* Helping others in an area youre an expert in (like “other engineers regularly ask me for one-off help solving weird bugs in their CSS” or “quoting from the C standard at just the right moment”)
* Mentoring interns / helping new team members get started
* Writing really clear emails/meeting notes
* Foundational code that other people built on top of
* Improving monitoring / dashboards / on call
* Any code review that you spent a particularly long time on / that you think was especially important
* Important questions you answered (“helped Risha from OTHER_TEAM with a lot of questions related to Y”)
* Mentoring someone on a project (“gave Ben advice from time to time on leading his first big project”)
* Giving an internal talk or workshop
### Design & documentation
List design docs & documentation that you worked on
* Design docs: I usually just say “wrote design for X” or “reviewed design for X”
* Documentation: maybe briefly explain the goal behind this documentation (for example “we were getting a lot of questions about X, so I documented it and now we can answer the questions more quickly”)
### Company building
This is a category we have at work it basically means “things you did to help the company overall, not just your project / team”. Some things that go in here:
* Going above & beyond with interviewing or recruiting (doing campus recruiting, etc)
* Improving important processes, like the interview process or writing better onboarding materials
### What you learned
My friend Julian suggested this section and I think its a great idea try listing important things you learned or skills youve acquired recently! Some examples of skills you might be learning or improving:
* how to do performance analysis & make code run faster
* internals of an important piece of software (like the JVM or Postgres or Linux)
* how to use a library (like React)
* how to use an important tool (like the command line or Firefox dev tools)
* about a specific area of programming (like localization or timezones)
* an area like product management / UX design
* how to write a clear design doc
* a new programming language
Its really easy to lose track of what skills youre learning, and usually when I reflect on this I realize I learned a lot more than I thought and also notice things that Im _not_ learning that I wish I was.
### Outside of work
Its also often useful to track accomplishments outside of work, like:
* blog posts
* talks/panels
* open source work
* Industry recognition
I think this can be a nice way to highlight how youre thinking about your career outside of strictly what youre doing at work.
This can also include other non-career-related things youre proud of, if that feels good to you! Some people like to keep a combined personal + work brag document.
### General prompts
If youre feeling stuck for things to mention, try:
* If you were trying to convince a friend to come join your company/team, what would you tell them about your work?
* Did anybody tell you you did something well recently?
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/brag-documents/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: tmp.nd0Dg3RXQE#template
[2]: https://karla.io/
[3]: https://lethain.com/career-narratives/
[4]: http://blog.aashni.me/2019/01/hype-yourself-youre-worth-it/

View File

@ -0,0 +1,508 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Learn how to Install LXD / LXC Containers in Ubuntu)
[#]: via: (https://www.linuxtechi.com/install-lxd-lxc-containers-from-scratch/)
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)
Learn how to Install LXD / LXC Containers in Ubuntu
======
Let me start by explaining what a container is: it is a normal process on the host machine (any Linux-based machine) with the following characteristics,
* It feels like a VM, but it is not.
* Uses the host kernel.
* Cannot boot a different operating system.
* Cant have its own kernel modules.
* Does not need an “**init**” process with PID (process id) 1
[![Learn-LXD-LXC-Containers][1]][2]
LXC (**LinuX Containers**) is an operating-system-level virtualization technology that was developed long ago; it has existed since the days of BSD and System V Release 4 (popular Unix flavors during the 1980s-90s). But until recently, no one knew how much it could help us save in terms of resource utilization. Because of this technology change, all enterprises are moving towards the adoption of virtualization (be it cloud or Docker containers). This also helped in better management of **OpEx (operational expenditures)** and **CapEx (capital expenditures)** costs. Using this technique, we can create and run multiple, isolated Linux virtual environments on a single Linux host machine (called the control host). LXC mainly uses Linuxs cgroups and namespaces functionalities, which were introduced in kernel version 2.6.24. In parallel, many advancements happened in hypervisors like **KVM**, **QEMU**, **Hyper-V**, **ESXi** etc. Especially KVM (Kernel-based Virtual Machine), which is part of the Linux kernel itself, helped drive this kind of advancement.
The difference between LXC and LXD is that LXC is the original, older way to manage containers, and it is still supported; all LXC commands start with “**lxc-**”, like “**lxc-create**” & “**lxc-info**”. LXD is the newer way to manage containers, in which the single `lxc` command is used for all container operations and management.
All of us know that “**Docker**” utilizes LXC concepts and was developed using the Go language, cgroups, namespaces and, finally, the Linux kernel itself. Docker was originally built using LXC as its basic foundation block. Docker depends on the underlying infrastructure & hardware, using the operating system as the medium; however, it is a portable and easily deployable container engine, and all its dependencies run inside a virtual container on most Linux-based servers. Cgroups and namespaces are the building-block concepts for both LXC and Docker containers. Following is a brief description of these concepts.
### C Groups (Control Groups)
With cgroups, each resource has its own hierarchy.
* CPU, memory, I/O, etc. each have their own control group hierarchy. The main characteristics of cgroups are:
* Each process belongs to one node in each hierarchy
* Each hierarchy starts with one (root) node
* Initially, all processes start at the root node; each node is therefore equivalent to a “group of processes”
* Hierarchies are independent of each other, e.g. CPU, block I/O, memory, etc.
There are various cgroup types, as listed below:
1) **Memory Cgroups**
a) Keeps track of pages used by each group.
b) File read/write/mmap from block devices
c) Anonymous memory (stack, heap, etc.)
d) Each memory page is charged to a group
e) Pages can be shared across multiple groups
2) **CPU Cgroups**
a) Tracks user/system CPU time
b) Tracks usage per CPU
c) Allows setting weights
d) Cant set CPU limits
3) **Block IO Cgroup**
a) Keeps track of reads/writes (I/Os)
b) Set throttle (limits) for each group (per block device)
c) Set real weights for each group (per block device)
4) **Devices Cgroup**
a) Controls what the group can do on device nodes
b) Permissions include read/write/mknod
5) **Freezer Cgroup**
a) Allows freezing/thawing a group of processes
b) Similar to SIGSTOP/SIGCONT
c) Cannot be detected by processes
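You can see these hierarchies directly through the cgroup virtual filesystem. Below is a minimal sketch (assuming a cgroup v1 layout, as found on Ubuntu 18.04) of capping a group of processes at 256 MB of memory by hand; the group name “demo” is just an example.
```
# create a new node in the memory hierarchy
sudo mkdir /sys/fs/cgroup/memory/demo
# cap the group at 256 MB
echo $((256*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# move the current shell into the group; its children inherit the limit
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs
```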
### Namespaces
Namespaces provide processes with their own view of the system. Each process is in one namespace of each type.
There are multiple namespaces like,
* PID processes within a PID namespace only see processes in the same PID namespace
* Net processes within a given network namespace get their own private network stack
* Mnt processes can have their own “root” and private “mount” points
* UTS gives a container its own hostname
* IPC allows processes to have their own IPC semaphores, IPC message queues and shared memory
* USR allows mapping of UIDs/GIDs
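The `unshare` utility from util-linux is a quick way to see namespaces in action. A small sketch, assuming a reasonably recent util-linux:
```
# new UTS namespace: the hostname change is invisible to the host
sudo unshare --uts --fork bash -c 'hostname ns-demo; hostname'
# new PID namespace: the shell sees itself as PID 1
sudo unshare --pid --fork --mount-proc bash -c 'echo $$'
```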
### Installation and configuration of LXD containers
To install LXD on an Ubuntu system (18.04 LTS), start with the below apt commands:
```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install lxd -y
```
Once LXD is installed, we can start with its initialization as below (most of the time, the default options are fine):
```
root@linuxtechi:~$ sudo lxd init
```
![lxc-init-ubuntu-system][1]
Once LXD is initialized successfully, run the below command to verify the information:
```
root@linuxtechi:~$ sudo lxc info | more
```
![lxc-info-command][1]
Use the below command to list whether any image has been downloaded on our host:
```
root@linuxtechi:~$ sudo lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
root@linuxtechi:~$
```
The quick and easy way to start your first container on Ubuntu 18.04 (or any supported Ubuntu flavor) is the following command; the container name we have provided is “shashi”:
```
root@linuxtechi:~$ sudo lxc launch ubuntu:18.04 shashi
Creating shashi
Starting shashi
root@linuxtechi:~$
```
To list the LXD containers on the system:
```
root@linuxtechi:~$ sudo lxc list
+--------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| shashi | RUNNING | 10.122.140.140 (eth0) | fd42:49da:7c44:cebe:216:3eff:fea4:ea06 (eth0) | PERSISTENT | 0 |
+--------+---------+-----------------------+-----------------------------------------------+------------+-----------+
root@linuxtechi:~$
```
Other container management commands for LXD are listed below:
**Note:** In the below examples, shashi is my container name.
**How to get a bash shell in your LXD container?**
```
root@linuxtechi:~$ sudo lxc exec shashi bash
root@linuxtechi:~#
```
**How to stop, start & restart an LXD container?**
```
root@linuxtechi:~$ sudo lxc stop shashi
root@linuxtechi:~$ sudo lxc list
+--------+---------+------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+------+------+------------+-----------+
| shashi | STOPPED | | | PERSISTENT | 0 |
+--------+---------+------+------+------------+-----------+
root@linuxtechi:~$
root@linuxtechi:~$ sudo lxc start shashi
root@linuxtechi:~$ sudo lxc restart shashi
```
**How to delete an LXD container?**
```
root@linuxtechi:~$ sudo lxc stop shashi
root@linuxtechi:~$ sudo lxc delete shashi
root@linuxtechi:~$ sudo lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
root@linuxtechi:~$
```
**How to take a snapshot of an LXD container and then restore it?**
Lets assume we have a pkumar container based on the CentOS 7 image; to take the snapshot, use the following:
```
root@linuxtechi:~$ sudo lxc snapshot pkumar pkumar_snap0
```
Use the below command to verify the snapshot:
```
root@linuxtechi:~$ sudo lxc info pkumar | grep -i Snapshots -A2
Snapshots:
pkumar_snap0 (taken at 2019/08/02 19:39 UTC) (stateless)
root@linuxtechi:~$
```
Use the below command to restore the LXD container from its snapshot.
Syntax:
$ lxc restore {container_name} {snapshot_name}
```
root@linuxtechi:~$ sudo lxc restore pkumar pkumar_snap0
root@linuxtechi:~$
```
**How to delete an LXD container snapshot?**
```
$ sudo lxc delete <container_name/snapshot_name>
```
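For example, to drop the snapshot taken earlier (a quick sketch reusing the container and snapshot names from above):
```
root@linuxtechi:~$ sudo lxc delete pkumar/pkumar_snap0
```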
**How to set memory, CPU and disk limits on an LXD container?**
Syntax to set a memory limit:
# lxc config set <container_name> limits.memory <Memory_Size>KB/MB/GB
Syntax to set a CPU limit:
# lxc config set <container_name> limits.cpu {Number_of_CPUs}
Syntax to set a disk limit:
# lxc config device set <container_name> root size <Size_MB/GB>
**Note:** Setting a disk limit requires a btrfs or ZFS filesystem.
Lets set limits on memory and CPU for the container shashi using the following commands:
```
root@linuxtechi:~$ sudo lxc config set shashi limits.memory 256MB
root@linuxtechi:~$ sudo lxc config set shashi limits.cpu 2
```
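To confirm that the limits took effect, one option (a sketch, assuming the container is running) is to check the container config and what the container itself reports:
```
root@linuxtechi:~$ sudo lxc config show shashi | grep -i limits
root@linuxtechi:~$ sudo lxc exec shashi -- free -m
```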
### Install and configure LXC container (commands and operations)
To install LXC on your Ubuntu system, use the below apt command:
```
root@linuxtechi:~$ sudo apt install lxc -y
```
In earlier versions of LXC, the “**lxc-clone**” command was used for cloning; it was later deprecated, and now the “**lxc-copy**” command is widely used for cloning operations.
**Note:** To get the “lxc-copy” command working, install the following package:
```
root@linuxtechi:~$ sudo apt install lxc1 -y
```
**Creating Linux Containers using the templates**
LXC provides ready-made templates for easy installation of Linux containers. Templates are usually found in the directory path /usr/share/lxc/templates, but a fresh installation will not include them, so to download the templates to your local system, run the below command:
```
root@linuxtechi:~$ sudo apt install lxc-templates -y
```
Once the lxc-templates package is installed successfully, the templates will be available:
```
root@linuxtechi:~$ sudo ls /usr/share/lxc/templates/
lxc-alpine lxc-centos lxc-fedora lxc-oci lxc-plamo lxc-sparclinux lxc-voidlinux
lxc-altlinux lxc-cirros lxc-fedora-legacy lxc-openmandriva lxc-pld lxc-sshd
lxc-archlinux lxc-debian lxc-gentoo lxc-opensuse lxc-sabayon lxc-ubuntu
lxc-busybox lxc-download lxc-local lxc-oracle lxc-slackware lxc-ubuntu-cloud
root@linuxtechi:~$
```
Lets launch a container using a template.
Syntax: lxc-create -n <container_name> -t <template_name>
```
root@linuxtechi:~$ sudo lxc-create -n shashi_lxc -t ubuntu
………………………
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Current default time zone: 'Etc/UTC'
Local time is now: Fri Aug 2 11:46:42 UTC 2019.
Universal Time is now: Fri Aug 2 11:46:42 UTC 2019.
##
# The default user is 'ubuntu' with password 'ubuntu'!
# Use the 'sudo' command to run tasks as root in the container.
##
………………………………………
root@linuxtechi:~$
```
Once the template creation is complete, we can log in to the container console using the following steps:
```
root@linuxtechi:~$ sudo lxc-start -n shashi_lxc -d
root@linuxtechi:~$ sudo lxc-console -n shashi_lxc
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
Ubuntu 18.04.2 LTS shashi_lxc pts/0
shashi_lxc login: ubuntu
Password:
Last login: Fri Aug 2 12:00:35 UTC 2019 on pts/0
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-20-generic x86_64)
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
root@linuxtechi_lxc:~$ free -h
total used free shared buff/cache available
Mem: 3.9G 23M 3.8G 112K 8.7M 3.8G
Swap: 1.9G 780K 1.9G
root@linuxtechi_lxc:~$ grep -c processor /proc/cpuinfo
1
root@linuxtechi_lxc:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 40G 7.4G 31G 20% /
root@linuxtechi_lxc:~$
```
Now log out or exit from the container and go back to the host machines login window. With the lxc-ls command, we can see that the shashi_lxc container has been created.
```
root@linuxtechi:~$ sudo lxc-ls
shashi_lxc
root@linuxtechi:~$
```
The “**lxc-ls -f**” command provides details including the IP address of the container, as shown below:
```
root@linuxtechi:~$ sudo lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
shashi_lxc RUNNING 0 - 10.0.3.190 - false
root@linuxtechi:~$
```
The “**lxc-info -n <container_name>**” command provides all the required details, including state, IP address etc.
```
root@linuxtechi:~$ sudo lxc-info -n shashi_lxc
Name: shashi_lxc
State: RUNNING
PID: 6732
IP: 10.0.3.190
CPU use: 2.38 seconds
BlkIO use: 240.00 KiB
Memory use: 27.75 MiB
KMem use: 5.04 MiB
Link: vethQ7BVGU
TX bytes: 2.01 KiB
RX bytes: 9.52 KiB
Total bytes: 11.53 KiB
root@linuxtechi:~$
```
**How to Start, Stop, Restart and Delete LXC containers**
```
$ lxc-start -n <container_name>
$ lxc-stop -n <container_name>      # add -r to reboot (restart) the container instead of stopping it
$ lxc-destroy -n <container_name>   # permanently deletes the container
```
**LXC Cloning operation**
Now lets perform the main cloning operation on the LXC container. The following steps are involved.
As described earlier, LXC offers the ability to clone a container from an existing container; run the following command to clone the existing “shashi_lxc” container to a new container, “shashi_lxc_clone”.
**Note:** Before starting the cloning operation, we have to make sure the existing container is stopped, using the “**lxc-stop**” command.
```
root@linuxtechi:~$ sudo lxc-stop -n shashi_lxc
root@linuxtechi:~$ sudo lxc-copy -n shashi_lxc -N shashi_lxc_clone
root@linuxtechi:~$ sudo lxc-ls
shashi_lxc shashi_lxc_clone
root@linuxtechi:~$
```
Now start the cloned container
```
root@linuxtechi:~$ sudo lxc-start -n shashi_lxc_clone
root@linuxtechi:~$ sudo lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
shashi_lxc STOPPED 0 - - - false
shashi_lxc_clone RUNNING 0 - 10.0.3.201 - false
root@linuxtechi:~$
```
With the above set of commands, the cloning operation is done and the new clone “shashi_lxc_clone” has been created. We can log in to this LXC container console with the below steps:
```
root@linuxtechi:~$ sudo lxc-console -n shashi_lxc_clone
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
Ubuntu 18.04.2 LTS shashi_lxc pts/0
shashi_lxc login:
```
**LXC Network configuration and commands**
We can attach to the newly created container, but to remotely log in to it using SSH or any other means, we have to make some minimal configuration changes, as explained below:
```
root@linuxtechi:~$ sudo lxc-attach -n shashi_lxc_clone
root@linuxtechi_lxc:/#
root@linuxtechi_lxc:/# useradd -m shashi
root@linuxtechi_lxc:/# passwd shashi
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
root@linuxtechi_lxc:/#
```
First, install the SSH server using the following command so that an SSH connection can be established smoothly:
```
root@linuxtechi_lxc:/# apt install openssh-server -y
```
Now get the IP address of the existing lxc container using the following command,
```
root@linuxtechi_lxc:/# ip addr show eth0|grep inet
inet 10.0.3.201/24 brd 10.0.3.255 scope global dynamic eth0
inet6 fe80::216:3eff:fe82:e251/64 scope link
root@linuxtechi_lxc:/#
```
From the host machine, in a new console window, use the following command to connect to this container over SSH:
```
root@linuxtechi:~$ ssh 10.0.3.201
root@linuxtechi's password:
$
```
Now we have logged in to the container using an SSH session.
**LXC process related commands**
```
root@linuxtechi:~$ ps aux|grep lxc|grep -v grep
```
![lxc-process-ubuntu-system][1]
**LXC snapshot operation**
Snapshotting is one of the main operations: it takes a point-in-time snapshot of an LXC container, and the snapshot can be used to restore the container later.
```
root@linuxtechi:~$ sudo lxc-stop -n shashi_lxc
root@linuxtechi:~$ sudo lxc-snapshot -n shashi_lxc
root@linuxtechi:~$
```
The snapshot path can be located using the following command.
```
root@linuxtechi:~$ sudo lxc-snapshot -L -n shashi_lxc
snap0 (/var/lib/lxc/shashi_lxc/snaps) 2019:08:02 20:28:49
root@linuxtechi:~$
```
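The lxc-snapshot command can also roll the container back: its -r option restores a snapshot. A short sketch using the snapshot name from the listing above:
```
root@linuxtechi:~$ sudo lxc-snapshot -n shashi_lxc -r snap0
root@linuxtechi:~$ sudo lxc-start -n shashi_lxc
```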
**Conclusion:**
LXC (LinuX Containers) is one of the early container technologies. Understanding the concepts behind LXC will help in a deeper understanding of other containers like Docker. This article has provided insights into cgroups and namespaces, which are essential concepts for a better understanding of containers in general. Many of the LXC operations like cloning, snapshotting, network configuration etc. are covered with command-line examples.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-lxd-lxc-containers-from-scratch/
作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/shashidhar/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/08/Learn-LXD-LXC-Containers.jpg

View File

@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (curl exercises)
[#]: via: (https://jvns.ca/blog/2019/08/27/curl-exercises/)
[#]: author: (Julia Evans https://jvns.ca/)
curl exercises
======
Recently Ive been interested in how people learn things. I was reading Kathy Sierras great book [Badass: Making Users Awesome][1]. It talks about the idea of _deliberate practice_.
The idea is that you find a small micro-skill that can be learned in maybe 3 sessions of 45 minutes, and focus on learning that micro-skill. So, as an exercise, I was trying to think of a computer skill that I thought could be learned in 3 45-minute sessions.
I thought that making HTTP requests with `curl` might be a skill like that, so here are some curl exercises as an experiment!
### whats curl?
curl is a command line tool for making HTTP requests. I like it because its an easy way to test that servers or APIs are doing what I think, but its a little confusing at first!
Heres a drawing explaining curls most important command line arguments (which is page 6 of my [Bite Size Networking][2] zine). You can click to make it bigger.
<https://jvns.ca/images/curl.jpeg>
### fluency is valuable
With any command line tool, I think having fluency is really helpful. Its really nice to be able to just type in the thing you need. For example recently I was testing out the Gumroad API and I was able to just type in:
```
curl https://api.gumroad.com/v2/sales \
-d "access_token=<SECRET>" \
-X GET -d "before=2016-09-03"
```
and get things working from the command line.
### 21 curl exercises
These exercises are about understanding how to make different kinds of HTTP requests with curl. Theyre a little repetitive on purpose. They exercise basically everything I do with curl.
To keep it simple, were going to make a lot of our requests to the same website: <https://httpbin.org>. httpbin is a service that accepts HTTP requests and then tells you what request you made.
1. Request <https://httpbin.org>
2. Request <https://httpbin.org/anything>. httpbin.org/anything will look at the request you made, parse it, and echo back to you what you requested. curls default is to make a GET request.
3. Make a POST request to <https://httpbin.org/anything>
4. Make a GET request to <https://httpbin.org/anything>, but this time add some query parameters (set `value=panda`).
5. Request googles robots.txt file ([www.google.com/robots.txt][3])
6. Make a GET request to <https://httpbin.org/anything> and set the header `User-Agent: elephant`.
7. Make a DELETE request to <https://httpbin.org/anything>
8. Request <https://httpbin.org/anything> and also get the response headers
9. Make a POST request to <https://httpbin.org/anything> with the JSON body `{"value": "panda"}`
10. Make the same POST request as the previous exercise, but set the Content-Type header to `application/json` (because POST requests need to have a content type that matches their body). Look at the `json` field in the response to see the difference from the previous one.
11. Make a GET request to <https://httpbin.org/anything> and set the header `Accept-Encoding: gzip` (what happens? why?)
12. Put a bunch of a JSON in a file and then make a POST request to <https://httpbin.org/anything> with the JSON in that file as the body
13. Make a request to <https://httpbin.org/image> and set the header Accept: image/png. Save the output to a PNG file and open the file in an image viewer. Try the same thing with different `Accept:` headers.
14. Make a PUT request to <https://httpbin.org/anything>
15. Request <https://httpbin.org/image/jpeg>, save it to a file, and open that file in your image editor.
16. Request <https://www.twitter.com>. Youll get an empty response. Get curl to show you the response headers too, and try to figure out why the response was empty.
17. Make any request to <https://httpbin.org/anything> and just set some nonsense headers (like `panda: elephant`)
18. Request <https://httpbin.org/status/404> and <https://httpbin.org/status/200>. Request them again and get curl to show the response headers.
19. Request <https://httpbin.org/anything> and set a username and password (with `-u username:password`)
20. Download the Twitter homepage (<https://twitter.com>) in Spanish by setting the `Accept-Language: es-ES` header.
21. Make a request to the Stripe API with curl. (see <https://stripe.com/docs/development> for how, they give you a test API key). Try making exactly the same request to <https://httpbin.org/anything>.
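If you get stuck, heres one possible way to solve exercise 10, just as a reference (the other exercises all yield to similar one-liners):
```
curl -X POST https://httpbin.org/anything \
  -H "Content-Type: application/json" \
  -d '{"value": "panda"}'
```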
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/08/27/curl-exercises/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Badass-Making-Awesome-Kathy-Sierra/dp/1491919019
[2]: https://wizardzines.com/zines/bite-size-networking
[3]: http://www.google.com/robots.txt

View File

@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (git exercises: navigate a repository)
[#]: via: (https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/)
[#]: author: (Julia Evans https://jvns.ca/)
git exercises: navigate a repository
======
I think the [curl exercises][1] the other day went well, so today I woke up and wanted to try writing some Git exercises. Git is a big thing to learn, probably too big to learn in a few hours, so my first idea for how to break it down was to start with **navigating** a repository.
I was originally going to use a toy test repository, but then I thought why not a real repository? Thats way more fun! So were going to navigate the repository for the Ruby programming language. You dont need to know any C to do this exercise, its just about getting comfortable with looking at how files in a repository change over time.
### clone the repository
To get started, clone the repository:
```
git clone https://github.com/ruby/ruby
```
The big difference with this repository (as compared to most of the repositories youll work with in real life) is that it doesnt have branches, but it DOES have lots of tags, which are similar to branches in that theyre both just pointers to a commit. So well do exercises with tags instead of branches. The way you _change_ tags and branches is very different, but the way you _look at_ tags and branches is exactly the same.
### a git SHA always refers to the same code
The most important thing to keep in mind while doing these exercises is that a git SHA like `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` always refers to the same code, as explained in this page. This page is from a zine I wrote with Katie Sylor-Miller called [Oh shit, git!][2]. (She also has a great site called <https://ohshitgit.com/> that inspired the zine).
<https://wizardzines.com/zines/oh-shit-git/samples/ohshit-commit.png>
Well be using git SHAs really heavily in the exercises to get you used to working with them and to help understand how they correspond to tags and branches.
### git subcommands well be using
All of these exercises only use 5 git subcommands:
```
git checkout
git log (--oneline, --author, and -S will be useful)
git diff (--stat will be useful)
git show
git status
```
### exercises
1. Check out matzs commit of Ruby from 1998. The commit ID is `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`. Find out how many lines of code Ruby was at that time.
2. Check out the current master branch
3. Look at the history for the file `hash.c`. What was the last commit ID that changed that file?
4. Get a diff of how `hash.c` has changed in the last 20ish years: compare that file on the master branch to the file at commit `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`.
5. Find a recent commit that changed `hash.c` and look at the diff for that commit
6. This repository has a bunch of **tags** for every Ruby release. Get a list of all the tags.
7. Find out how many files changed between tag `v1_8_6_187` and tag `v1_8_6_188`
8. Find a commit (any commit) from 2015 and check it out, look at the files very briefly, then go back to the master branch.
9. Find out what commit the tag `v1_8_6_187` corresponds to.
10. List the directory `.git/refs/tags`. Run `cat .git/refs/tags/v1_8_6_187` to see the contents of one of those files.
11. Find out what commit ID `HEAD` corresponds to right now.
12. Find out how many commits have been made to the `test/` directory
13. Get a diff of `lib/telnet.rb` between the commits `65a5162550f58047974793cdc8067a970b2435c0` and `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71`. How many lines of that file were changed?
14. How many commits were made between Ruby 2.5.1 and 2.5.2 (tags `v2_5_1` and `v2_5_3`)? (this one is a tiny bit tricky, theres more than one step)
15. How many commits were authored by `matz` (Rubys creator)?
16. Whats the most recent commit that included the word `tkutil`?
17. Check out the commit `e51dca2596db9567bd4d698b18b4d300575d3881` and create a new branch that points at that commit.
18. Run `git reflog` to see all the navigating of the repository youve done so far
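If you want to check your approach, heres one possible way to do exercise 1 (there are lots of other ways!):
```
git checkout 3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4
# count lines across all tracked files; the repo at this commit is small
# enough that xargs runs wc once, so the final "total" line is the answer
git ls-files | xargs wc -l | tail -n 1
```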
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2019/08/27/curl-exercises/
[2]: https://wizardzines.com/zines/oh-shit-git/

View File

@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to write zines with simple tools)
[#]: via: (https://jvns.ca/blog/2019/09/01/ways-to-write-zines-without-fancy-tools/)
[#]: author: (Julia Evans https://jvns.ca/)
How to write zines with simple tools
======
People often ask me what tools I use to write my zines ([the answer is here][1]). Answering this question as written has always felt slightly off to me, though, and I couldnt figure out why for a long time.
I finally realized last week that instead of “what tools do you use to write zines?” some people may have actually wanted to know “how can I do this myself?”! And “buy a $500 iPad” is not a terribly useful answer to that question its not how I got started, iPads are kind of a weird fancy way to write zines, and most people dont have them.
So this blog post is about more traditional (and easier to get started with) ways to write zines.
Were going to start out by talking about the mechanics of how to write the zine, and then talk about how to assemble it into a booklet.
### Way 1: Write it on paper
This is how I made my first zine (spying on your programs with strace) which you can see here: <https://jvns.ca/strace-zine-unfolded.pdf>.
Heres an example of a page I drew on paper this morning pretty quickly. It looks kind of bad because I scanned it with my phone, but if you use a real scanner (like I did with the strace PDF above), the scanned version comes out better.
<https://jvns.ca/images/drawing-status-codes.png>
### Way 2: Use a Google doc
The next option is to use a Google doc (or whatever other word processor you prefer). [Heres the Google doc I wrote for the below image][2], and heres what it looks like:
<https://jvns.ca/images/docs-status-codes.png>
The key thing about this Google doc approach is to apply some “less is more”. Its intended to be printed as part of a booklet on **half** a sheet of letter paper, which means everything needs to be twice as big for it to look good.
### Way 3: Use an iPad
This is what I do (use the Notability app on iPad). Im not going to talk about this method much because this post is about using more readily available tools.
<https://jvns.ca/images/ipad-status-codes.png>
### Way 4: Use a single sheet of paper
This is a subset of “Write it on paper” the [Wikibooks page on zine making][3] has a great guide that shows how to write out a tiny zine on 1 piece of paper and then fold it up to make a little booklet. Here are the pictures of the steps from the Wikibooks page:
<https://jvns.ca/images/Zinemaking-folding-8cut-plan.png> <https://jvns.ca/images/Zinemaking-folding-8cut-1.png> <https://jvns.ca/images/Zinemaking-folding-8cut-2.png> <https://jvns.ca/images/Zinemaking-folding-8cut-3.png> <https://jvns.ca/images/Zinemaking-folding-8cut-4.png> <https://jvns.ca/images/Zinemaking-folding-8cut-5.png> <https://jvns.ca/images/Zinemaking-folding-8cut-6.png> <https://jvns.ca/images/Zinemaking-folding-8cut-7.png>
Sumana Harihareswaras [Playing with python][4] zine is a nice example of a zine thats intended to be folded up in that way.
### Way 5: Adobe Illustrator
Ive never used Adobe Illustrator so Im not going to pretend that I know anything about it or put together an example using it, but I hear its a way people do book layout.
### booklets: the photocopier method
So youve written a bunch of pages and want to assemble them into a booklet. One way to do this (and what I did for my first zine about strace!) is the photocopier method. Theres a great guide by Julia Gfrörer in [this tweet][5], which Im going to reproduce here:
![][6]
![][7]
![][8]
![][9]
That explanation is excellent and I dont have anything to add. I did it that way and it worked great.
If you want to buy a print copy of that how-to-make-zines zine from Thruban Press, you can [get it here on Etsy][10].
### booklets: the computer method
If youve made your zine in Google Docs or in another computery way, you probably want a more computery way of assembling the pages into a booklet.
**what I use: pdflatex**
I do this using the `pdfpages` LaTeX extension. This sounds complicated but its not really; you dont need to learn LaTeX or anything. You just need to have pdflatex on your system, which is a `sudo apt install texlive-base` away on Ubuntu. The steps are:
1. Get a PDF with the pages from your zine (pages need to be a multiple of 4)
2. Get the latex file from [this gist][11]
3. Replace `/home/bork/http-zine.pdf` with the path to your PDF and `1-28` with `1-however many pages are in your zine`.
4. run `pdflatex formatted-zine.tex`
5. Tweak the parameters until it looks the way you want. The [documentation for the pdfpages package is here][12]
I like using this relatively complicated method because there are always small tweaks I want to make like “oh, the right margin is too big, crop it a little bit” and the pdfpages package has tons of options that let me make those tweaks.
**other methods**
1. On Linux you can use the `pdfjam` bash script, which is just a wrapper around the pdfpages latex package (see the sketch after this list). This is what I used to do but today I find it simpler to use the pdfpages latex package directly.
2. Theres a program called [Booklet Creator][13] for Mac and Windows that [@mrfb uses][14]. It looks pretty simple to use.
3. If you convert your PDF to a ps file (with `pdf2ps` for instance), `psnup` can do this. I tried `cat file.ps | psbook | psnup -2 > booklet.ps` and it worked, though the resulting PDFs are a little slow to load in my PDF viewer for some reason.
4. there are probably a ton more ways to do this, if you know more let me know
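Heres a rough sketch of the pdfjam route from item 1; pdfjam passes long options straight through to the pdfpages package, so the booklet options work the same way (the file names here are just examples):
```
sudo apt install texlive-extra-utils   # the Ubuntu package that provides pdfjam
pdfjam --booklet true --landscape your-zine.pdf --outfile booklet.pdf
```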
### making zines is easy and low tech
Thats all! I mostly wanted to explain that zines are an easy low tech thing to do and if you think making them sounds fun, you definitely 100% do not need to use any fancy expensive tools to do it, you can literally use some sheets of paper, a Sharpie, a pen, and spend $3 at your local print shop to use the photocopier.
### resources
summary of the resources I linked to:
* Guide to putting together zines with a photocopier by Julia Gfrörer: [this tweet][5], [get it on Etsy][10]
* [Wikibooks page on zine making][3]
* Notes on making zines using Google Docs: [this twitter thread][14]
* [Stolen Sharpie Revolution][15] (the first book I read about making zines). You can also get it on Amazon if you want but its probably better to buy directly from their site.
* [Booklet Creator][13]
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/09/01/ways-to-write-zines-without-fancy-tools/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/b0rk/status/1160171769833185280
[2]: https://docs.google.com/document/d/1byzfXC0h6hNFlWXaV9peJpX-GamJOrJ70x9nu1dZ-m0/edit?usp=sharing
[3]: https://en.m.wikibooks.org/wiki/Zine_Making/Putting_pages_together
[4]: https://www.harihareswara.net/pix/playing-with-python-zine/playing-with-python-zine.pdf
[5]: https://twitter.com/thorazos/status/1158556879485906944
[6]: https://pbs.twimg.com/media/EBQFUC0X4AAPTU1?format=jpg&name=small
[7]: https://pbs.twimg.com/media/EBQFUC0XsAEBhHf?format=jpg&name=small
[8]: https://pbs.twimg.com/media/EBQFUC1XUAAKDIB?format=jpg&name=small
[9]: https://pbs.twimg.com/media/EBQFUDRX4AMkIAr?format=jpg&name=small
[10]: https://www.etsy.com/thorazos/listing/693692176/thuban-press-guide-to-analog-self?utm_source=Copy&utm_medium=ListingManager&utm_campaign=Share&utm_term=so.lmsm&share_time=1565113962419
[11]: https://gist.github.com/jvns/b3de1d658e2b44aebb485c35fb1a7a0f
[12]: http://texdoc.net/texmf-dist/doc/latex/pdfpages/pdfpages.pdf
[13]: https://www.bookletcreator.com/
[14]: https://twitter.com/mrfb/status/1159478532545888258
[15]: http://www.stolensharpierevolution.org/

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to put an HTML page on the internet)
[#]: via: (https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/)
[#]: author: (Julia Evans https://jvns.ca/)
How to put an HTML page on the internet
======
One thing I love about the internet is that its SO EASY to put static HTML websites on the internet. Someone asked me today how to do it, so I thought Id write down how really quickly!
### just an HTML page
All of my sites are just static HTML and CSS. My web design skills are relatively minimal (<https://wizardzines.com> is the most complicated site Ive developed on my own), so keeping all my internet sites relatively simple means that I have some hope of being able to make changes / fix things without spending a billion hours on it.
So were going to take as minimal of an approach as possible in this blog post just one HTML page.
### the HTML page
The website were going to put on the internet is just one file, called `index.html`. You can find it at <https://github.com/jvns/website-example>, which is a Github repository with exactly one file in it.
The HTML file has some CSS in it to make it look a little less boring, which is partly copied from <https://example.com>.
### how to put the HTML page on the internet
Here are the steps:
1. sign up for a [Neocities][1] account
2. copy the index.html into the index.html in your neocities site
3. done
The index.html page above is on the internet at [julia-example-website.neocities.com][2]; if you view source youll see that its the same HTML as in the github repo.
I think this is probably the simplest way to put an HTML page on the internet (and its a throwback to Geocities, which is how I made my first website in 2003) :). I also like that Neocities (like [glitch][3], which I also love) is about experimentation and learning and having fun.
### other options
This is definitely not the only easy way Github pages and Gitlab pages and Netlify will all automatically publish a site when you push to a Git repository, and theyre all very easy to use (just connect them to your github repository and youre done). I personally use the Git repository approach because not having things in Git makes me nervous I like to know what changes to my website Im actually pushing. But I think if you just want to put an HTML site on the internet for the first time and play around with HTML/CSS, Neocities is a really nice way to do it.
If you want to actually use your website for a Real Thing and not just to play around you probably want to buy a domain and link it to your website so that you can change hosting providers in the future, but that is a bit less simple.
### this is a good possible jumping off point for learning HTML
If you are a person who is comfortable editing files in a Git repository but wants to practice HTML/CSS, I think this is a fun way to put a website on the internet and play around! I really like the simplicity of it theres literally just one file, so theres no fancy extra magic to get in the way of understanding whats going on.
There are also a bunch of ways to complicate/extend this, like this blog is actually generated with [Hugo][4] which generates a bunch of HTML files which then go on the internet, but its always nice to start with the basics.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://neocities.org/
[2]: https://julia-example-website.neocities.org/
[3]: https://glitch.com
[4]: https://gohugo.io/

View File

@ -0,0 +1,197 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (New zine: HTTP: Learn your browser's language!)
[#]: via: (https://jvns.ca/blog/2019/09/12/new-zine-on-http/)
[#]: author: (Julia Evans https://jvns.ca/)
New zine: HTTP: Learn your browser's language!
======
Hello! Ive released a new zine! Its called “HTTP: Learn your browsers language!”
You can get it for $12 at <https://gum.co/http-zine>. If you buy it, youll get a PDF that you can either read on your computer or print out.
Heres the cover and table of contents:
[![][1]][2] <https://jvns.ca/images/http-zine-toc.png>
### why http?
I got the idea for this zine from talking to [Marco Rogers][3] he mentioned that he thought that new web developers / mobile developers would really benefit from understanding the fundamentals of HTTP better, I thought “OOH I LOVE TALKING ABOUT HTTP”, wrote a few pages about HTTP, saw they were helping people, and decided to write a whole zine about HTTP.
HTTP is important to understand because it runs the entire web if you understand how HTTP requests and responses work, then it makes it WAY EASIER to debug why your web application isnt working properly. Caching, cookies, and a lot of web security are implemented using HTTP headers, so if you dont understand HTTP headers those things seem kind of like impenetrable magic. But actually the HTTP protocol is fundamentally pretty simple there are a lot of complicated details but the basics are pretty easy to understand.
So the goal of this zine is to teach you the basics so you can easily look up and understand the details when you need them.
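If you want a taste of that right away, curl can show you the headers for any site (a minimal sketch; the URL is just an example):
```
curl -I https://example.com    # -I fetches only the response headers
```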
### what it looks like printed out
All of my zines are best printed out (though you get a PDF you can read on your computer too!), so here are a couple of pictures of what it looks like when printed. I always ask my illustrator to make both a black and white version and a colour version of the cover so that it looks great when printed on a black and white printer.
[![][4]][2] <https://jvns.ca/images/same-origin-policy.jpeg>
(if you click on that “same origin policy” image, you can make it bigger)
The zine comes with 4 print PDFs in addition to a PDF you can just read on your computer/phone:
* letter / colour
* letter / b&w
* a4 / colour
* a4 / b&w
### zines for your team
You can also buy this zine for your team members at work to help them learn HTTP!
Ive been trying to get the pricing right for this for a while I used to do it based on size of company, but that didnt seem quite right because sometimes people would want to buy the zine for a small team at a big company. So Ive switched to pricing based on the number of copies you want to distribute at your company.
Heres the link: [zines for your team!][5].
### the tweets
When I started writing zines, I would just sit down, write down the things I thought were important, and be done with it.
In the last year and a half or so Ive taken a different approach instead of writing everything and then releasing it, I write a page at a time, post the page to Twitter, and then improve it and decide what page to write next based on the questions/comments I get on Twitter. If someone replies to the tweet and asks a question that shows that what I wrote is unclear, I can improve it! (I love getting replies on twitter asking clarifying questions!)
Here are all the initial drafts of the pages I wrote and posted on twitter, in chronological order. Some of the pages didnt make it into the zine at all, and I needed to do a lot of editing at the end to figure out the right order and make them all work coherently together in a zine instead of being a bunch of independent tweets.
* Jul 1: [http status codes][6]
* Jul 2: [anatomy of a HTTP response][7]
* Jul 2: [POST requests][8]
* Jul 2: [an example POST request][9]
* Jul 28: [the same origin policy][10]
* Jul 28: [whats HTTP?][11]
* Jul 30: [the most important HTTP request headers][12]
* Jun 30: [anatomy of a HTTP request][13]
* Aug 4: [content delivery networks][14]
* Aug 6: [caching headers][15]
* Aug 6: [how cookies work][16]
* Aug 7: [redirects][17]
* Aug 8: [45 seconds on the Accept-Language HTTP header][18]
* Aug 9: [HTTPS: HTTP + security][19]
* Aug 9: [today in 45 second video experiments: the Range header][20]
* Aug 9: [some HTTP exercises to try][21]
* Aug 10: [some security headers][22]
* Aug 12: [using HTTP APIs][23]
* Aug 13: [whats with those headers that start with x-?][24]
* Aug 13: [important HTTP response headers][25]
* Aug 14: [HTTP request methods (part 1)][26]
* Aug 14: [HTTP request methods (part 2)][27]
* Aug 15: [how URLs work][28]
* Aug 16: [CORS][29]
* Aug 19: [why the same origin policy matters][30]
* Aug 21: [HTTP headers][31]
* Aug 24: [how to learn more about HTTP][32]
* Aug 25: [HTTP/2][33]
* Aug 27: [certificates][34]
Writing zines one tweet at a time has been really fun. I think it improves the quality a lot, because I get a ton of feedback along the way that I can use to make the zine better. There are also some experimental 45 second tiny videos in that list, which are definitely not part of the zine, but which were fun to make and which I might expand on in the future.
### examplecat.com
One tiny easter egg in the zine: I have a lot of examples of HTTP requests, and I wasnt sure for a long time what domain I should use for the examples. I used example.com a bunch, and google.com and twitter.com sometimes, but none of those felt quite right.
A couple of days before publishing the zine I finally had an epiphany my example on the cover was requesting a picture of a cat, so I registered <https://examplecat.com> which just has a single picture of a cat. It also has an ASCII cat if youre browsing in your terminal.
```
$ curl https://examplecat.com/cat.txt -i
HTTP/2 200
accept-ranges: bytes
cache-control: public, max-age=0, must-revalidate
content-length: 33
content-type: text/plain; charset=UTF-8
date: Thu, 12 Sep 2019 16:48:16 GMT
etag: "ac5affa59f554a1440043537ae973790-ssl"
strict-transport-security: max-age=31536000
age: 5
server: Netlify
x-nf-request-id: c5060abc-0399-4b44-94bf-c481e22c2b50-1772748
\ /\
) ( ')
( / )
\(__)|
```
### more zines at wizardzines.com
If youre interested in the idea of programming zines and havent seen my zines before, I have a bunch more at <https://wizardzines.com>. There are 6 free zines there:
* [so you want to be a wizard][35]
* [lets learn tcpdump!][36]
* [spying on your programs with strace][37]
* [networking! ACK!][38]
* [linux debugging tools youll love][39]
* [profiling and tracing with perf][40]
### next zine: not sure yet!
Some things Im considering for the next zine:
* debugging skills (I started writing a bunch of pages about debugging but switched gears to the HTTP zine because I got really excited about that. but debugging is my favourite thing so Id like to get this done at some point)
* gdb (a short zine in the spirit of [lets learn tcpdump][36])
* relational databases (whats up with transactions?)
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/09/12/new-zine-on-http/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/images/http-zine-cover.png
[2]: https://gum.co/http-zine
[3]: https://twitter.com/polotek
[4]: https://jvns.ca/images/http-zine-cover.jpeg
[5]: https://wizardzines.com/zines-team/
[6]: https://twitter.com/b0rk/status/1145824140462608387
[7]: https://twitter.com/b0rk/status/1145896193077256197
[8]: https://twitter.com/b0rk/status/1146054159214567424
[9]: https://twitter.com/b0rk/status/1146065212560179202
[10]: https://twitter.com/b0rk/status/1155493682885341184
[11]: https://twitter.com/b0rk/status/1155318552129396736
[12]: https://twitter.com/b0rk/status/1156048630220017665
[13]: https://twitter.com/b0rk/status/1145362860136177664
[14]: https://twitter.com/b0rk/status/1158012032651862017
[15]: https://twitter.com/b0rk/status/1158726129508868097
[16]: https://twitter.com/b0rk/status/1158848054142873603
[17]: https://twitter.com/b0rk/status/1159163613938167808
[18]: https://twitter.com/b0rk/status/1159492669384658944
[19]: https://twitter.com/b0rk/status/1159812119099060224
[20]: https://twitter.com/b0rk/status/1159829608595804160
[21]: https://twitter.com/b0rk/status/1159839824594915335
[22]: https://twitter.com/b0rk/status/1160185182323970050
[23]: https://twitter.com/b0rk/status/1160933788949655552
[24]: https://twitter.com/b0rk/status/1161283690925834241
[25]: https://twitter.com/b0rk/status/1161262574031265793
[26]: https://twitter.com/b0rk/status/1161679906415218690
[27]: https://twitter.com/b0rk/status/1161680137865367553
[28]: https://twitter.com/b0rk/status/1161997141876903936
[29]: https://twitter.com/b0rk/status/1162392625057583104
[30]: https://twitter.com/b0rk/status/1163460967067541504
[31]: https://twitter.com/b0rk/status/1164181027469832196
[32]: https://twitter.com/b0rk/status/1165277002791829510
[33]: https://twitter.com/b0rk/status/1165623594917007362
[34]: https://twitter.com/b0rk/status/1166466933912494081
[35]: https://wizardzines.com/zines/wizard/
[36]: https://wizardzines.com/zines/tcpdump/
[37]: https://wizardzines.com/zines/strace/
[38]: https://wizardzines.com/zines/networking/
[39]: https://wizardzines.com/zines/debugging/
[40]: https://wizardzines.com/zines/perf/

View File

@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Taking a year to explain computer things)
[#]: via: (https://jvns.ca/blog/2019/09/13/a-year-explaining-computer-things/)
[#]: author: (Julia Evans https://jvns.ca/)
Taking a year to explain computer things
======
Ive been working on explaining computer things Im learning on this blog for 6 years. I wrote one of my first posts, [what does a shell even do?][1] on Sept 30, 2013. Since then, Ive written 11 zines, 370,000 words on this blog, and given 20 or so talks. So it seems like I like explaining things a lot.
### tl;dr: Im going to work on explaining computer things for a year
Heres the exciting news: I left my job a month ago and my plan is to spend the next year working on explaining computer things!
As for why Im doing this I was talking through some reasons with my friend Mat last night and he said “well, sometimes there are things you just feel compelled to do”. I think thats all there is to it :)
### what does “explain computer things” mean?
Im planning to:
1. write some more zines (maybe I can write 10 zines in a year? well see! I want to tackle both general-interest and slightly more niche topics, well see what happens).
2. work on some more interactive ways to learn things. I learn things best by trying things out and breaking them, so I want to see if I can facilitate that a little bit for other people. I started a project around this in May which has been on the backburner for a bit but which Im excited about. Hopefully Ill release it soon and then you can try it out and tell me what you think!
I say “a year” because I think I have at least a years worth of ideas and I cant predict how Ill feel after doing this for a year.
### how: run a business
I started a corporation almost exactly a year ago, and Im planning to keep running my explaining-things efforts as a business. This business has been making more than I made in my first programming job (that is, definitely enough money to live on!), which has been really surprising and great (thank you!).
some parameters of the business:
* Im not planning to hire employees or anything, itll just be me and some (awesome) freelancers. The biggest change I have in mind is that Im hoping to find a freelance editor to help me with editing.
* I also dont have any specific plans for world domination or to work 80-hour weeks. Im just going to make zines & things that explain computer concepts and sell them on the internet, like Ive been doing.
* No commissions or consulting work, just building ideas I have
Its been pretty interesting to learn more about running a small business and so far I like it more than I thought I would. (except for taxes, which I like exactly as much as I thought I would)
### thats all!
Im excited to keep making explanations of computer things and to have more time to do it. This blog might change a bit away from “heres what Im learning at work these days” and towards “here are attempts at explaining things that I mostly already know”. Itll be different! Well see how it goes!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/09/13/a-year-explaining-computer-things/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2013/09/30/hacker-school-day-2-what-does-a-shell-even-do/