Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-01-25 23:11:02 +08:00)
Commit 60aeda8d9b (parent c7dc388e41): clean up published articles, reclaim stalled articles
@anonymone @acyanbird @name1e5s @WangYueScream
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Serverless on Kubernetes, diverse automation, and more industry trends)
[#]: via: (https://opensource.com/article/19/8/serverless-kubernetes-and-more)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Serverless on Kubernetes, diverse automation, and more industry trends
======

A weekly look at open source community and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [10 tips for creating robust serverless components][2]

> There are some repeated patterns that we have seen after creating 20+ serverless components. We recommend that you browse through the [available component repos on GitHub][3] and check which one is close to what you’re building. Just open up the repo, check the code, and see how everything fits together.
>
> All component code is open source, and we are striving to keep it clean, simple, and easy to follow. After you look around, you’ll be able to understand how our core API works, how we interact with external APIs, and how we are reusing other components.

**The impact**: Serverless Inc is striving to take what is probably the most hyped architecture around, still early in its hype cycle, and make it usable and practical today. For serverless to truly go mainstream, producing something useful has to be as easy for a developer as "Hello world!", and these components are a step in that direction.

## [Kubernetes workloads in the serverless era: Architecture, platforms, and trends][4]

> There are many fascinating elements of the Kubernetes architecture: the containers providing common packaging, runtime, and resource isolation within its foundation; the simple control loop mechanism that monitors the actual state of components and reconciles it with the desired state; the custom resource definitions. But the true enabler for extending Kubernetes to support diverse workloads is the concept of the pod.
>
> A pod provides two sets of guarantees. The deployment guarantee ensures that the containers of a pod are always placed on the same node. This behavior has some useful properties, such as allowing containers to communicate synchronously or asynchronously over localhost, over inter-process communication ([IPC][5]), or using the local file system.

**The impact**: If developer adoption of serverless architectures is largely driven by how easily developers can be productive working that way, business adoption will be driven by the ability to place this trend in its operational and business context. IT decision-makers need to see a holistic picture of how serverless adds value alongside their existing investments, and operators and architects need to envision how they'll keep it all up and running.
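The localhost guarantee described in the quote above is what makes sidecar patterns work: because a pod's containers share one network namespace, one container can always reach another on 127.0.0.1. A minimal Python sketch of that interaction (the two "containers" are modeled as threads, and the port and payload are illustrative, not from the article):

```python
import socket
import threading

def sidecar(server_sock: socket.socket) -> None:
    """Stands in for a sidecar container serving on the pod's shared localhost."""
    conn, _ = server_sock.accept()
    conn.sendall(b"metrics: ok")
    conn.close()

# Shared network namespace: both "containers" see the same 127.0.0.1.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=sidecar, args=(srv,))
t.start()

# The "app" container connects over localhost -- safe to assume only because
# the pod's deployment guarantee co-locates both containers on the same node.
cli = socket.create_connection(("127.0.0.1", port))
reply = cli.recv(1024).decode()
cli.close()
t.join()
srv.close()
print(reply)  # metrics: ok
```

In a real pod the same assumption also covers shared volumes and IPC; localhost is simply the most common of the three channels the quote lists.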

## [How developers can survive the Last Mile with CodeReady Workspaces][6]

> Inside each cloud provider, a host of tools can address CI/CD, testing, monitoring, backup, and recovery problems. Outside of those providers, the cloud-native community has been hard at work cranking out new tooling, from [Prometheus][7], [Knative][8], [Envoy][9], and [Fluentd][10] to [Kubernetes][11] itself and the expanding ecosystem of Kubernetes Operators.
>
> Within all of those projects, cloud-based services, and desktop utilities there is one major gap, however: the last mile of software development is the IDE. And despite the wealth of development projects inside the community and the Cloud Native Computing Foundation, it is indeed the Eclipse Foundation, as mentioned above, that has taken on this problem with a focus on the new cloud development landscape.

**The impact**: Increasingly complex development workflows and deployment patterns call for increasingly intelligent IDEs. While I'm sure it is possible to push a button and redeploy your microservices to a Kubernetes cluster from Emacs (or vi, relax), Eclipse Che (and CodeReady Workspaces) are being built from the ground up with these types of cloud-native workflows in mind.

## [Automate security in increasingly complex hybrid environments][12]

> According to the [Information Security Forum][13]’s [Global Security Threat Outlook for 2019][14], one of the biggest IT trends to watch this year is the increasing sophistication of cybercrime and ransomware. And even as the volume of ransomware attacks drops, cybercriminals are finding new, more potent ways to be disruptive. An [article in TechRepublic][15] points to cryptojacking malware, which lets someone hijack another's hardware without permission to mine cryptocurrency, as a growing threat to enterprise networks.
>
> To more effectively mitigate these risks, organizations could invest in automation as a component of their security plans. That’s because it takes time to investigate and resolve issues, in addition to applying controlled remediations across bare metal, virtualized systems, and cloud environments -- both private and public -- all while documenting changes.

**The impact**: This one is really about our ability to trust that the network service providers we rely upon to keep our phones and smart TVs full of stutter-free streaming HD content have what they need to protect the infrastructure that makes it all possible. I for one am rooting for you!
## [AnsibleFest 2019 session catalog][16]

> 85 Ansible automation sessions over 3 days in Atlanta, Georgia

**The impact**: What struck me is the range of things that can be automated with Ansible. Windows? Check. Multicloud? Check. Security? Check. The real question after those three days are over will be: Is there anything in IT that can't be automated with Ansible? Seriously, I'm asking; let me know.

_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/serverless-kubernetes-and-more

Author: [Tim Hildred][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://serverless.com/blog/10-tips-creating-robust-serverless-components/
[3]: https://github.com/serverless-components/
[4]: https://www.infoq.com/articles/kubernetes-workloads-serverless-era/
[5]: https://opensource.com/article/19/4/interprocess-communication-linux-networking
[6]: https://thenewstack.io/how-developers-can-survive-the-last-mile-with-codeready-workspaces/
[7]: https://prometheus.io/
[8]: https://knative.dev/
[9]: https://www.envoyproxy.io/
[10]: https://www.fluentd.org/
[11]: https://kubernetes.io/
[12]: https://www.redhat.com/en/blog/automate-security-increasingly-complex-hybrid-environments
[13]: https://www.securityforum.org/
[14]: https://www.prnewswire.com/news-releases/information-security-forum-forecasts-2019-global-security-threat-outlook-300757408.html
[15]: https://www.techrepublic.com/article/top-4-security-threats-businesses-should-expect-in-2019/
[16]: https://agenda.fest.ansible.com/sessions

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Semiconductor startup Cerebras Systems launches massive AI chip)
[#]: via: (https://www.networkworld.com/article/3433617/semiconductor-startup-cerebras-systems-launches-massive-ai-chip.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Semiconductor startup Cerebras Systems launches massive AI chip
======

![Cerebras][1]

There are a host of different AI-related solutions for the data center, ranging from add-in cards to dedicated servers, such as the Nvidia DGX-2. But a startup called Cerebras Systems has its own server offering that relies on a single massive processor rather than a slew of small ones working in parallel.

Cerebras has taken the wraps off its Wafer Scale Engine (WSE), an AI chip that measures 8.46x8.46 inches, making it almost the size of an iPad and more than 50 times larger than a CPU or GPU. A typical CPU or GPU is about the size of a postage stamp.

[Now see how AI can boost data-center availability and efficiency.][2]

Cerebras won’t sell the chips to ODMs due to the challenges of building and cooling such a massive chip. Instead, the WSE will come as part of a complete server to be installed in data centers, which the company says will start shipping in October.

The logic behind the design is that AI requires huge amounts of data just to run a test, and current technology, even GPUs, is neither fast nor powerful enough. So Cerebras supersized the chip.

The numbers are just incredible. The company’s WSE chip has 1.2 trillion transistors, 400,000 computing cores, and 18 gigabytes of memory. A typical PC processor has about 2 billion transistors, four to six cores, and a few megabytes of cache memory. Even a high-end GPU has 21 billion transistors and a few thousand cores.

The 400,000 cores on the WSE are connected via the Swarm communication fabric in a 2D mesh with 100 Pb/s of bandwidth. The WSE has 18 GB of on-chip memory, all accessible within a single clock cycle, and provides 9 PB/s of memory bandwidth. This is 3,000x more capacity and 10,000x greater bandwidth than the best Nvidia has to offer. More to the point, it eliminates the need to move data in and out of memory to and from the CPU.
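Working backward from those multipliers gives the baseline GPU figures they imply (a derived back-of-envelope check, not numbers from the article):

```python
# WSE claims quoted above: 18 GB on-chip memory (3,000x a GPU's capacity)
# and 9 PB/s memory bandwidth (10,000x a GPU's bandwidth).
wse_memory_gb = 18
wse_bandwidth_pbs = 9

# Implied baseline GPU on-chip memory, in MB (using 1 GB = 1024 MB)
gpu_onchip_mb = wse_memory_gb * 1024 / 3000
# Implied baseline GPU memory bandwidth, in GB/s (1 PB/s = 1e6 GB/s)
gpu_bandwidth_gbs = wse_bandwidth_pbs * 1e6 / 10_000

print(f"implied GPU on-chip memory: {gpu_onchip_mb:.1f} MB")
print(f"implied GPU memory bandwidth: {gpu_bandwidth_gbs:.0f} GB/s")
```

Roughly 6 MB of on-chip SRAM and about 900 GB/s of memory bandwidth are plausible figures for a 2019-era high-end GPU, so the two multipliers are at least internally consistent.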

“A vast array of programmable cores provides cluster-scale compute on a single chip. High-speed memory close to each core ensures that cores are always occupied doing calculations. And by connecting everything on-die, communication is many thousands of times faster than what is possible with off-chip technologies like InfiniBand,” the company said in a [blog post][3] announcing the processor.

The cores are called Sparse Linear Algebra Cores, or SLA cores. They are optimized for the sparse linear algebra that is fundamental to neural network calculation and are designed specifically for AI work. They are small and fast, contain no caches, and eliminate other features and overheads that are needed in general-purpose cores but play no useful role in a deep learning processor.

The chip is the brainchild of Andrew Feldman, who created the SeaMicro high-density, Atom-based server a decade ago as an alternative to overpowered Xeons for simple tasks such as file and print or serving LAMP stacks. Feldman is a character, one of the more interesting people [I’ve interviewed][4]. He definitely thinks outside the box.

Feldman sold SeaMicro to AMD for $334 million in 2012, which turned out to be a colossal waste of money on AMD’s part, as the product soon disappeared from the market. Since then, he has raised $100 million in VC money.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3433617/semiconductor-startup-cerebras-systems-launches-massive-ai-chip.html

Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/cerebras-wafer-scale-engine-100809084-large.jpg
[2]: https://www.networkworld.com/article/3274654/ai-boosts-data-center-availability-efficiency.html
[3]: https://www.cerebras.net/hello-world/
[4]: https://www.serverwatch.com/news/article.php/3887471/SeaMicro-Launches-an-AtomPowered-Cloud-Computing-Server.htm
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VMware spends $4.8B to grab Pivotal, Carbon Black to secure, develop integrated cloud world)
[#]: via: (https://www.networkworld.com/article/3433916/vmware-spends-48b-to-grab-pivotal-carbon-black-to-secure-develop-integrated-cloud-world.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

VMware spends $4.8B to grab Pivotal, Carbon Black to secure, develop integrated cloud world
======

VMware will spend $2.7 billion on cloud-application developer Pivotal and $2.1 billion for security vendor Carbon Black -- details at next week's VMworld user conference.

![Bigstock][1]

All things cloud are major topics of conversation at the VMworld user conference next week, ratcheted up a notch by VMware's $4.8 billion plans to acquire cloud development firm Pivotal and security provider Carbon Black.

VMware said during its quarterly financial call this week that it would spend about $2.7 billion on Pivotal and its Cloud Foundry hybrid cloud development technology, and about $2.1 billion for the security technology of Carbon Black, which includes its Predictive Security Cloud and other endpoint-security software. Both amounts represent the [enterprise value][2] of the deals; the actual purchase prices will vary, experts said.

**[ Check out [What is hybrid cloud computing][3] and learn [what you need to know about multi-cloud][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**

VMware has deep relationships with both companies. Carbon Black technology is part of [VMware’s AppDefense][6] endpoint security. Pivotal's relationship is deeper still: VMware and Dell, VMware’s parent company, [spun out Pivotal][7] in 2013.

“These acquisitions address two critical technology priorities of all businesses today -- building modern, enterprise-grade applications and protecting enterprise workloads and clients. With these actions we meaningfully accelerate our subscription and SaaS offerings and expand our ability to enable our customers’ digital transformation,” said VMware CEO Pat Gelsinger on the call.

With regard to the Pivotal acquisition, Gelsinger said the time was right to own the whole compute stack. “We will now be uniquely positioned to help customers build, run and manage their cloud environment, and customers can go one place to get all of this technology,” Gelsinger said. “We embed the technology in our core VMware platform, and we will explain more about that at VMworld next week.”

On the Carbon Black buy, Gelsinger said he expects the technology to be integrated across VMware’s product families, such as NSX networking software and vSphere, VMware's flagship virtualization platform.

“Security is broken and fundamentally customers want a different answer in the security space. We think this move will be an opportunity for major disruption.”

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**

Patrick Morley, president and CEO of Carbon Black, [wrote of the deal][9]: “VMware has a vision to create a modern security platform for any app, running on any cloud, delivered to any device -- essentially, to build security into the fabric of the compute stack. Carbon Black’s cloud-native platform, our ability to see and stop attackers by leveraging the power of our rich data and behavioral analytics, and our deep cybersecurity expertise are all truly differentiating.”

Both transactions are expected to close in the second half of VMware’s fiscal year, which ends Jan. 31.

VMware has been on a massive buying spree this year that has included:

  * Avi Networks, for multi-cloud application delivery services.
  * Bitfusion, for hardware virtualization.
  * Uhana, a company employing deep learning and real-time AI in carrier networks and applications to automate network operations and optimize application experience.
  * Veriflow, for network verification, assurance, and troubleshooting.
  * Heptio, for its Kubernetes technology.

Kubernetes integration will be a big topic at VMworld, Gelsinger hinted. “You will hear very specific announcements about how Heptio will be used. [And] we will be announcing major expansions of our Kubernetes and modern apps portfolio and help Pivotal complete that strategy. Together with Heptio and Pivotal, VMware will offer a comprehensive Kubernetes-based portfolio to build, run and manage modern applications on any cloud,” Gelsinger said.

“VMware has increased its Kubernetes-related investments over the past year with the acquisition of Heptio to become a top-three contributor to Kubernetes, and at VMworld we will describe a major R&D effort to evolve VMware vSphere into a native Kubernetes platform for VMs and containers.”

Other updates about where VMware vSphere and NSX-T are headed will also be hot topics.

Introduced in 2017, NSX-T Data Center software is targeted at organizations looking to support multivendor cloud-native applications, [bare-metal][10] workloads, [hypervisor][11] environments, and the growing hybrid- and multi-cloud worlds. In February, the [company anointed NSX-T][12] as its go-to platform for future software-defined cloud developments.

VMware is battling Cisco's Application Centric Infrastructure, Juniper's Contrail system, and other platforms from vendors including Pluribus, Arista, and Big Switch. How NSX-T evolves will be key to how well VMware competes.

The most recent news around vSphere was that new features of its Hybrid Cloud Extension application-mobility software enable non-vSphere workloads, as well as more on-premises application workloads, to migrate to a variety of cloud services. Introduced in 2017, [VMware HCX][13] lets vSphere customers tie on-premises systems and applications to cloud services.

The HCX announcement was part of VMware’s continued evolution into cloud technologies. In July, the company teamed with [Google][14] to natively support VMware workloads in its Google Cloud service, giving customers more options for deploying enterprise applications.

Further news about that relationship is likely at VMworld as well.

VMware also has a hybrid cloud partnership with [Microsoft’s Azure cloud service][15]. That package, called Azure VMware Solutions, is built on VMware Cloud Foundation, which is a package of vSphere with NSX network virtualization and vSAN software-defined storage-area-network software. The company is expected to share updates on that platform as well.

Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3433916/vmware-spends-48b-to-grab-pivotal-carbon-black-to-secure-develop-integrated-cloud-world.html

Author: [Michael Cooney][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/hybridcloud-100808516-large.jpg
[2]: http://valuationacademy.com/what-is-the-enterprise-value-ev/
[3]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[4]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.networkworld.com/article/3359242/vmware-firewall-takes-aim-at-defending-apps-in-data-center-cloud.html
[7]: https://www.networkworld.com/article/2225739/what-is-pivotal--emc-and-vmware-want-it-to-be-your-platform-for-building-big-data-apps.html
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[9]: https://www.carbonblack.com/2019/08/22/the-next-chapter-in-our-story-vmware-carbon-black/
[10]: https://www.networkworld.com/article/3261113/why-a-bare-metal-cloud-provider-might-be-just-what-you-need.html?nsdr=true
[11]: https://www.networkworld.com/article/3243262/what-is-a-hypervisor.html?nsdr=true
[12]: https://www.networkworld.com/article/3346017/vmware-preps-milestone-nsx-release-for-enterprise-cloud-push.html
[13]: https://docs.vmware.com/en/VMware-HCX/services/rn/VMware-HCX-Release-Notes.html
[14]: https://www.networkworld.com/article/3428497/google-cloud-to-offer-vmware-data-center-tools-natively.html
[15]: https://www.networkworld.com/article/3113394/vmware-cloud-foundation-integrates-virtual-compute-network-and-storage-systems.html
[16]: https://www.facebook.com/NetworkWorld/
[17]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Implementing edge computing, DevOps like car racing, and more industry trends)
[#]: via: (https://opensource.com/article/19/8/implementing-edge-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Implementing edge computing, DevOps like car racing, and more industry trends
======

A weekly look at open source community and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [How to implement edge computing][2]

> "When you have hundreds or thousands of locations, it's a challenge to manage all of that compute as you continue to scale it out at the edge," said Coufal. "For organizations heavily involved with IoT, there are cases where these enterprises can find themselves with millions of different endpoints to manage. This is where you need to automate as much as you can operationally so there is less need for humans to manage the day-to-day activities."

**The impact:** We may think there is a lot of stuff hooked up to the internet already, but edge-connected Internet of Things (IoT) devices are proving we ain't seen nothing yet. A heuristic that breaks the potential billions of endpoints into three categories (at least in a business context) helps us think about what this IoT might actually do for us, and who should be responsible for what.

## [Can a composable hypervisor re-imagine virtualization?][3]

> Van de Ven explained that in talking with customers he has seen five areas emerge as needing re-imagining in order to support evolving virtualization plans. These include a platform that is lightweight; one that is fast; something that can support high-density workloads; one that has quick startup; and one that is secure. However, the degree of each of those needs remains in flux.
>
> Van de Ven explained that a [composable][4] hypervisor was one way to deal with these varying needs, pointing to Intel’s work with the [recently launched][5] rust-vmm hypervisor.
>
> That [open source project][6] provides a set of common hypervisor components, developed by contributing vendors, that can provide a more secure, higher-performance container technology designed for [cloud native][7] environments.

**The impact**: The container boom has been perhaps unprecedented in both the rapidity of its onset and the breadth of its impact. You'd be forgiven for thinking that all the innovation has moved on from virtualization; not so! For one thing, most of those containers are running in virtual machines, and there are still places where virtual machines outshine containers (particularly where security is concerned). Thankfully, there are projects pushing the state of hypervisors and virtualization forward.

## [How DevOps is like auto racing][8]

> To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome.

**The impact**: Sometimes the best way to understand the impact of an idea is to re-imagine the stakes. Here we recontextualize the moving and configuring of bits as the direction of explosive power and get a better understanding of why process, roles, and responsibilities are important contributors to success.

## [CNCF archives the rkt project][9]

> All open source projects are subject to a lifecycle and can become less active for a number of reasons. In rkt’s case, despite its initial popularity following its creation in December 2014, and its contribution to CNCF in March 2017, end-user adoption has severely declined. The CNCF is also [home][10] to other container runtime projects, [containerd][11] and [CRI-O][12], and while the rkt project played an important part in the early days of cloud-native adoption, in recent times user adoption has trended away from rkt toward these other projects. Furthermore, [project activity][13] and the number of contributors have steadily declined over time, along with unpatched CVEs.

**The impact**: Betamax and LaserDisc pushed cassettes and DVDs to be better, and so it is with rkt. The project showed there is more than one way to run containers at a time when it looked like there was only one. rkt galvanized a push toward standard interfaces in the container space, and for that, we are eternally grateful.

_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/implementing-edge-more-industry-trends

Author: [Tim Hildred][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.techrepublic.com/article/how-to-implement-edge-computing/
[3]: https://www.sdxcentral.com/articles/news/can-a-composable-hypervisor-re-imagine-virtualization/2019/08/
[4]: https://www.sdxcentral.com/data-center/composable/definitions/what-is-composable-infrastructure-definition/ (What is Composable Infrastructure? Definition)
[5]: https://www.sdxcentral.com/articles/news/intel-pushes-open-source-hypervisor-with-cloud-giants/2019/05/
[6]: https://github.com/rust-vmm
[7]: https://www.sdxcentral.com/cloud-native/ (Cloud Native)
[8]: https://developers.redhat.com/blog/2019/08/22/how-devops-is-like-auto-racing/
[9]: https://www.cncf.io/blog/2019/08/16/cncf-archives-the-rkt-project/
[10]: https://landscape.cncf.io/category=container-runtime&format=card-mode
[11]: https://containerd.io/
[12]: https://cri-o.io/
[13]: https://rkt.devstats.cncf.io
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mellanox introduces SmartNICs to eliminate network load on CPUs)
[#]: via: (https://www.networkworld.com/article/3433924/mellanox-introduces-smartnics-to-eliminate-network-load-on-cpus.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Mellanox introduces SmartNICs to eliminate network load on CPUs
======

Mellanox unveiled two processors designed to offload network workloads from the CPU -- ConnectX-6 Dx and BlueField-2 -- freeing the CPU to do its processing job.

![Natali Mis / Getty Images][1]

If you were wondering what prompted Nvidia to [shell out nearly $7 billion for Mellanox Technologies][2], here’s your answer: The networking hardware provider has introduced a pair of processors for offloading network workloads from the CPU.

ConnectX-6 Dx and BlueField-2 are cloud SmartNIC and I/O Processing Unit (IPU) solutions, respectively, designed to take the work of network processing off the CPU, freeing it to do its processing job.

**[ Learn more about SDN: Find out [where SDN is going][3] and learn the [difference between SDN and NFV][4]. | Get regularly scheduled insights: [Sign up for Network World newsletters][5]. ]**

The company promises up to 200Gbit/sec throughput with ConnectX and BlueField. It said the market for 25Gbit and faster Ethernet was 31% of the total market last year and will grow to 61% next year. With the internet of things (IoT) and artificial intelligence (AI), a lot of data needs to be moved around, and Ethernet needs to get a lot faster.

“The whole vision of [software-defined networking] and NVMe-over-Fabrics was a nice vision, but as soon as people tried it in the data center, performance ground to a halt because CPUs couldn’t handle all that data,” said Kevin Deierling, vice president of marketing for Mellanox. “As you do more complex networking, the CPUs are being asked to do all that work on top of running the apps and the hypervisor. It puts a big burden on CPUs if you don’t unload that workload.”

CPUs are getting larger, with AMD introducing a 64-core Epyc processor and Intel introducing a 56-core Xeon. But keeping those giant CPUs fed is a real challenge. You can’t use a 100Gbit link, because the CPU has to look at all that traffic and gets overwhelmed, argues Deierling.

“Suddenly 100-200Gbits becomes possible because a CPU doesn’t have to look at every packet and decide which core needs it,” he said.

The amount of CPU load depends on the workload. A telco can see as much as 70% of CPU time go to packet processing; even at a minimum, it is around 30%.

“Our goal is to bring that to 0% packet processing so the CPU can do what it does best, which is process apps,” he said. BlueField-2 can process up to 215 million packets per second, Deierling added.
|
||||
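The scale of the problem is easy to sketch with back-of-the-envelope arithmetic (our own illustration, not a Mellanox figure): at 200Gbit/sec, a stream of minimum-size Ethernet frames leaves only a few nanoseconds of budget per packet, which is why asking a general-purpose CPU to inspect every packet does not scale.

```python
# Worst-case packet-rate arithmetic for a 200Gbit/s link (illustrative only).
# Assumes minimum-size 64-byte Ethernet frames plus the mandatory 20 bytes
# of preamble and inter-frame gap that each frame occupies on the wire.

LINK_BPS = 200e9          # 200 Gbit/s, as promised for ConnectX-6 Dx / BlueField-2
FRAME_BYTES = 64 + 20     # minimum frame + preamble and inter-frame gap

packets_per_sec = LINK_BPS / (FRAME_BYTES * 8)   # frames per second at line rate
ns_per_packet = 1e9 / packets_per_sec            # time budget per frame

print(f"{packets_per_sec / 1e6:.0f} Mpps worst case")   # roughly 298 Mpps
print(f"{ns_per_packet:.2f} ns per packet")             # roughly 3.4 ns
```

At roughly 3.4 ns per packet, a 3GHz core gets on the order of ten cycles per frame, which puts the article's 215 million packets per second figure for BlueField-2 in context.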
### ConnectX-6 Dx and BlueField-2 also provide security features

The two are also focused on offering secure, high-speed interconnects inside the firewall. With standard network security, you have a firewall but minimal security inside the network, so once a hacker breaches your firewall, he often has free rein inside it.

With ConnectX-6 Dx and BlueField-2, the latter of which contains a ConnectX-6 Dx processor on the NIC, your internal network communications are also protected, so even if someone breaches your firewall, they can’t get at your data.

ConnectX-6 Dx SmartNICs provide up to two ports of 25, 50 or 100Gb/s, or a single port of 200Gb/s, Ethernet connectivity powered by 50Gb/s PAM4 SerDes technology and PCIe 4.0 host connectivity. ConnectX-6 Dx’s innovative hardware offload engines include IPsec and TLS inline data-in-motion crypto, advanced network virtualization, RDMA over Converged Ethernet (RoCE), and NVMe over Fabrics (NVMe-oF) storage accelerations.

The BlueField-2 IPU integrates a ConnectX-6 Dx plus an Arm processor in a single system-on-chip (SoC), supporting both Ethernet and InfiniBand connectivity up to 200Gb/sec. BlueField-2-based SmartNICs act as a co-processor that puts a computer in front of the computer to transform bare-metal and virtualized environments using advanced software-defined networking, NVMe SNAP storage disaggregation, and enhanced security capabilities.

Both ConnectX-6 Dx and BlueField-2 are due in the fourth quarter.

### Partnering with Nvidia

Mellanox is in the process of being acquired by Nvidia, but the two companies are hardly waiting for government approval. At VMworld, Mellanox announced that its Remote Direct Memory Access (RDMA) networking solutions for VMware vSphere will enable virtualized machine learning with better GPU utilization and efficiency.

Benchmarks found that Nvidia’s virtualized GPUs achieve a two-fold increase in efficiency when using VMware’s paravirtualized RDMA (PVRDMA) technology rather than traditional networking protocols. And that was when connecting Nvidia T4 GPUs with Mellanox’s ConnectX-5 100GbE SmartNICs, the older generation supplanted by today’s announcement.

The PVRDMA Ethernet solution enables VM-to-VM communication over RDMA, which boosts data communication performance in virtualized environments while achieving significantly higher efficiency compared with legacy TCP/IP transports. This translates into optimized server and GPU utilization, reduced machine learning training time, and improved scalability.

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3433924/mellanox-introduces-smartnics-to-eliminate-network-load-on-cpus.html

Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/cso_identity_access_management_abstract_network_connections_circuits_reflected_in_eye_by_natali_mis_gettyimages-654791312_2400x1600-100808178-large.jpg
[2]: https://www.networkworld.com/article/3356444/nvidia-grabs-mellanox-out-from-under-intels-nose.html
[3]: https://www.networkworld.com/article/3209131/lan-wan/what-sdn-is-and-where-its-going.html
[4]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

@ -1,65 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Endless Grants $500,000 Fund To GNOME Foundation’s Coding Education Challenge)
[#]: via: (https://itsfoss.com/endless-gnome-coding-education-challenge/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Endless Grants $500,000 Fund To GNOME Foundation’s Coding Education Challenge
======

The [GNOME Foundation][1] recently announced the “**Coding Education Challenge**”, a three-stage competition offering educators and students the opportunity to share their innovative ideas (projects) for teaching coding with free and open-source software.

For the funding (which covers the rewards), [Endless][2] has issued a $500,000 (half a million) grant to support the competition and attract more educators and students from across the world. Yes, that is a whole lot of money to be awarded to the team (or individual) that wins the competition.

In case you didn’t know about **Endless**, here’s some background – _they work on increasing digital access for children and helping them make the most of it while also educating them about it_. Among other projects, they have the [Endless OS Linux distribution][3]. They also offer [inexpensive mini PCs running Linux][4] to support their educational projects.

In the [press release][5], **Neil McGovern**, Executive Director of the GNOME Foundation, said:
> “We’re very grateful that Endless has come forward to provide more opportunities for individuals to learn about free and open-source ”

He also added:

> “We’re excited to see what can be achieved when we empower the creativity and imagination of our global community. We hope to make powerful partnerships between students and educators to explore the possibilities of our rich and diverse software ecosystem. Reaching the next generation of developers is crucial to ensuring that free software continues for many years in the future.”

**Matt Dalio**, founder of Endless, also shared his thoughts about the competition:

> “We fully believe in GNOME’s mission of making technology available and providing the tools of digital agency to all. What’s so unique about the GNOME Project is that it delivers a fully-working personal computer system, which is a powerful real-world vehicle to teach kids to code. There are so many potential ways for this competition to build flourishing ecosystems that empower the next generation to create, learn and build.”

Beyond the announcement of the competition and the grant, we do not have many more details. However, anyone can submit a proposal for the competition (as an individual or a team). It has also been decided that there will be 20 winners in the first round, each rewarded **$6,500** for their ideas.
For the second stage of the competition, the winners will be asked to provide a working prototype, from which 5 finalists will be selected to receive **$25,000** each in prize money.

The final stage will involve building an end product, from which only two winners will be selected. The runner-up will get **$25,000** and the winner walks away with **$100,000**.

_**Wrapping Up**_

I’d love to see more details on the ‘Coding Education Challenge’ from the GNOME Foundation. We shall update this article as more details about the competition emerge.

While the grant makes this look like a great initiative by the GNOME Foundation, what do you think about it? Feel free to share your thoughts in the comments below.

--------------------------------------------------------------------------------
via: https://itsfoss.com/endless-gnome-coding-education-challenge/

Author: [Ankush Das][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.gnome.org/
[2]: https://endlessnetwork.com/
[3]: https://endlessos.com/home/
[4]: https://endlessos.com/computers/
[5]: https://www.gnome.org/news/2019/08/gnome-foundation-launches-coding-education-challenge/
[6]: https://itsfoss.com/stationx-manjaro-linux/

@ -1,72 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Exploit found in Supermicro motherboards could allow for remote hijacking)
[#]: via: (https://www.networkworld.com/article/3435123/exploit-found-in-supermicro-motherboards-could-allow-for-remote-hijacking.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Exploit found in Supermicro motherboards could allow for remote hijacking
======
The vulnerability impacts three models of Supermicro motherboards. Fortunately, a fix is already available.
IDG / Thinkstock

A security group discovered a vulnerability in three models of Supermicro motherboards that could allow an attacker to remotely commandeer the server. Fortunately, a fix is already available.

Eclypsium, which specializes in firmware security, announced in its blog that it had found a set of flaws in the baseboard management controller (BMC) of three different models of Supermicro server boards: the X9, X10, and X11.

BMCs are designed to give administrators remote access to the computer so they can do maintenance and other updates, such as firmware and operating system patches. A BMC is meant to be a secure port into the computer, walled off from the rest of the server.

Normally, BMCs are locked down within the network to prevent this kind of malicious access in the first place. In some cases, though, BMCs are left open to the internet so they can be accessed from a web browser, and those interfaces are not terribly secure. That’s what Eclypsium found.
For its BMC management console, Supermicro uses a small application called the virtual media application, which allows admins to remotely mount images from USB devices and CD- or DVD-ROM drives.

When accessed remotely, the virtual media service allows plaintext authentication, sends most of the traffic unencrypted, uses a weak encryption algorithm for the rest, and is susceptible to an authentication bypass, [according to Eclypsium][3].

Eclypsium was more diplomatic than I am, so I’ll say it: Supermicro was sloppy.

These issues allow an attacker to easily gain access to a server, whether by capturing a legitimate user’s authentication packet, by using default credentials, or, in some cases, without any credentials at all.

"This means attackers can attack the server in the same way as if they had physical access to a USB port, such as loading a new operating system image or using a keyboard and mouse to modify the server, implant malware, or even disable the device entirely," Eclypsium wrote in its blog post.

All told, the team found four different flaws within the virtual media service of the BMC's web control interface.

### How an attacker could exploit the Supermicro flaws

According to Eclypsium, the easiest way to attack the virtual media flaws is to find a server with the default login or to brute-force an easily guessed login (root or admin). In other cases, the flaws would have to be targeted.

Normally, access to the virtual media service goes through a small Java application served from the BMC’s web interface. This application then connects to the virtual media service listening on TCP port 623 on the BMC. A scan by Eclypsium of port 623 turned up 47,339 exposed BMCs around the world.
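A reachability check of the kind Eclypsium's scan performed can be sketched in a few lines; this is an illustrative sketch for auditing equipment you administer yourself, and the address in the comment is a hypothetical example, not one from the report.

```python
import socket

def virtual_media_port_open(host, port=623, timeout=2.0):
    """Return True if a TCP connection to the given port (623 is the BMC
    virtual-media service) succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example -- only probe equipment you administer:
# print(virtual_media_port_open("10.0.0.42"))
```

If a BMC answers on this port from an untrusted network, it should be moved behind a management VLAN or firewall regardless of patch level.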
Eclypsium did the right thing: it contacted Supermicro and waited for the vendor to release [an update to fix the vulnerabilities][5] before going public. Supermicro thanked Eclypsium not only for bringing the issue to its attention but also for helping validate the fixes.

Eclypsium is on quite a roll. In July it disclosed BMC [vulnerabilities in motherboards from Lenovo, Gigabyte][6] and other vendors, and last month it [disclosed flaws in 40 device drivers][7] from 20 vendors that could be exploited to deploy malware.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3435123/exploit-found-in-supermicro-motherboards-could-allow-for-remote-hijacking.html

Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://eclypsium.com/2019/09/03/usbanywhere-bmc-vulnerability-opens-servers-to-remote-attack/
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.supermicro.com/support/security_BMC_virtual_media.cfm
[6]: https://eclypsium.com/2019/07/16/vulnerable-firmware-in-the-supply-chain-of-enterprise-servers/
[7]: https://eclypsium.com/2019/08/10/screwed-drivers-signed-sealed-delivered/
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

@ -1,52 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Doug Bolden, Dunnet (IF))
[#]: via: (http://www.wyrmis.com/games/if/dunnet.html)
[#]: author: (W Doug Bolden http://www.wyrmis.com)

Doug Bolden, Dunnet (IF)
======

### Dunnet (IF)

#### Review

When I began becoming a semi-serious hobbyist of IF last year, I mostly focused on Infocom, Adventures Unlimited, other Scott Adams-based games, and freeware titles. I went on to buy some from Malinche. I picked up _1893_ and _Futureboy_ and (most recently) _Treasures of a Slave Kingdom_. I downloaded a lot of free games from various sites. With all of my research and playing, I never once read anything about a game being bundled with Emacs.

Partially, this is because I am a Vim guy. But I used to use Emacs, kind of a lot, for probably my first couple of years with Linux, about as long as I have been a diehard Vim fan now. I just never explored it, it seems.

I booted up Emacs tonight, and my fonts were hosed. I still do not know exactly why. I surfed some menus to find out what was going wrong and came across a menu option called "Adventure" under Games, which I assumed (I know, I know) meant the 1977 Crowther and Woods variety. When I clicked it tonight, thinking it had been a few months since I chased a bird around with a cage in a mine so I could fight off giant snakes or something, I was presented with text about ends of roads and shovels, and trees that, if shaken, kill me with a coconut. This was not the game I thought it was.

I dug around (or, in purely technical terms, typed "help") and got directed to [this website][1]. Well, here was an IF I had never touched before, brand spanking new to me. I had planned to play some _ToaSK_ tonight, but figured that could wait. Besides, I was not quite in the mood for the jocular fun of S. John Ross's commercial IF outing. I needed something a little more direct, and this apparently was it.
Most of the game plays out just like the _Colossal Cave Adventure_ cousins of the oldschool (generally commercial) IF days. There are items you pick up. Each does a single task (well, there could be one exception, I guess). You collect treasures. Winning is a combination of getting to the end and turning in the treasures. The game slightly tweaks the formula by allowing multiple drop-off points for the treasures. Since there is a weight limit, though, you usually have to drop them off at a particular time to avoid getting stuck. Several times, your "item cache" is flushed, so to speak, meaning you have to go back and replay earlier portions to figure out how to bring things forward. Items can also be damaged in ways that stop you from progressing. Replaying is pretty much unavoidable, unless you guess outcomes just right.

It also inherits many problems from the era it came from. There is a twisty maze. I'm not sure how big it is; I just cheated and looked up a walkthrough for the maze portion. I plan to go back, replay up to the maze, and map it out, though. I was just mentally and physically beat when I played and knew I would either have to call it quits for the night or cheat through the maze. I'm glad I cheated, because there are some interesting things after the maze.

It also has the same sort of stilted syntax and variable levels of description that the original _Adventure_ had. Looking at one item might give you "there is nothing special about that" while looking at another might give you a sentence of flavor text. Several things mentioned in the background do not exist to the parser, while some do. Part of gameplay is putting up with experimenting. This includes, in some cases, a tendency for room descriptions to be written from the perspective of the first time you enter. I know that the Classroom found towards the end of the game does not mention its south exit, either. There are possibly other instances I didn't notice.

Its final issue, again a product of the era in which it was designed, is random death syndrome. This is not too common, but there are a few places where things with no initially apparent fatal outcome lead to one anyhow. In some ways, this "fatal outcome" is just the game reaching an unwinnable state. For an example of the former, type "shake trees" in the first room. For an example of the latter, send the lamp, the key, or the shovel through the ftp without switching ftp modes first. At least with the former, there is a sense of exploration in finding new ways to die. In IF, creative deaths are a form of victory in their own right.

_Dunnet_ has a couple of differences from most IF. The first difference is minor. There are little odd descriptions throughout the game: "This room is red" or "The towel has a picture of Snoopy on it" or "There is a cliff here" that do not seem to have an immediate effect on the game. Sure, you can jump over the cliff (and die, obviously), but it still comes off as a bright spot in the standard description matrix. Towards the end, you will be forced to recall these details. It makes a neat little diversion of looking around and exploring things. Most of the details are cute and/or add to the surreality of the game overall.

The other big difference, and the one that greatly increased both my annoyance with and my enjoyment of the game, revolves around the two or three computer-oriented scenes in the game. You have to type commands into two different computers throughout. One is a VAX and the other is, um, something like a PC (I forget). In both cases, there are clues to be found by knowing your way around the interface. This is a game for computer folk, so most who play it will have a sense of how to type "ls" or "dir" depending on the OS. But not all will. Beating the game requires a general sense of computer literacy. You must know what types there are in ftp. You must know how to determine what type a file is. You must know how to read a text file at a DOS-style prompt. You must know something about protocols and etiquette for logging into ftp servers. All this sort of thing. If you do, or are willing to learn (I looked up some of the stuff online), then you can get past this portion with no problem. But for some people this can be like the maze, requiring several replays to get things right.
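The ftp puzzle mirrors a real-world pitfall: an FTP ASCII-mode (TYPE A) transfer rewrites line endings, which silently corrupts binary files, which is why real ftp clients have a `binary` command. A small sketch of the effect (our illustration, not code from the game):

```python
def ascii_mode_transfer(data: bytes) -> bytes:
    """Simulate an FTP ASCII-mode (TYPE A) transfer to a Unix host:
    every CRLF on the wire gets rewritten to a bare LF. A binary-mode
    (TYPE I) transfer would return the bytes untouched."""
    return data.replace(b"\r\n", b"\n")

# PNG files begin with magic bytes that deliberately contain a CRLF pair,
# precisely so that an accidental ASCII-mode transfer is easy to detect.
binary_payload = b"\x89PNG\r\n\x1a\n"

assert ascii_mode_transfer(binary_payload) != binary_payload   # corrupted
assert ascii_mode_transfer(b"hello\r\nworld") == b"hello\nworld"
```

In the game, as with a real binary file, the "corrupted" item arrives unusable, and the state is unwinnable.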
The end result is a quirky but fun game that I wish I had known about before, because now I have the feeling that my computer is hiding other secrets from me. Glad to have played. I will likely play again to see how many ways I can die.

--------------------------------------------------------------------------------
via: http://www.wyrmis.com/games/if/dunnet.html

Author: [W Doug Bolden][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: http://www.wyrmis.com
[b]: https://github.com/lujun9972
[1]: http://www.driver-aces.com/ronnie.html

@ -1,111 +0,0 @@
My Lisp Experiences and the Development of GNU Emacs
======

> (Transcript of Richard Stallman's Speech, 28 Oct 2002, at the International Lisp Conference).

Since none of my usual speeches have anything to do with Lisp, none of them were appropriate for today, so I'm going to have to wing it. Since I've done enough things in my career connected with Lisp, I should be able to say something interesting.

My first experience with Lisp was when I read the Lisp 1.5 manual in high school. That's when I had my mind blown by the idea that there could be a computer language like that. The first time I had a chance to do anything with Lisp was when I was a freshman at Harvard and I wrote a Lisp interpreter for the PDP-11. It was a very small machine — it had something like 8k of memory — and I managed to write the interpreter in a thousand instructions. This gave me some room for a little bit of data. That was before I got to see what real software was like, that did real system jobs.

I began doing work on a real Lisp implementation with JonL White once I started working at MIT. I got hired at the Artificial Intelligence Lab not by JonL, but by Russ Noftsker, which was most ironic considering what was to come — he must have really regretted that day.

During the 1970s, before my life became politicized by horrible events, I was just going along making one extension after another for various programs, and most of them did not have anything to do with Lisp. But, along the way, I wrote a text editor, Emacs. The interesting idea about Emacs was that it had a programming language, and the user's editing commands would be written in that interpreted programming language, so that you could load new commands into your editor while you were editing. You could edit the programs you were using and then go on editing with them. So, we had a system that was useful for things other than programming, and yet you could program it while you were using it. I don't know if it was the first one of those, but it certainly was the first editor like that.

This spirit of building up gigantic, complicated programs to use in your own editing, and then exchanging them with other people, fueled the spirit of free-wheeling cooperation that we had at the AI Lab then. The idea was that you could give a copy of any program you had to someone who wanted a copy of it. We shared programs with whoever wanted to use them; they were human knowledge. So even though there was no organized political thought relating the way we shared software to the design of Emacs, I'm convinced that there was a connection between them, an unconscious connection perhaps. I think that it's the nature of the way we lived at the AI Lab that led to Emacs and made it what it was.

The original Emacs did not have Lisp in it. The lower level language, the non-interpreted language, was PDP-10 Assembler. The interpreter we wrote in that actually wasn't written for Emacs; it was written for TECO. TECO was our text editor, and it was an extremely ugly programming language, as ugly as could possibly be. The reason was that it wasn't designed to be a programming language; it was designed to be an editor and command language. There were commands like ‘5l’, meaning ‘move five lines’, or ‘i’ and then a string and then an ESC to insert that string. You would type a string that was a series of commands, which was called a command string. You would end it with ESC ESC, and it would get executed.

Well, people wanted to extend this language with programming facilities, so they added some. For instance, one of the first was a looping construct, which was < >. You would put those around things and it would loop. There were other cryptic commands that could be used to conditionally exit the loop. To make Emacs, we (1) added facilities to have subroutines with names. Before that, it was sort of like Basic, and the subroutines could only have single letters as their names. That made it hard to write big programs, so we added code so they could have longer names. Actually, there were some rather sophisticated facilities; I think that Lisp got its unwind-protect facility from TECO.
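As a rough illustration of how such a command string worked, here is a toy interpreter in Python (a sketch of the flavor of TECO, not real TECO: it handles only a numeric argument, the ‘l’ move command, and ‘i’…ESC insertion):

```python
ESC = "\x1b"  # the ESC character that terminated an inserted string

def run_command_string(cmds, lines, cursor=0):
    """Tiny TECO-flavored interpreter over a buffer of lines.
    '<n>l' moves n lines down; 'i<text><ESC>' inserts text at the cursor.
    Illustrative only -- real TECO had far more (and uglier) commands."""
    i = 0
    while i < len(cmds):
        # accumulate an optional numeric argument, e.g. the '5' in '5l'
        num = 0
        while i < len(cmds) and cmds[i].isdigit():
            num = num * 10 + int(cmds[i])
            i += 1
        c = cmds[i]
        i += 1
        if c == "l":                      # move (default 1) lines down
            cursor = min(cursor + (num or 1), len(lines))
        elif c == "i":                    # insert the string up to ESC
            end = cmds.index(ESC, i)
            lines.insert(cursor, cmds[i:end])
            cursor += 1
            i = end + 1
        # other commands are ignored in this sketch
    return lines, cursor

buf = ["one", "two", "three", "four", "five", "six"]
buf, cur = run_command_string("5l" + "i" + "inserted" + ESC, buf)
# buf now has "inserted" between "five" and "six"
```

Even this stripped-down version shows the problem Stallman describes: the "program" is an unreadable stream of single-character commands, nothing like a language designed for programming.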
|
||||
We started putting in rather sophisticated facilities, all with the ugliest syntax you could ever think of, and it worked — people were able to write large programs in it anyway. The obvious lesson was that a language like TECO, which wasn't designed to be a programming language, was the wrong way to go. The language that you build your extensions on shouldn't be thought of as a programming language in afterthought; it should be designed as a programming language. In fact, we discovered that the best programming language for that purpose was Lisp.
|
||||
|
||||
It was Bernie Greenberg, who discovered that it was (2). He wrote a version of Emacs in Multics MacLisp, and he wrote his commands in MacLisp in a straightforward fashion. The editor itself was written entirely in Lisp. Multics Emacs proved to be a great success — programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was a programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.
|
||||
|
||||
So Bernie saw that an application — a program that does something useful for you — which has Lisp inside it and which you could extend by rewriting the Lisp programs, is actually a very good way for people to learn programming. It gives them a chance to write small programs that are useful for them, which in most arenas you can't possibly do. They can get encouragement for their own practical use — at the stage where it's the hardest — where they don't believe they can program, until they get to the point where they are programmers.
|
||||
|
||||
At that point, people began to wonder how they could get something like this on a platform where they didn't have full service Lisp implementation. Multics MacLisp had a compiler as well as an interpreter — it was a full-fledged Lisp system — but people wanted to implement something like that on other systems where they had not already written a Lisp compiler. Well, if you didn't have the Lisp compiler you couldn't write the whole editor in Lisp — it would be too slow, especially redisplay, if it had to run interpreted Lisp. So we developed a hybrid technique. The idea was to write a Lisp interpreter and the lower level parts of the editor together, so that parts of the editor were built-in Lisp facilities. Those would be whatever parts we felt we had to optimize. This was a technique that we had already consciously practiced in the original Emacs, because there were certain fairly high level features which we re-implemented in machine language, making them into TECO primitives. For instance, there was a TECO primitive to fill a paragraph (actually, to do most of the work of filling a paragraph, because some of the less time-consuming parts of the job would be done at the higher level by a TECO program). You could do the whole job by writing a TECO program, but that was too slow, so we optimized it by putting part of it in machine language. We used the same idea here (in the hybrid technique), that most of the editor would be written in Lisp, but certain parts of it that had to run particularly fast would be written at a lower level.
|
||||
|
||||
Therefore, when I wrote my second implementation of Emacs, I followed the same kind of design. The low level language was not machine language anymore, it was C. C was a good, efficient language for portable programs to run in a Unix-like operating system. There was a Lisp interpreter, but I implemented facilities for special purpose editing jobs directly in C — manipulating editor buffers, inserting leading text, reading and writing files, redisplaying the buffer on the screen, managing editor windows.
Now, this was not the first Emacs that was written in C and ran on Unix. The first was written by James Gosling, and was referred to as GosMacs. A strange thing happened with him. In the beginning, he seemed to be influenced by the same spirit of sharing and cooperation of the original Emacs. I first released the original Emacs to people at MIT. Someone wanted to port it to run on Twenex — it originally only ran on the Incompatible Timesharing System we used at MIT. They ported it to Twenex, which meant that there were a few hundred installations around the world that could potentially use it. We started distributing it to them, with the rule that “you had to send back all of your improvements” so we could all benefit. No one ever tried to enforce that, but as far as I know people did cooperate.
Gosling did, at first, seem to participate in this spirit. He wrote in a manual that he called the program Emacs hoping that others in the community would improve it until it was worthy of that name. That's the right approach to take towards a community — to ask them to join in and make the program better. But after that he seemed to change the spirit, and sold it to a company.
At that time I was working on the GNU system (a free software Unix-like operating system that many people erroneously call “Linux”). There was no free software Emacs editor that ran on Unix. I did, however, have a friend who had participated in developing Gosling's Emacs. Gosling had given him, by email, permission to distribute his own version. He proposed to me that I use that version. Then I discovered that Gosling's Emacs did not have a real Lisp. It had a programming language that was known as ‘mocklisp’, which looks syntactically like Lisp, but didn't have the data structures of Lisp. So programs were not data, and vital elements of Lisp were missing. Its data structures were strings, numbers and a few other specialized things.
I concluded I couldn't use it and had to replace it all, the first step of which was to write an actual Lisp interpreter. I gradually adapted every part of the editor based on real Lisp data structures, rather than ad hoc data structures, making the data structures of the internals of the editor exposable and manipulable by the user's Lisp programs.
The one exception was redisplay. For a long time, redisplay was sort of an alternate world. The editor would enter the world of redisplay and things would go on with very special data structures that were not safe for garbage collection, not safe for interruption, and you couldn't run any Lisp programs during that. We've changed that since — it's now possible to run Lisp code during redisplay. It's quite a convenient thing.
This second Emacs program was ‘free software’ in the modern sense of the term — it was part of an explicit political campaign to make software free. The essence of this campaign was that everybody should be free to do the things we did in the old days at MIT, working together on software and working with whomever wanted to work with us. That is the basis for the free software movement — the experience I had, the life that I've lived at the MIT AI lab — to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge.
At the time, you could make a computer that was about the same price range as other computers that weren't meant for Lisp, except that it would run Lisp much faster than they would, and with full type checking in every operation as well. Ordinary computers typically forced you to choose between execution speed and good typechecking. So yes, you could have a Lisp compiler and run your programs fast, but when they tried to take `car` of a number, it got nonsensical results and eventually crashed at some point.
The Lisp machine was able to execute instructions about as fast as those other machines, but each instruction — a car instruction would do data typechecking — so when you tried to get the car of a number in a compiled program, it would give you an immediate error. We built the machine and had a Lisp operating system for it. It was written almost entirely in Lisp, the only exceptions being parts written in the microcode. People became interested in manufacturing them, which meant they should start a company.
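As an illustration of per-operation typechecking, here is a toy cons cell in Python whose `car` checks its argument's type and signals an immediate error, the behavior the Lisp machine provided in hardware (a sketch of the idea, not the machine's instruction set):

```python
class Cons:
    """A toy cons cell."""
    def __init__(self, car, cdr):
        self.car_ = car
        self.cdr_ = cdr

def car(x):
    # The Lisp machine performed this check in hardware on every car
    # instruction; here we do it explicitly in software.
    if not isinstance(x, Cons):
        raise TypeError(f"car: {x!r} is not a cons cell")
    return x.car_

pair = Cons(1, Cons(2, None))
print(car(pair))   # 1
try:
    car(42)        # taking car of a number fails immediately...
except TypeError as e:
    print(e)       # ...instead of silently producing nonsense
```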
There were two different ideas about what this company should be like. Greenblatt wanted to start what he called a “hacker” company. This meant it would be a company run by hackers and would operate in a way conducive to hackers. Another goal was to maintain the AI Lab culture (3). Unfortunately, Greenblatt didn't have any business experience, so other people in the Lisp machine group said they doubted whether he could succeed. They thought that his plan to avoid outside investment wouldn't work.
Why did he want to avoid outside investment? Because when a company has outside investors, they take control and they don't let you have any scruples. And eventually, if you have any scruples, they also replace you as the manager.
So Greenblatt had the idea that he would find a customer who would pay in advance to buy the parts. They would build machines and deliver them; with profits from those parts, they would then be able to buy parts for a few more machines, sell those and then buy parts for a larger number of machines, and so on. The other people in the group thought that this couldn't possibly work.
Greenblatt then recruited Russell Noftsker, the man who had hired me, who had subsequently left the AI Lab and created a successful company. Russell was believed to have an aptitude for business. He demonstrated this aptitude for business by saying to the other people in the group, “Let's ditch Greenblatt, forget his ideas, and we'll make another company.” Stabbing in the back, clearly a real businessman. Those people decided they would form a company called Symbolics. They would get outside investment, not have scruples, and do everything possible to win.
But Greenblatt didn't give up. He and the few people loyal to him decided to start Lisp Machines Inc. anyway and go ahead with their plans. And what do you know, they succeeded! They got the first customer and were paid in advance. They built machines and sold them, and built more machines and more machines. They actually succeeded even though they didn't have the help of most of the people in the group. Symbolics also got off to a successful start, so you had two competing Lisp machine companies. When Symbolics saw that LMI was not going to fall flat on its face, they started looking for ways to destroy it.
Thus, the abandonment of our lab was followed by “war” in our lab. The abandonment happened when Symbolics hired away all the hackers, except me and the few who worked at LMI part-time. Then they invoked a rule and eliminated people who worked part-time for MIT, so they had to leave entirely, which left only me. The AI lab was now helpless. And MIT had made a very foolish arrangement with these two companies. It was a three-way contract where both companies licensed the use of Lisp machine system sources. These companies were required to let MIT use their changes. But it didn't say in the contract that MIT was entitled to put them into the MIT Lisp machine systems that both companies had licensed. Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was.
So Symbolics came up with a plan (4). They said to the lab, “We will continue making our changes to the system available for you to use, but you can't put it into the MIT Lisp machine system. Instead, we'll give you access to Symbolics' Lisp machine system, and you can run it, but that's all you can do.”
This, in effect, meant that they demanded that we had to choose a side, and use either the MIT version of the system or the Symbolics version. Whichever choice we made determined which system our improvements went to. If we worked on and improved the Symbolics version, we would be supporting Symbolics alone. If we used and improved the MIT version of the system, we would be doing work available to both companies, but Symbolics saw that we would be supporting LMI because we would be helping them continue to exist. So we were not allowed to be neutral anymore.
Up until that point, I hadn't taken the side of either company, although it made me miserable to see what had happened to our community and the software. But now, Symbolics had forced the issue. So, in an effort to help keep Lisp Machines Inc. going (5) — I began duplicating all of the improvements Symbolics had made to the Lisp machine system. I wrote the equivalent improvements again myself (i.e., the code was my own).
After a while (6), I came to the conclusion that it would be best if I didn't even look at their code. When they made a beta announcement that gave the release notes, I would see what the features were and then implement them. By the time they had a real release, I did too.
In this way, for two years, I prevented them from wiping out Lisp Machines Incorporated, and the two companies went on. But, I didn't want to spend years and years punishing someone, just thwarting an evil deed. I figured they had been punished pretty thoroughly because they were stuck with competition that was not leaving or going to disappear (7). Meanwhile, it was time to start building a new community to replace the one that their actions and others had wiped out.
The Lisp community in the 70s was not limited to the MIT AI Lab, and the hackers were not all at MIT. The war that Symbolics started was what wiped out MIT, but there were other events going on then. There were people giving up on cooperation, and together this wiped out the community and there wasn't much left.
Once I stopped punishing Symbolics, I had to figure out what to do next. I had to make a free operating system, that was clear — the only way that people could work together and share was with a free operating system.
At first, I thought of making a Lisp-based system, but I realized that wouldn't be a good idea technically. To have something like the Lisp machine system, you needed special purpose microcode. That's what made it possible to run programs as fast as other computers would run their programs and still get the benefit of typechecking. Without that, you would be reduced to something like the Lisp compilers for other machines. The programs would be faster, but unstable. Now that's okay if you're running one program on a timesharing system — if one program crashes, that's not a disaster, that's something your program occasionally does. But that didn't make it good for writing the operating system in, so I rejected the idea of making a system like the Lisp machine.
I decided instead to make a Unix-like operating system that would have Lisp implementations to run as user programs. The kernel wouldn't be written in Lisp, but we'd have Lisp. So the development of that operating system, the GNU operating system, is what led me to write the GNU Emacs. In doing this, I aimed to make the absolute minimal possible Lisp implementation. The size of the programs was a tremendous concern.
There were people in those days, in 1985, who had one-megabyte machines without virtual memory. They wanted to be able to use GNU Emacs. This meant I had to keep the program as small as possible.
For instance, at the time the only looping construct was ‘while’, which was extremely simple. There was no way to break out of the ‘while’ statement, you just had to do a catch and a throw, or test a variable that ran the loop. That shows how far I was pushing to keep things small. We didn't have ‘caar’ and ‘cadr’ and so on; “squeeze out everything possible” was the spirit of GNU Emacs, the spirit of Emacs Lisp, from the beginning.
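The catch/throw escape described above can be sketched in Python, using an exception to stand in for Emacs Lisp's `throw` (a rough analogue for illustration, not actual Emacs Lisp semantics):

```python
class Throw(Exception):
    """Stands in for Emacs Lisp's (throw TAG VALUE)."""
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def catch(tag, body):
    """Rough analogue of (catch TAG BODY...): run body, intercept a throw."""
    try:
        return body()
    except Throw as t:
        if t.tag == tag:
            return t.value
        raise

def find_first_even(items):
    def body():
        i = 0
        while True:                 # no break-style construct: loop forever...
            if i >= len(items):
                raise Throw("found", None)
            if items[i] % 2 == 0:
                raise Throw("found", items[i])  # ...and throw to escape
            i += 1
    return catch("found", body)

print(find_first_even([3, 5, 8, 9]))  # 8
```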
Obviously, machines are bigger now, and we don't do it that way any more. We put in ‘caar’ and ‘cadr’ and so on, and we might put in another looping construct one of these days. We're willing to extend it some now, but we don't want to extend it to the level of Common Lisp. I implemented Common Lisp once on the Lisp machine, and I'm not all that happy with it. One thing I don't like terribly much is keyword arguments (8). They don't seem quite Lispy to me; I'll do it sometimes but I minimize the times when I do that.
That was not the end of the GNU projects involved with Lisp. Later on around 1995, we were looking into starting a graphical desktop project. It was clear that for the programs on the desktop, we wanted a programming language to write a lot of it in to make it easily extensible, like the editor. The question was what it should be.
At the time, TCL was being pushed heavily for this purpose. I had a very low opinion of TCL, basically because it wasn't Lisp. It looks a tiny bit like Lisp, but semantically it isn't, and it's not as clean. Then someone showed me an ad where Sun was trying to hire somebody to work on TCL to make it the “de-facto standard extension language” of the world. And I thought, “We've got to stop that from happening.” So we started to make Scheme the standard extensibility language for GNU. Not Common Lisp, because it was too large. The idea was that we would have a Scheme interpreter designed to be linked into applications in the same way TCL was linked into applications. We would then recommend that as the preferred extensibility package for all GNU programs.
There's an interesting benefit you can get from using such a powerful language as a version of Lisp as your primary extensibility language. You can implement other languages by translating them into your primary language. If your primary language is TCL, you can't very easily implement Lisp by translating it into TCL. But if your primary language is Lisp, it's not that hard to implement other things by translating them. Our idea was that if each extensible application supported Scheme, you could write an implementation of TCL or Python or Perl in Scheme that translates that program into Scheme. Then you could load that into any application and customize it in your favorite language and it would work with other customizations as well.
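The translation idea can be sketched in Python: a tiny arithmetic subset of Python is translated into Lisp-style s-expressions. This is only a toy to show the direction of translation; a real translator has to handle far more of the source language:

```python
import ast

def to_sexpr(node):
    """Translate a Python arithmetic expression AST into an s-expression."""
    ops = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}
    if isinstance(node, ast.Expression):
        return to_sexpr(node.body)
    if isinstance(node, ast.BinOp):
        op = ops[type(node.op)]
        return f"({op} {to_sexpr(node.left)} {to_sexpr(node.right)})"
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise ValueError(f"unsupported node: {node!r}")

tree = ast.parse("1 + 2 * 3", mode="eval")
print(to_sexpr(tree))  # (+ 1 (* 2 3))
```

Going the other way, from a Lisp-like target language into Tcl, would be much harder, which is the asymmetry the paragraph above points out.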
As long as the extensibility languages are weak, the users have to use only the language you provided them. Which means that people who love any given language have to compete for the choice of the developers of applications — saying “Please, application developer, put my language into your application, not his language.” Then the users get no choices at all — whichever application they're using comes with one language and they're stuck with [that language]. But when you have a powerful language that can implement others by translating into it, then you give the user a choice of language and we don't have to have a language war anymore. That's what we're hoping ‘Guile’, our Scheme interpreter, will do. We had a person working last summer finishing up a translator from Python to Scheme. I don't know if it's entirely finished yet, but for anyone interested in this project, please get in touch. So that's the plan we have for the future.
I haven't been speaking about free software, but let me briefly tell you a little bit about what that means. Free software does not refer to price; it doesn't mean that you get it for free. (You may have paid for a copy, or gotten a copy gratis.) It means that you have freedom as a user. The crucial thing is that you are free to run the program, free to study what it does, free to change it to suit your needs, free to redistribute the copies of others and free to publish improved, extended versions. This is what free software means. If you are using a non-free program, you have lost crucial freedom, so don't ever do that.
The purpose of the GNU project is to make it easier for people to reject freedom-trampling, user-dominating, non-free software by providing free software to replace it. For those who don't have the moral courage to reject the non-free software, when that means some practical inconvenience, what we try to do is give a free alternative so that you can move to freedom with less of a mess and less of a sacrifice in practical terms. The less sacrifice the better. We want to make it easier for you to live in freedom, to cooperate.
This is a matter of the freedom to cooperate. We're used to thinking of freedom and cooperation with society as if they are opposites. But here they're on the same side. With free software you are free to cooperate with other people as well as free to help yourself. With non-free software, somebody is dominating you and keeping people divided. You're not allowed to share with them, you're not free to cooperate with or help society, any more than you're free to help yourself. Divided and helpless is the state of users using non-free software.
We've produced a tremendous range of free software. We've done what people said we could never do; we have two operating systems of free software. We have many applications and we obviously have a lot farther to go. So we need your help. I would like to ask you to volunteer for the GNU project; help us develop free software for more jobs. Take a look at [http://www.gnu.org/help][1] to find suggestions for how to help. If you want to order things, there's a link to that from the home page. If you want to read about philosophical issues, look in /philosophy. If you're looking for free software to use, look in /directory, which lists about 1900 packages now (which is a fraction of all the free software out there). Please write more and contribute to us. My book of essays, “Free Software and Free Society”, is on sale and can be purchased at [www.gnu.org][2]. Happy hacking!
--------------------------------------------------------------------------------

via: https://www.gnu.org/gnu/rms-lisp.html

作者:[Richard Stallman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.gnu.org
[1]:https://www.gnu.org/help/
[2]:http://www.gnu.org/
[#]: collector: (lujun9972)
[#]: translator: (anonymone )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An Ubuntu User’s Review Of Dell XPS 13 Ubuntu Edition)
[#]: via: (https://itsfoss.com/dell-xps-13-ubuntu-review)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

An Ubuntu User’s Review Of Dell XPS 13 Ubuntu Edition
======

_**Brief: Sharing my impressions and experience of the Dell XPS 13 Kaby Lake Ubuntu edition after using it for over three months.**_
During the Black Friday sale last year, I took the bullet and ordered myself a [Dell XPS 13][1] with the new [Intel Kaby Lake processor][2]. It got delivered in the second week of December, and if you [follow It’s FOSS on Facebook][3], you might have seen the [live unboxing][4].

Though I was tempted to review the Dell XPS 13 Ubuntu edition right away, I knew it wouldn’t be fair. A brand new system will, of course, feel good and work smoothly.

But that’s not the real experience. The real experience of any system comes after weeks, if not months, of use. That’s the reason I held myself back and waited three months before reviewing the Dell XPS 13 Kaby Lake Ubuntu edition.
### Dell XPS 13 Ubuntu Edition Review

Before we see what’s hot and what’s not in the latest version of the Dell XPS 13, I should tell you that I was using an Acer R13 ultrabook before this. So I may compare the new Dell system with the older Acer one.
![Dell XPS 13 Ubuntu Edition System Settings][5]

The Dell XPS 13 comes in several versions based on the processor. The one I am reviewing is the Dell XPS 13 MLK (9360). It has an i5-7200U 7th generation processor. Since I hardly used the touch screen on the Acer Aspire R13, I chose the non-touch version of the XPS. This decision also saved me a couple of hundred euros.
It has 8 GB of LPDDR3 1866MHz RAM and a 256 GB PCIe SSD. Graphics is Intel HD. On the connectivity side, it’s got Killer 1535 Wi-Fi 802.11ac 2×2 and Bluetooth 4.1. The screen is an InfinityEdge Full HD (1920 x 1080) panel.

Now that you know what kind of hardware we’ve got here, let’s see what works and what sucks.
#### Look and feel

![Dell XPS 13 Kaby Lake Ubuntu Edition][6]

At 13.3″, the Dell XPS 13 looks even smaller than a regular 13.3″ laptop, thanks to its nearly non-existent bezel, the specialty of the InfinityEdge display. It is light as a feather, weighing just under 1.23 kg.

The outer surface is metallic, not very shiny, but with a decent aluminum look. On the inside, the palm rest is made of carbon fiber, which is very comfortable to rest your palms on. Unlike the metallic palm rests of the MacBook Air, the carbon fiber ones are more friendly, especially in winter.
It is almost a centimeter and a half thick at its thickest part (around the hinges). This too adds to the elegance of the XPS 13.

Overall, the Dell XPS 13 has a compact and elegant body.
#### Keyboard and touchpad

The keyboard and touchpad mix well with the carbon fiber interior. The keys are smooth, with a springy feel, and give a rich typing experience. All of the important keys are present and are not tiny, something you might be worried about considering the overall tiny size of the XPS 13.

Oh, and the keyboard has backlight support, which adds to the rich feel of this expensive laptop.
While the keyboard is a great experience, the same cannot be said about the touchpad. In fact, the touchpad is the weakest part, and it mars the otherwise good experience of the XPS 13.

The touchpad feels cheap because it makes an irritating sound when you tap on its right side, as if it’s hollow underneath. This is [something that was noticed in earlier versions of the XPS 13][7] but hasn’t been given enough attention to get fixed. This is not something you expect from a product at this price.

Also, scrolling on websites with the touchpad is hideous. It is not suitable for pixel-precise work either, because making small, fine adjustments is difficult.
#### Ports

The Dell XPS 13 has two USB 3.0 ports, one of them with PowerShare. If you did not know, [USB 3.0 PowerShare][8] ports allow you to charge external devices even when your system is turned off.
![Dell XPS 13 Kaby Lake Ubuntu edition ports][9]

It also has a [Thunderbolt][10] port (which doubles as a [USB Type-C port][11]). It doesn’t have an HDMI port, Ethernet port or VGA port. However, all three can be added via the Thunderbolt port and external adapters (sold separately).
![Dell XPS 13 Kaby Lake Ubuntu edition ports][12]

It also has an SD card reader and a headphone jack. In addition to all these, there is an [anti-theft slot][13] (a common security requirement in enterprises).
#### Display

The model I have packs 1920×1080 pixels. It’s full HD, and the display quality is on par: it renders high definition pictures and 1080p video files perfectly.

I cannot compare it with the [QHD model][14] as I never used one. But considering that there is not much 4K content available for now, a full HD display should be sufficient for the next few years.
#### Sound

Compared to the Acer R13, the XPS 13 has better audio quality. Even the maximum volume is louder than that of the Acer R13. The dual speakers give a nice stereo effect.
#### Webcam

The weirdest part of this Dell XPS 13 review comes now. We are all accustomed to seeing the webcam at the top-middle position of a laptop screen. But that is not the case here.

The XPS 13 puts the webcam at the bottom left corner of the screen. This is done to keep the bezel as thin as possible. But it creates a problem.
![Image captured with laptop screen at 90 degrees][15]

When you video chat with someone, it is natural to look straight ahead. With a top-middle webcam, your face is in direct line with the camera. But with the bottom-left placement, it looks like one of those weird accidental selfies you take with the front camera of your smartphone. Heck, people on the other side might see the inside of your nostrils.
#### Battery

Battery life is the strongest point of the Dell XPS 13. While Dell claims an astounding 21-hour battery life, in my experience it reliably gives 8-10 hours of battery life while watching movies, browsing the internet and doing other regular stuff.

There is one strange thing that I noticed, though. It charges pretty quickly up to 90%, but the charging slows down afterward. And it almost never goes beyond 98%.
The battery indicator turns red when the battery falls below 30%, and notifications start appearing when it goes below 10%. There is a small light indicator under the touchpad that turns yellow when the battery is low and white when the charger is plugged in.
#### Overheating

I have previously written about ways to [reduce laptop overheating in Linux][16]. Thankfully, so far, I haven’t needed to employ those tricks.

The Dell XPS 13 remains surprisingly cool when you are using it on battery, even over long sessions. The bottom does get a little warm when you use it while charging.

Overall, the XPS 13 manages heat very well.
#### The Ubuntu experience with Dell XPS 13

So far we have covered fairly generic things about the Dell XPS 13. Let’s talk about how good a Linux laptop it is.

Until now, I used to manually [install Linux on Windows laptops][17]. This is the first Linux laptop I have ever bought. I would also like to mention the awesome first-boot animation of Dell’s Ubuntu laptop. Here’s a YouTube video of it:
One thing I would like to mention here is that Dell never displays its Ubuntu laptops prominently on its website. You’ll have to search the website for Ubuntu to find the Ubuntu editions. Also, the Ubuntu edition is just 50 euros cheaper than its Windows counterpart, whereas I was expecting it to be at least 100 euros less.

Despite being an Ubuntu-preloaded laptop, the super key still comes with the Windows logo on it. It’s trivial, but I would have loved to see the Ubuntu logo on it.

Now, talking about the Ubuntu experience, the first thing I noticed was that there were no hardware issues. Even the function and media keys work perfectly in Ubuntu, which is a pleasant surprise.
Dell has also added its own repository to the software sources to provide some Dell-specific tools. You can see the footprints of Dell throughout the system.

You might be interested to see how Dell partitioned the 256 GB of disk space. Let me show that to you.
![Default disk partition by Dell][18]

As you can see, there is 524 MB reserved for [EFI][19]. Then there is a 3.2 GB partition, presumably the factory restore image.

Dell is using a 17 GB swap partition, which is more than double the RAM size. It seems Dell didn’t put enough thought here, because this is simply a waste of disk space, in my opinion. I would have used [no more than an 11 GB swap partition][20] here.
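As a sketch of where a figure like 11 GB can come from, here is one common rule of thumb (RAM plus the square root of RAM when hibernation is wanted) expressed in Python. This heuristic is an assumption for illustration, not Dell’s reasoning:

```python
import math

def suggested_swap_gb(ram_gb, hibernation=True):
    """Rule-of-thumb swap size: RAM + sqrt(RAM) if hibernation is wanted,
    otherwise just sqrt(RAM), rounded up. One common heuristic, not a
    universal recommendation."""
    base = math.sqrt(ram_gb)
    total = ram_gb + base if hibernation else base
    return math.ceil(total)

print(suggested_swap_gb(8))         # 11, close to the figure mentioned above
print(suggested_swap_gb(8, False))  # 3
```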
As I mentioned before, Dell adds a “restore to factory settings” option to the Grub menu. This is a nice little feature to have.

One thing I don’t like about the XPS 13 Ubuntu edition is the long boot time. It takes a full 23 seconds to reach the login screen after pressing the power button. I would expect it to be faster considering that it uses a PCIe SSD.

If it interests you, the XPS 13 had the Chromium and Google Chrome browsers installed by default instead of Firefox.
As far as my experience goes, I am fairly impressed with the Dell XPS 13 Ubuntu edition. It gives a smooth Ubuntu experience; the laptop feels like a natural home for Ubuntu. Though it is an expensive laptop, I would say it is definitely worth the money.

To summarize, let’s look at the good, the bad and the ugly of the Dell XPS 13 Ubuntu edition.
#### The Good

  * Ultralight weight
  * Compact
  * Keyboard
  * Carbon fiber palm rest
  * Full hardware support for Ubuntu
  * Factory restore option for Ubuntu
  * Nice display and sound quality
  * Good battery life

#### The bad

  * Poor touchpad
  * A little pricey
  * Long boot time for an SSD-powered laptop
  * Windows key still present :P

#### The ugly

  * Weird webcam placement
How did you like this **Dell XPS 13 Ubuntu edition review** from an Ubuntu user’s point of view? Do you find it good enough to spend over a thousand bucks on? Do share your views in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/dell-xps-13-ubuntu-review

Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://amzn.to/2ImVkCV
[2]: http://www.techradar.com/news/computing-components/processors/kaby-lake-intel-core-processor-7th-gen-cpu-news-rumors-and-release-date-1325782
[3]: https://www.facebook.com/itsfoss/
[4]: https://www.facebook.com/itsfoss/videos/810293905778045/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/02/Dell-XPS-13-Ubuntu-Edition-spec.jpg?resize=540%2C337&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-XPS-13-Ubuntu-review.jpeg?resize=800%2C600&ssl=1
[7]: https://www.youtube.com/watch?v=Yt5SkI0c3lM
[8]: http://www.dell.com/support/article/fr/fr/frbsdt1/SLN155147/usb-powershare-feature?lang=EN
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-Ubuntu-XPS-13-Kaby-Lake-ports-1.jpg?resize=800%2C435&ssl=1
[10]: https://en.wikipedia.org/wiki/Thunderbolt_(interface)
[11]: https://en.wikipedia.org/wiki/USB-C
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-Ubuntu-XPS-13-Kaby-Lake-ports-2.jpg?resize=800%2C325&ssl=1
[13]: http://accessories.euro.dell.com/sna/productdetail.aspx?c=ie&l=en&s=dhs&cs=iedhs1&sku=461-10169
[14]: https://recombu.com/mobile/article/quad-hd-vs-qhd-vs-4k-ultra-hd-what-does-it-all-mean_M20472.html
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-XPS-13-webcam-issue.jpg?resize=800%2C450&ssl=1
[16]: https://itsfoss.com/reduce-overheating-laptops-linux/
[17]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-XPS-13-Ubuntu-Edition-disk-partition.jpeg?resize=800%2C448&ssl=1
[19]: https://en.wikipedia.org/wiki/EFI_system_partition
[20]: https://itsfoss.com/swap-size/
@@ -1,70 +0,0 @@

Inside AGL: Familiar Open Source Components Ease Learning Curve
============================================================

![Matt Porter](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/porter-elce-agl.png?itok=E-5xG98S "Matt Porter")

Konsulko’s Matt Porter (pictured) and Scott Murray ran through the major components of AGL’s Unified Code Base at Embedded Linux Conference Europe. [The Linux Foundation][1]

Among the sessions at the recent [Embedded Linux Conference Europe (ELCE)][5] — 57 of which are [available on YouTube][2] — are several reports on the Linux Foundation’s [Automotive Grade Linux project][6]. These include [an overview from AGL Community Manager Walt Miner][3] showing how AGL’s Unified Code Base (UCB) Linux distribution is expanding from in-vehicle infotainment (IVI) to ADAS. There was even a presentation on using AGL to build a remote-controlled robot (see links below).

Here we look at the “State of AGL: Plumbing and Services” talk from Konsulko Group’s CTO Matt Porter and senior staff software engineer Scott Murray. Porter and Murray ran through the components of the current [UCB 4.0 “Daring Dab”][7] and detailed major upstream components and API bindings, many of which will appear in the Electric Eel release due in January 2018.

Despite the automotive focus of the AGL stack, most of the components are already familiar to Linux developers. “It looks a lot like a desktop distro,” Porter told the ELCE attendees in Prague. “All these familiar friends.”

Some of those friends include the underlying Yocto Project “Poky” with OpenEmbedded foundation, which is topped with layers like oe-core, meta-openembedded, and meta-networking. Other components are based on familiar open source software like systemd (application control), Wayland and Weston (graphics), BlueZ (Bluetooth), oFono (telephony), PulseAudio and ALSA (audio), gpsd (location), ConnMan (Internet), and wpa-supplicant (WiFi), among others.

UCB’s application framework is controlled through a WebSocket interface to the API bindings, thereby enabling apps to talk to each other. There’s also a new W3C widget for an alternative application packaging scheme, as well as support for SmartDeviceLink, a technology developed at Ford that automatically syncs up IVI systems with mobile phones.

AGL UCB’s Wayland/Weston graphics layer is augmented with an “IVI shell” that works with the layer manager. “One of the unique requirements of automotive is the ability to separate aspects of the application in the layers,” said Porter. “For example, in a navigation app, the graphics rendering for the map may be completely different than the engine used for the UI decorations. One engine layers to a surface in Wayland to expose the map while the decorations and controls are handled by another layer.”

For audio, ALSA and PulseAudio are joined by GENIVI AudioManager, which works together with PulseAudio. “We use AudioManager for policy driven audio routing,” explained Porter. “It allows you to write a very complex XML-based policy using a rules engine with audio routing.”

UCB leans primarily on the well-known [Smack Project][8] for security, and also incorporates Tizen’s [Cynara][9] safe policy-checker service. A Cynara-enabled D-Bus daemon is used to control Cynara security policies.

Porter and Murray went on to explain AGL’s API binding mechanism, which according to Murray “abstracts the UI from its back-end logic so you can replace it with your own custom UI.” You can re-use application logic with different UI implementations, such as moving from the default Qt to HTML5 or a native toolkit. Application binding requests and responses use JSON via HTTP or WebSocket. Binding calls can be made from applications or from other bindings, thereby enabling “stacking” of bindings.
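
As a rough illustration of that request/response shape, the sketch below composes and parses JSON binding messages of the array form used over the binder's WebSocket. The numeric message-type codes and the `mediaplayer/controls` API name are assumptions for illustration, not taken from AGL documentation.

```python
import json

# Assumed wire codes for a request and a successful reply; the real AGL
# binder protocol should be checked before relying on these values.
CALL, RETOK = 2, 3

def make_binding_call(call_id, api, verb, args):
    """Serialize a binding request as a JSON array: [type, id, "api/verb", args]."""
    return json.dumps([CALL, str(call_id), f"{api}/{verb}", args])

def parse_reply(raw):
    """Split a reply into (call_id, ok, payload)."""
    msg_type, call_id, payload = json.loads(raw)
    return call_id, msg_type == RETOK, payload

# A hypothetical call asking the MediaPlayer binding to start playback.
request = make_binding_call(1, "mediaplayer", "controls", {"value": "play"})
print(request)  # → [2, "1", "mediaplayer/controls", {"value": "play"}]
```

Because bindings can call other bindings with the same message shape, the same helper would serve for the “stacked” binding calls described above.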

Porter and Murray concluded with a detailed description of each binding. These include upstream bindings currently in various stages of development. The first is a Master binding that manages the application lifecycle, including tasks such as install, uninstall, start, and terminate. Other upstream bindings include the WiFi binding and the BlueZ-based Bluetooth binding, which in the future will be upgraded with Bluetooth [PBAP][10] (Phone Book Access Profile). PBAP can connect with contacts databases on your phone, and links to the Telephony binding to replicate caller ID.

The oFono-based Telephony binding also makes calls to the Bluetooth binding for Bluetooth Hands-Free-Profile (HFP) support. In the future, the Telephony binding will add support for sent dial tones, call waiting, call forwarding, and voice modem support.

Support for AM/FM radio is not well developed in the Linux world, so for its Radio binding, AGL started by supporting [RTL-SDR][11] code for low-end radio dongles. Future plans call for supporting specific automotive tuner devices.

The MediaPlayer binding is in very early development, and is currently limited to GStreamer-based audio playback and control. Future plans call for adding playlist controls, as well as one of the most actively sought features among manufacturers: video playback support.

Location bindings include the [gpsd][12]-based GPS binding, as well as GeoClue and GeoFence. GeoClue, which is built around the [GeoClue][13] D-Bus geolocation service, “overlaps a little with GPS, which uses the same location data,” said Porter. GeoClue also gathers location data from WiFi AP databases, 3G/4G tower info, and the GeoIP database — sources that are useful “if you’re inside or don’t have a good fix,” he added.

GeoFence depends on the GPS binding, as well. It lets you establish a bounding box, and then track ingress and egress events. GeoFence also tracks “dwell” status, which is determined by arriving at home and staying for 10 minutes. “It then triggers some behavior based on a timeout,” said Porter. Future plans call for a customizable dwell transition time.
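
The ingress/egress/dwell behavior described above can be sketched with a simple axis-aligned bounding box. The class and method names and the 10-minute threshold follow the description in the text, not AGL's actual GeoFence binding API.

```python
DWELL_SECS = 600  # 10-minute dwell threshold described above

class GeoFence:
    """Toy geofence: track ingress, egress, and dwell for one bounding box."""

    def __init__(self, lat_min, lat_max, lon_min, lon_max):
        self.box = (lat_min, lat_max, lon_min, lon_max)
        self.inside = False
        self.entered_at = None

    def update(self, lat, lon, now):
        """Feed a position fix; return 'ingress', 'egress', 'dwell', or None.

        For simplicity, 'dwell' fires on every fix after the timeout, rather
        than once per visit as a real implementation would likely do.
        """
        lat_min, lat_max, lon_min, lon_max = self.box
        in_box = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
        if in_box and not self.inside:
            self.inside, self.entered_at = True, now
            return "ingress"
        if not in_box and self.inside:
            self.inside, self.entered_at = False, None
            return "egress"
        if in_box and now - self.entered_at >= DWELL_SECS:
            return "dwell"
        return None
```

Feeding fixes at timestamps 0, 100, and 700 seconds from inside the box would yield `"ingress"`, `None`, and `"dwell"` in turn, with `"egress"` reported once a fix falls outside the box.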

While most of these upstream bindings are well established, there are also Work in Progress (WIP) bindings that are still in the early stages, including CAN, HomeScreen, and WindowManager bindings. Farther out, there are plans to add speech recognition and text-to-speech bindings, as well as a WWAN modem binding.

In conclusion, Porter noted: “Like any open source project, we desperately need more developers.” The Automotive Grade Linux project may seem peripheral to some developers, but it offers a nice mix of familiarity — grounded in many widely used open source projects — along with the excitement of expanding into a new and potentially game-changing computing form factor: your automobile. AGL has also demonstrated success — you can now [check out AGL in action in the 2018 Toyota Camry][14], followed in the coming months by most Toyota and Lexus vehicles sold in North America.

Watch the complete video below:

[Video][15]

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/event/elce/2017/11/inside-agl-familiar-open-source-components-ease-learning-curve

Author: [Eric Brown][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.linux.com/users/ericstephenbrown
[1]:https://www.linux.com/licenses/category/linux-foundation
[2]:https://www.youtube.com/playlist?list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
[3]:https://www.youtube.com/watch?v=kfwEmjSjAzM&index=14&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
[4]:https://www.linux.com/files/images/porter-elce-aglpng
[5]:http://events.linuxfoundation.org/events/embedded-linux-conference-europe
[6]:https://www.automotivelinux.org/
[7]:https://www.linux.com/blog/2017/8/automotive-grade-linux-moves-ucb-40-launches-virtualization-workgroup
[8]:http://schaufler-ca.com/
[9]:https://wiki.tizen.org/Security:Cynara
[10]:https://wiki.maemo.org/Bluetooth_PBAP
[11]:https://www.rtl-sdr.com/about-rtl-sdr/
[12]:http://www.catb.org/gpsd/
[13]:https://www.freedesktop.org/wiki/Software/GeoClue/
[14]:https://www.linux.com/blog/event/automotive-linux-summit/2017/6/linux-rolls-out-toyota-and-lexus-vehicles
[15]:https://youtu.be/RgI-g5h1t8I
@@ -1,65 +0,0 @@

Reflecting on the GPLv3 license for its 11th anniversary
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_vaguepatent_520x292.png?itok=_zuxUwyt)

Last year, I missed the opportunity to write about the 10th anniversary of [GPLv3][1], the third version of the GNU General Public License. GPLv3 was officially released by the Free Software Foundation (FSF) on June 29, 2007—better known in technology history as the date Apple launched the iPhone. Now, one year later, I feel some retrospection on GPLv3 is due. For me, much of what is interesting about GPLv3 goes back somewhat further than 11 years, to the public drafting process in which I was an active participant.

In 2005, following nearly a decade of enthusiastic self-immersion in free software, yet having had little open source legal experience to speak of, I was hired by Eben Moglen to join the Software Freedom Law Center as counsel. SFLC was then outside counsel to the FSF, and my role was conceived as focusing on the incipient public phase of the GPLv3 drafting process. This opportunity rescued me from a previous career turn that I had found rather dissatisfying. Free and open source software (FOSS) legal matters would come to be my new specialty, one that I found fascinating, gratifying, and intellectually rewarding. My work at SFLC, and particularly the trial by fire that was my work on GPLv3, served as my on-the-job training.

GPLv3 must be understood as the product of an earlier era of FOSS, the contours of which may be difficult for some to imagine today. By the beginning of the public drafting process in 2006, Linux and open source were no longer practically synonymous, as they might have been for casual observers several years earlier, but the connection was much closer than it is now.

Reflecting the profound impact that Linux was already having on the technology industry, everyone assumed GPL version 2 was the dominant open source licensing model. We were seeing the final shakeout of a Cambrian explosion of open source (and pseudo-open source) business models. A frothy business-fueled hype surrounded open source (for me most memorably typified by the Open Source Business Conference) that bears little resemblance to the present-day embrace of open source development by the software engineering profession. Microsoft, with its expanding patent portfolio and its competitive opposition to Linux, was commonly seen in the FOSS community as an existential threat, and the [SCO litigation][2] had created a cloud of legal risk around Linux and the GPL that had not quite dissipated.

That environment necessarily made the drafting of GPLv3 a high-stakes affair, unprecedented in free software history. Lawyers at major technology companies and top law firms scrambled for influence over the license, convinced that GPLv3 was bound to take over and thoroughly reshape open source and all its massive associated business investment.

A similar mindset existed within the technical community; it can be detected in the fears expressed in the final paragraph of the Linux kernel developers' momentous September 2006 [denunciation][3] of GPLv3. Those of us close to the FSF knew a little better, but I think we assumed the new license would be either an overwhelming success or a resounding failure—where "success" meant something approximating an upgrade of the existing GPLv2 project ecosystem to GPLv3, though perhaps without the kernel. The actual outcome was something in the middle.

I have no confidence in attempts to measure open source license adoption, which have in recent years typically been used to demonstrate a loss of competitive advantage for copyleft licensing. My own experience, which is admittedly distorted by proximity to Linux and my work at Red Hat, suggests that GPLv3 has enjoyed moderate popularity as a license choice for projects launched since 2007, though most GPLv2 projects that existed before 2007, along with their post-2007 offshoots, remained on the old license. (GPLv3's sibling licenses LGPLv3 and AGPLv3 never gained comparable popularity.) Most of the existing GPLv2 projects (with a few notable exceptions like the kernel and Busybox) were licensed as "GPLv2 or any later version." The technical community decided early on that "GPLv2 or later" was a politically neutral license choice that embraced both GPLv2 and GPLv3; this goes some way to explain why adoption of GPLv3 was somewhat gradual and limited, especially within the Linux community.

During the GPLv3 drafting process, some expressed concerns about a "balkanized" Linux ecosystem, whether because of the overhead of users having to understand two different, strong copyleft licenses or because of GPLv2/GPLv3 incompatibility. These fears turned out to be entirely unfounded. Within mainstream server and workstation Linux stacks, the two licenses have peacefully coexisted for a decade now. This is partly because such stacks are made up of separate units of strong copyleft scope (see my discussion of [related issues in the container setting][4]). As for incompatibility inside units of strong copyleft scope, here, too, the prevalence of "GPLv2 or later" was seen by the technical community as neatly resolving the theoretical problem, despite the fact that nominal license upgrading of GPLv2-or-later to GPLv3 hardly ever occurred.

I have alluded to the handwringing that some of us FOSS license geeks have brought to the topic of supposed copyleft decline. GPLv3 has taken its share of abuse from critics as far back as the beginning of the public drafting process, and some, predictably, have drawn a link between GPLv3 in particular and GPL or copyleft disfavor in general.

I have viewed it somewhat differently: Largely because of its complexity and baroqueness, GPLv3 was a lost opportunity to create a strong copyleft license that would appeal very broadly to modern individual software authors and corporate licensors. I believe individual developers today tend to prefer short, simple, easy-to-understand, minimalist licenses, the most obvious example of which is the [MIT License][5].

Some corporate decision-makers around open source license selection may naturally share that view, while others may regard some parts of GPLv3, such as the patent provisions or the anti-lockdown requirements, as too risky or incompatible with their business models. The great irony is that the characteristics of GPLv3 that fail to attract these groups are there in part because of conscious attempts to make the license appeal to these same sorts of interests.

How did GPLv3 come to be so baroque? As I have said, GPLv3 was the product of an earlier time, in which FOSS licenses were viewed as the primary instruments of project governance. (Today, we tend to associate governance with other kinds of legal or quasi-legal tools, such as structuring of nonprofit organizations, rules around project decision making, codes of conduct, and contributor agreements.)

GPLv3, in its drafting, was the high point of an optimistic view of FOSS licenses as ambitious means of private regulation. This was already true of GPLv2, but GPLv3 took things further by addressing in detail a number of new policy problems—software patents, anti-circumvention laws, device lockdown. That was bound to make the license longer and more complex than GPLv2, as the FSF and SFLC noted apologetically in the first GPLv3 [rationale document][6].

But a number of other factors at play in the drafting of GPLv3 unintentionally caused the complexity of the license to grow. Lawyers representing vendors' and commercial users' interests provided useful suggestions for improvements from a legal and commercial perspective, but these often took the form of making simply worded provisions more verbose, arguably without net increases in clarity. Responses to feedback from the technical community, typically identifying loopholes in license provisions, had a similar effect.

The GPLv3 drafters also famously got entangled in a short-term political crisis—the controversial [Microsoft/Novell deal][7] of 2006—resulting in the permanent addition of new and unusual conditions in the patent section of the license, which arguably served little purpose after 2007 other than to make license compliance harder for conscientious patent-holding vendors. Of course, some of the complexity in GPLv3 was simply the product of well-intended attempts to make compliance easier, especially for community project developers, or to codify FSF interpretive practice. Finally, one can take issue with the style of language used in GPLv3, much of which had a quality of playful parody or mockery of conventional software license legalese; a simpler, straightforward form of phrasing would in many cases have been an improvement.

The complexity of GPLv3, together with the movement toward brevity and simplicity in license drafting and unambitious license policy objectives, meant that the substantive text of GPLv3 would have little direct influence on later FOSS legal drafting. But, as I noted with surprise and [delight][8] back in 2012, MPL 2.0 adapted two parts of GPLv3: the 30-day cure and 60-day repose language from the GPLv3 termination provision, and the assurance that downstream upgrading to a later license version adds no new obligations on upstream licensors.

The GPLv3 cure language has come to have a major impact, particularly over the past year. Following the Software Freedom Conservancy's promulgation, with the FSF's support, of the [Principles of Community-Oriented GPL Enforcement][9], which calls for extending GPLv3 cure opportunities to GPLv2 violations, the Linux Foundation Technical Advisory Board published a [statement][10], endorsed by over a hundred Linux kernel developers, which incorporates verbatim the cure language of GPLv3. This in turn was followed by a Red Hat-led series of [corporate commitments][11] to extend the GPLv3 cure provisions to GPLv2 and LGPLv2.x noncompliance, a campaign to get individual open source developers to extend the same commitment, and an announcement by Red Hat that henceforth GPLv2 and LGPLv2.x projects it leads will use the commitment language directly in project repositories. I discussed these developments in a recent [blog post][12].

One lasting contribution of GPLv3 concerns changed expectations for how revisions of widely used FOSS licenses are done. It is no longer acceptable for such licenses to be revised entirely in private, without opportunity for comment from the community and without efforts to consult key stakeholders. The drafting of MPL 2.0 and, more recently, EPL 2.0 reflects this new norm.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/gplv3-anniversary

Author: [Richard Fontana][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/fontana
[1]:https://www.gnu.org/licenses/gpl-3.0.en.html
[2]:https://en.wikipedia.org/wiki/SCO%E2%80%93Linux_disputes
[3]:https://lwn.net/Articles/200422/
[4]:https://opensource.com/article/18/1/containers-gpl-and-copyleft
[5]:https://opensource.org/licenses/MIT
[6]:http://gplv3.fsf.org/gpl-rationale-2006-01-16.html
[7]:https://en.wikipedia.org/wiki/Novell#Agreement_with_Microsoft
[8]:https://opensource.com/law/12/1/the-new-mpl
[9]:https://sfconservancy.org/copyleft-compliance/principles.html
[10]:https://www.kernel.org/doc/html/v4.16/process/kernel-enforcement-statement.html
[11]:https://www.redhat.com/en/about/press-releases/technology-industry-leaders-join-forces-increase-predictability-open-source-licensing
[12]:https://www.redhat.com/en/blog/gpl-cooperation-commitment-and-red-hat-projects?source=author&term=26851
@@ -1,87 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco goes all in on WiFi 6)
[#]: via: (https://www.networkworld.com/article/3391919/cisco-goes-all-in-on-wifi-6.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco goes all in on WiFi 6
======
Cisco rolls out Catalyst and Meraki WiFi 6-based access points, Catalyst 9000 switch
![undefined / Getty Images][1]

Cisco has taken the wraps off a family of WiFi 6 access points, roaming technology, and developer-community support, all aimed at making wireless a solid enterprise equal of the wired world.

“‘Best-effort’ wireless for enterprise customers doesn’t cut it any more. There’s been a change in customer expectations that there will be an uninterrupted, unplugged experience,” said Scott Harrell, senior vice president and general manager of enterprise networking at Cisco. “It is now a wireless-first world.”

**More about 802.11ax (Wi-Fi 6)**

* [Why 802.11ax is the next big thing in wireless][2]
* [FAQ: 802.11ax Wi-Fi][3]
* [Wi-Fi 6 (802.11ax) is coming to a router near you][4]
* [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][5]
* [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][6]

Bringing a wireless-first enterprise world together is one of the drivers behind a new family of WiFi 6-based access points (APs) for Cisco’s Catalyst and Meraki portfolios. WiFi 6 (802.11ax) is designed for high-density public or private environments, but it will also be beneficial in internet of things (IoT) deployments and in offices that use bandwidth-hogging applications like videoconferencing.

The Cisco Catalyst 9100 family and Meraki [MR 45/55][7] WiFi 6 access points are built on Cisco silicon and communicate via pre-802.11ax protocols. The silicon in these access points now acts as a rich sensor, providing IT with real-time insight into what is going on in the wireless network, which enables faster reactions to problems and security concerns, Harrell said.

Aside from WiFi 6, the boxes include support for visibility into and communications with the Zigbee, BLE, and Thread protocols. The Catalyst APs support uplink speeds of 2.5 Gbps, in addition to 100 Mbps and 1 Gbps. All speeds are supported on Category 5e cabling, an industry first, as well as on 10GBASE-T (IEEE 802.3bz) cabling, Cisco said.

Wireless traffic aggregates onto wired networks, so the wired network must also evolve. Technology like multi-gigabit Ethernet must be driven into the access layer, which in turn drives higher bandwidth needs at the aggregation and core layers, [Harrell said][8].

Handling this influx of wireless traffic was part of the reason Cisco also upgraded its iconic Catalyst 6000 with the [Catalyst 9600 this week][9]. The 9600 brings with it support for Cat 6000 features such as MPLS, virtual switching, and IPv6, while adding or bolstering support for wireless networks as well as intent-based networking (IBN) and security segmentation. The 9600 helps fill out the company’s revamped lineup, which includes the 9200 family of access switches, the 9500 aggregation switch, and the 9800 wireless controller.

“WiFi doesn’t exist in a vacuum – how it connects to the enterprise and the data center or the Internet is key, and in Cisco’s case that key is now the 9600, which has been built to handle the increased traffic,” said Lee Doyle, principal analyst with Doyle Research.

The new 9600 ties in with the recently [released Catalyst 9800][10], which features 40Gbps to 100Gbps performance depending on the model, hot-patching to simplify updates and eliminate update-related downtime, Encrypted Traffic Analytics (ETA), policy-based micro- and macro-segmentation, and Trustworthy solutions to detect malware on wired or wireless connected devices, Cisco said.

All Catalyst 9000 family members support other Cisco products such as [DNA Center][11], which controls automation capabilities, assurance settings, fabric provisioning, and policy-based segmentation for enterprise wired and wireless networks.

The new APs are pre-standard, but other vendors, including Aruba and NetGear, are also selling pre-standard 802.11ax devices. Cisco getting into the market solidifies the validity of this strategy, said Brandon Butler, a senior research analyst with IDC.

Many experts [expect the standard][12] to be ratified late this year.

“We expect to see volume shipments of WiFi 6 products by early next year and it being the de facto WiFi standard by 2022.”

On top of the APs and the 9600 switch, Cisco extended its software development community, [DevNet][13], to offer WiFi 6 learning labs, sandboxes, and developer resources.

The Cisco Catalyst and Meraki access platforms are open and programmable all the way down to the chipset level, allowing applications to take advantage of network programmability, Cisco said.

Cisco also said it has expanded the vendor list for its ongoing [OpenRoaming][14] project to include Apple, Samsung, Boingo, Presidio, and Intel. OpenRoaming, which is in beta, promises to let users move seamlessly between wireless networks and LTE without interruption.

Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391919/cisco-goes-all-in-on-wifi-6.html#tk.rss_all

Author: [Michael Cooney][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/cisco_catalyst_wifi_coffee-cup_coffee-beans_-100794990-large.jpg
[2]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[3]: https://www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html
[4]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[5]: https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html
[6]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[7]: https://meraki.cisco.com/lib/pdf/meraki_datasheet_MR55.pdf
[8]: https://blogs.cisco.com/news/unplugged-and-uninterrupted
[9]: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html
[10]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html
[11]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
[12]: https://www.networkworld.com/article/3336263/is-jumping-ahead-to-wi-fi-6-the-right-move.html
[13]: https://developer.cisco.com/wireless/?utm_campaign=colaunch-wireless19&utm_source=pressrelease&utm_medium=ciscopress-wireless-main
[14]: https://www.cisco.com/c/en/us/solutions/enterprise-networks/802-11ax-solution/openroaming.html
[15]: https://www.facebook.com/NetworkWorld/
[16]: https://www.linkedin.com/company/network-world
@@ -1,162 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (HPE’s CEO lays out his technology vision)
|
||||
[#]: via: (https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html)
|
||||
[#]: author: (Eric Knorr )
|
||||
|
||||
HPE’s CEO lays out his technology vision
|
||||
======
|
||||
In an exclusive interview, HPE CEO Antonio Neri unpacks his portfolio of technology initiatives, from edge computing to tomorrow’s memory-driven architecture.
|
||||
![HPE][1]
|
||||
|
||||
Like Microsoft's Satya Nadella, HPE CEO Antonio Neri is a technologist with a long history of leading initiatives in his company. Meg Whitman, his former boss at HPE, showed her appreciation of Neri’s acumen by promoting him to HPE Executive Vice President in 2015 – and gave him the green light to acquire [Aruba][2], [SimpliVity][3], [Nimble Storage][4], and [Plexxi][5], all of which added key items to HPE’s portfolio.
|
||||
|
||||
Neri succeeded Whitman as CEO just 16 months ago. In a recent interview with Network World, Neri’s engineering background was on full display as he explained HPE’s technology roadmap. First and foremost, he sees a huge opportunity in [edge computing][6], into which HPE is investing $4 billion over four years to further develop edge “connectivity, security, and obviously cloud and analytics.”
|
||||
|
||||
**More about edge networking**
|
||||
|
||||
* [How edge networking and IoT will reshape data centers][7]
|
||||
* [Edge computing best practices][8]
|
||||
* [How edge computing can help secure the IoT][9]
|
||||
|
||||
|
||||
|
||||
Although his company abandoned its public cloud efforts in 2015, Neri is also bullish on the self-service “cloud experience,” which he asserts HPE is already implementing on-prem today in a software-defined, consumption-driven model. More fundamentally, he believes we are on the brink of a memory-driven computing revolution, where storage and memory become one and, depending on the use case, various compute engines are brought to bear on zettabytes of data.

This interview, conducted by Network World Editor-in-Chief Eric Knorr and edited for length and clarity, digs into Neri’s technology vision. [A companion interview on CIO][10] centers on Neri’s views of innovation, management, and company culture.

**Eric Knorr:** Your biggest and highest profile investment so far has been in edge computing. My understanding of edge computing is that we’re really talking about mini-data centers, defined by IDC as less than 100 square feet in size. What’s the need for a $4 billion investment in that?

**Antonio Neri:** It’s twofold. We focus first on connectivity. Think about Aruba as a platform company, a cloud-enabled company. Now we offer branch solutions and edge data center solutions that include [wireless][11], LAN, [WAN][12] connectivity and soon [5G][13]. We give you a control plane so that the connectivity experience is managed consistently: all the policy management, the provisioning, and the security aspects of it.

**Knorr:** Is 5G a big focus?

**Neri:** It’s a big focus for us. What customers are telling us is that it’s hard to get 5G inside the building. How do you hand off between 5G and Wi-Fi and give them the same experience? Because the problem is that we have LAN, wireless, and WAN already fully integrated into the control plane, but 5G sits over here. If you are an enterprise, you have to manage these two pipes independently.

With the new spectrum, though, they are kind of comingling anyway. [Customers ask] why don’t you give me [a unified] experience on top of that, with all this policy management and cloud-enablement, so I can provision the right connectivity for the right use case? A sensor can use a lower radio access or [Bluetooth][15] or other type of connectivity because you don’t need persistent connectivity and you don’t have the power to do it.

In some cases, you just put a SIM on it, and you have 5G, but in another one it’s just wireless connectivity. Wi-Fi connectivity is significantly lower cost than 5G. The use cases will dictate what type of connectivity you need, but the reality is they all want one experience. And we can do that because we have a great platform and a great partnership with MSPs, telcos, and providers.

**Knorr:** So it sounds like much of your investment is going into that integration.

**Neri:** The other part is how we provide the ability to provision the right cloud computing at the edge for the right use cases. Think about, for example, a manufacturing floor. We can converge the OT and IT worlds through a converged infrastructure aspect that digitizes the analog process into a digital process. We bring the cloud compute in there, which is fully virtualized and containerized, we integrate Wi-Fi connectivity or LAN connectivity, and we eliminate all these analog processes that are multi-failure touchpoints because you have multiple things that have to come together.

That’s a great example of a cloud at the edge. And maybe that small cloud is connected to a big cloud which could be in the large data center, which the customer owns – or it can be one of the largest public cloud providers.

**Knorr:** It’s difficult to talk about the software-defined data center and private cloud without talking about [VMware][16]. Where do your software-defined solutions leave off and where does VMware begin?

**Neri:** Where we stop is everything below the hypervisor, including the software-defined storage and things like SimpliVity. That has been the advantage we’ve had with [HPE OneView][17], so we can provision and manage the infrastructure life cycle and software-defined aspects at the infrastructure level. And let’s not forget security, because we’ve integrated [silicon root of trust][18] into our systems, which is a good advantage for us in the government space.

Then above that we continue to develop capabilities. Customers want choice. That’s why [the partnership with Nutanix][19] was important. We offer an alternative to vSphere and vCloud Foundation with Nutanix Prism and Acropolis.

**Knorr:** VMware has become the default for the private cloud, though.

**Neri:** Obviously, VMware owns 60 percent of the on-prem virtualized environment, but more and more, containers are becoming the way to go in a cloud-native approach. For us, we own the full container stack, because we base our solution on Kubernetes. We deploy that. That’s why the partnership with Nutanix is important. With Nutanix, we offer KVM and the Prism stack and then we’re fully integrated with HPE OneView for the rest of the infrastructure.

**Knorr:** You also offer GKE [Google [Kubernetes][20] Engine] on-prem.

**Neri:** Correct. We’re working with Google on the next version of that.

**Knorr:** How long do you think it will be before you start seeing Kubernetes and containers on bare metal?

**Neri:** It’s an interesting question. Many customers tell us it’s like going back to the future. It’s like we’re paying this tax on the virtualization layer.

**Knorr:** Exactly.

**Neri:** I can go bare metal and containers and be way more efficient. It is a little bit back to the future. But it’s a different future.

**Knorr:** And it makes the promise of [hybrid cloud][21] a little more real. I know HPE has been very bullish on hybrid.

**Neri:** We have been the one to say the world would be hybrid.

**Knorr:** But today, how hybrid is hybrid really? I mean, you have workloads in the public cloud, you have workloads in a [private cloud][22]. Can you really rope it all together into hybrid?

**Neri:** I think you have to have portability eventually.

**Knorr:** Eventually. It’s not really true now, though.

**Neri:** No, not true now. If you look at it from the software brokering perspective, that makes hybrid very small. We know this eventually has to be all connected, but it’s not there yet. More and more of these workloads have to go back and forth.

If you ask me what the CIO role of the future will look like, it would be a service provider. I wake up in the morning, have a screen that says – oh, you know what? Today it’s cheaper to run that app here. I just slice it there and then it just moves. Whatever attributes on the data I want to manage and so forth – oh, today I have capacity here and by the way, why are you not using it? Slide it back here. That’s the hybrid world.

Many people, when they started with the cloud, thought, “I’ll just virtualize everything,” but that’s not the cloud. You’re [virtualizing][23], but you have to make it self-service. Obviously, cloud-native applications have developed that are different today. That’s why containers are definitely a much more efficient way, and that’s why I agree that the bare-metal piece of this is coming back.

**Knorr:** Do you worry about public cloud incursions into the [data center][24]?

**Neri:** It’s happening. Of course I’m worried. But what at least gives me comfort is twofold. One is that the customer wants choice. They don’t want to be locked in. Service is important. It’s one thing to say: Here’s the system. The other is: Who’s going to maintain it for me? Who is going to run it for me? And even though you have all the automation tools in the world, somebody has to watch this thing. Our job is to bring the public-cloud experience on prem, so that the customer has that choice.

**Knorr:** Part of that is economics.

**Neri:** When you look at economics it’s no longer just the cost of compute anymore. What we see more and more is the cost of the data bandwidth back and forth. That’s why the first question a customer asks is: Where should I put my data? And that dictates a lot of things, because today the data transfer bill is way higher than the cost of renting a VM.

The other thing is that when you go on the public cloud you can spin up a VM, but the problem is if you don’t shut it off, the bill keeps going. We brought, in the context of [composability][25], the ability to shut it off automatically. That’s why composability is important, because we can run, first of all, multi-workloads in the same infrastructure – whether it’s bare metal, virtualized or containerized. It’s called composable because the software layers of intelligence compose the right solutions from compute, storage, fabric and memory to that workload. When it doesn’t need it, it gives it back.

**Knorr:** Is there any opportunity left at the hardware level to innovate?

**Neri:** That’s why we think about memory-driven computing. Today we have a very CPU-centric approach. This is a limiting factor, and the reality is, if you believe data is the core of the architecture going forward, then the CPU can’t be the core of the architecture anymore.

You have a bunch of inefficiency by moving data back and forth across the system, which also creates energy waste and so forth. What we are doing is basically rearchitecting this for the first time in 70 years. We take memory and storage and collapse the two into one, so this becomes one central pool, which is nonvolatile and becomes the core. And then we bring the right computing capability to the data.

In an AI use case, you don’t move the data. You bring accelerators or GPUs to the data. For general purpose, you may use an X86, and maybe in video transcoding, you use an ARM-based architecture. The magic is this: You can do this on zettabytes of data and the benefit is there is no waste, very little power to keep it alive, and it’s persistent.

We call this the Generation Z fabric, which is based on a data fabric and silicon photonics. Now we go from copper, which is generating a lot of waste and a lot of heat and energy, to silicon photonics. So we not only scale this to zettabytes, we can do massive amounts of computation by bringing the right compute at the speed that’s needed to the data – and we solve a cost and scale problem too, because copper today costs a significant amount of money, and gold-plated connectors are hundreds of dollars.

We’re going to actually implement this capability in silicon photonics in our current architectures by the end of the year. In Synergy, for example, which is a composable blade system, at the back of the rack you can swap from Ethernet to silicon photonics. It was designed that way. We already prototyped this in a simple 2U chassis with 160 TB of memory and 2000 cores. We were able to process a billion-record database with 55 million combinations of algorithms in less than a minute.

**Knorr:** So you’re not just focusing on the edge, but the core, too.

**Neri:** As you go down from the cloud to the edge, that architecture actually scales to the smallest things. You can do it on a massive scale or you can do it on a small scale. We will deploy these technologies in our systems architectures now, but the full benefit comes once the whole ecosystem is developed, because we also need an ISV ecosystem that can code applications for this new world; otherwise you’re not taking advantage of it. Also, the current Linux kernel can only handle so much memory, so you have to rewrite the kernel. We are working with two universities to do that.

The hardware will continue to evolve and develop, but there still is a lot of innovation that has to happen. What’s holding us back, honestly, is the software.

**Knorr:** And that’s where a lot of your investment is going?

**Neri:** Correct. Exactly right. Systems software, not application software. It’s the system software that makes this infrastructure solution-oriented, workload-optimized, autonomous and efficient.

Join the Network World communities on [Facebook][26] and [LinkedIn][27] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html

作者:[Eric Knorr][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/antonio-neri_hpe_new-100796112-large.jpg
[2]: https://www.networkworld.com/article/2891130/aruba-networks-is-different-than-hps-failed-wireless-acquisitions.html
[3]: https://www.networkworld.com/article/3158784/hpe-buying-simplivity-for-650-million-to-boost-hyperconvergence.html
[4]: https://www.networkworld.com/article/3177376/hpe-to-pay-1-billion-for-nimble-storage-after-cutting-emc-ties.html
[5]: https://www.networkworld.com/article/3273113/hpe-snaps-up-hyperconverged-network-hcn-vendor-plexxi.html
[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[7]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[8]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[9]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[10]: https://www.cio.com/article/3394598/hpe-ceo-antonio-neri-rearchitects-for-the-future.html
[11]: https://www.networkworld.com/article/3238664/80211-wi-fi-standards-and-speeds-explained.html
[12]: https://www.networkworld.com/article/3248989/what-is-a-wide-area-network-a-definition-examples-and-where-wans-are-headed.html
[13]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[14]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[15]: https://www.networkworld.com/article/3235124/internet-of-things-definitions-a-handy-guide-to-essential-iot-terms.html
[16]: https://www.networkworld.com/article/3340259/vmware-s-transformation-takes-hold.html
[17]: https://www.networkworld.com/article/2174203/hp-expands-oneview-into-vmware-environs.html
[18]: https://www.networkworld.com/article/3199826/hpe-highlights-innovation-in-software-defined-it-security-at-discover.html
[19]: https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html
[20]: https://www.infoworld.com/article/3268073/what-is-kubernetes-container-orchestration-explained.html
[21]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[22]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[23]: https://www.networkworld.com/article/3285906/whats-the-future-of-server-virtualization.html
[24]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[25]: https://www.networkworld.com/article/3266106/what-is-composable-infrastructure.html
[26]: https://www.facebook.com/NetworkWorld/
[27]: https://www.linkedin.com/company/network-world

@ -1,77 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IBM overhauls mainframe-software pricing, adds hybrid, private-cloud services)
[#]: via: (https://www.networkworld.com/article/3395776/ibm-overhauls-mainframe-software-pricing-adds-hybrid-private-cloud-services.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

IBM overhauls mainframe-software pricing, adds hybrid, private-cloud services
======
IBM brings cloud consumption model to the mainframe, adds Docker container extensions
![Thinkstock][1]

IBM continues to adopt new tools and practices for its mainframe customers to keep the Big Iron relevant in a cloud world.

First of all, the company switched up its 20-year mainframe software pricing scheme to make it more palatable to hybrid and multicloud users who might be thinking of moving workloads off the mainframe and into the cloud.

**[ Check out [What is hybrid cloud computing][2] and learn [what you need to know about multi-cloud][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**

Specifically, IBM rolled out Tailored Fit Pricing for the IBM Z mainframe, which offers two consumption-based pricing models that can help customers cope with ever-changing workload – and hence software – costs.

Tailored Fit Pricing removes the need for complex and restrictive capping, which typically weakens responsiveness and can impact service level availability, IBM said. IBM’s standard monthly mainframe licensing model calculates costs as a “rolling four-hour average” (R4HA), which determines cost based on a customer’s peak usage during the month. Customers would often cap usage to keep costs down, experts said.
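
The R4HA mechanics are easy to see in a few lines of Go. This is an illustrative sketch of the billing arithmetic only, not IBM’s actual metering code, and the sample MSU figures are invented:

```go
package main

import "fmt"

// peakR4HA returns the peak rolling four-hour average of hourly
// MSU (million service units) samples. Under the standard monthly
// licensing model, this peak is what drives the software bill.
func peakR4HA(hourlyMSU []float64) float64 {
	const window = 4
	peak := 0.0
	for i := 0; i+window <= len(hourlyMSU); i++ {
		sum := 0.0
		for _, v := range hourlyMSU[i : i+window] {
			sum += v
		}
		if avg := sum / window; avg > peak {
			peak = avg
		}
	}
	return peak
}

func main() {
	// A short workload burst drives the billable peak even if
	// average utilization for the month is far lower.
	usage := []float64{100, 120, 110, 900, 950, 130, 100, 90}
	fmt.Printf("billable peak R4HA: %.1f MSU\n", peakR4HA(usage))
}
```

This is why customers capped usage: trimming a single four-hour spike lowers the whole month’s bill, which is exactly the responsiveness trade-off Tailored Fit Pricing is meant to remove.
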

Systems can now be configured to support optimal response times and service level agreements, rather than artificially slowing down workloads to manage software licensing costs, IBM stated.

Predicting demand for IT services can be a major challenge, and in the era of hybrid and multicloud, everything is connected and workload patterns constantly change, wrote IBM’s Ross Mauri, general manager of IBM Z, in a [blog][5] about the new pricing and services. “In this environment, managing demand for IT services can be a major challenge. As more customers shift to an enterprise IT model that incorporates on-premises, private cloud and public cloud, we’ve developed a simple cloud pricing model to drive the transformation forward.”

[Tailored Fit Pricing][6] for IBM Z comes in two flavors, the Enterprise Consumption Solution and the Enterprise Capacity Solution.

IBM said the Enterprise Consumption model is a tailored usage-based pricing model in which customers pay only for what they use, removing the need for complex and restrictive capping.

The Enterprise Capacity model lets customers mix and match workloads to help maximize use of the full capacity of the platform. Charges are referenced to the overall size of the physical environment and are calculated based on the estimated mix of workloads running, while providing the flexibility to vary actual usage across workloads, IBM said.

The software pricing changes should be a welcome benefit to customers, experts said.

“By making access to Z mainframes more flexible and ‘cloud-like,’ IBM is making it less likely that customers will consider shifting Z workloads to other systems and environments. As cloud providers become increasingly able to support mission critical applications, that’s a big deal,” wrote Charles King, president and principal analyst for Pund-IT, in a [blog][7] about the IBM changes.

“A notable point about both models is that discounted growth pricing is offered on all workloads – whether they be 40-year old Assembler programs or 4-day old JavaScript apps. This is in contrast to previous models which primarily rewarded only brand-new applications with growth pricing. By thinking outside the Big Iron box, the company has substantially eased the pain for its largest clients’ biggest mainframe-related headaches,” King wrote.

IBM’s Tailored Fit Pricing supports an increasing number of enterprises that want to continue to grow and build new services on top of this mission-critical platform, wrote [John McKenny][8], vice president of strategy for ZSolutions Optimization at BMC Software. “In not-yet-released results from the 2019 BMC State of the Mainframe Survey, 62% of the survey respondents reported that they are planning to expand MIPS/MSU consumption and are growing their mainframe workloads. For customers with no current plans for growth, the affordability and cost-competitiveness of the new pricing model will re-ignite interest in also using this platform as an integral part of their hybrid cloud strategies.”

In addition to the pricing, IBM announced some new services that bring the mainframe closer to cloud workloads.

First, IBM rolled out z/OS Container Extensions (zCX), which makes it possible to run Linux on Z applications that are packaged as Docker Container images on z/OS. Application developers can develop and data centers can operate popular open source packages, Linux applications, IBM software, and third-party software together with z/OS applications and data, IBM said. zCX will let customers use the latest open source tools, popular NoSQL databases, analytics frameworks, application servers, and so on within the z/OS environment.

“With z/OS Container Extensions, customers will be able to access the most recent development tools and processes available in Linux on the Z ecosystem, giving developers the flexibility to build new, cloud-native containerized apps and deploy them on z/OS without requiring Linux or a Linux partition,” IBM’s Mauri stated.

Big Blue also rolled out z/OS Cloud Broker which will let customers access and deploy z/OS resources and services on [IBM Cloud Private][9]. [IBM Cloud Private][10] is the company’s Kubernetes-based Platform as a Service (PaaS) environment for developing and managing containerized applications. IBM said z/OS Cloud Broker is designed to help cloud application developers more easily provision and deprovision apps in z/OS environments.

Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3395776/ibm-overhauls-mainframe-software-pricing-adds-hybrid-private-cloud-services.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/08/thinkstockphotos-520137237-100610459-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.ibm.com/blogs/systems/ibm-z-defines-the-future-of-hybrid-cloud/
[6]: https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS219-014&appname=USN
[7]: https://www.pund-it.com/blog/ibm-reinvents-the-z-mainframe-again/
[8]: https://www.bmc.com/blogs/bmc-supports-ibm-tailored-fit-pricing-ibm-z/
[9]: https://www.ibm.com/marketplace/cloud-private-on-z-and-linuxone
[10]: https://www.networkworld.com/article/3340043/ibm-marries-on-premises-private-and-public-cloud-data.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

@ -1,68 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE to buy Cray, offer HPC as a service)
[#]: via: (https://www.networkworld.com/article/3396220/hpe-to-buy-cray-offer-hpc-as-a-service.html)
[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)

HPE to buy Cray, offer HPC as a service
======
High-performance computing offerings from HPE plus Cray could enable things like AI, ML, high-speed financial trading, and the creation of digital twins for entire enterprise networks.
![Cray Inc.][1]

HPE has agreed to buy supercomputer-maker Cray for $1.3 billion, a deal that the companies say will bring their corporate customers high-performance computing as a service to help with analytics needed for artificial intelligence and machine learning, as well as products supporting high-performance storage, compute and software.

In addition to bringing HPC capabilities that can blend with and expand HPE’s current products, Cray brings with it customers in government and academia that might be interested in HPE’s existing portfolio as well.

**[ Now read: [Who's developing quantum computers][2] ]**

The companies say they expect to close the cash deal by the end of next April.

The HPC-as-a-service would be offered through [HPE GreenLake][3], the company’s public-, private-, hybrid-cloud service. Such a service could address periodic enterprise need for fast computing that might otherwise be too expensive, says Tim Zimmerman, an analyst with Gartner.

Businesses could use the service, for example, to create [digital twins][4] of their entire networks and use them to test new code to see how it will impact the network before deploying it live, Zimmerman says.

Cray has HPC technology that HPE Labs might be exploring on its own, but that can be brought to market in a much quicker timeframe.

Overall, HPE says, buying Cray gives it technologies needed for massively data-intensive workloads such as AI and ML, which are used for engineering services, transaction-based trading by financial firms, pharmaceutical research and academic studies into weather and genomes, for instance.

As HPE puts it, Cray supercomputing platforms “have the ability to handle massive data sets, converged modelling, simulation, AI and analytics workloads.”

Cray is working on [what it says will be the world’s fastest supercomputer][5] when it’s finished in 2021, cranking out 1.5 exaflops. The current fastest supercomputer runs at 143.5 petaflops. [Click [here][6] to see the current top 10 fastest supercomputers.]
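
For scale, the jump Cray is chasing is roughly an order of magnitude, as a quick back-of-envelope calculation shows (figures taken from the paragraph above):

```go
package main

import "fmt"

func main() {
	const (
		crayTargetPFLOPS = 1500.0 // 1.5 exaflops, expressed in petaflops
		currentTopPFLOPS = 143.5  // today's fastest supercomputer
	)
	// Roughly a 10x performance jump over the current leader.
	fmt.Printf("speedup: %.1fx\n", crayTargetPFLOPS/currentTopPFLOPS)
}
```
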

In general, HPE says it hopes to create a comprehensive line of products to support HPC infrastructure including “compute, high-performance storage, system interconnects, software and services.”

Together, the talent in the two companies and their combined technologies should be able to increase innovation, HPE says.

Earlier this month, HPE CEO Antonio Neri said in [an interview with _Network World_][7] that the company will be investing $4 billion over four years in a range of technology to boost “connectivity, security, and obviously cloud and analytics.” In laying out the company’s roadmap he made no specific mention of HPC.

HPE net revenues last fiscal year were $30.9 billion. Cray’s total revenue was $456 million, with a gross profit of $130 million.

Under the deal, HPE will pay $35 per share for Cray stock.

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3396220/hpe-to-buy-cray-offer-hpc-as-a-service.html

作者:[Tim Greene][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Tim-Greene/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/the_cray_xc30_piz_daint_system_at_the_swiss_national_supercomputing_centre_via_cray_inc_3x2_978x652-100762113-large.jpg
[2]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
[3]: https://www.networkworld.com/article/3280996/hpe-adds-greenlake-hybrid-cloud-to-enterprise-service-offerings.html
[4]: https://www.networkworld.com/article/3280225/what-is-digital-twin-technology-and-why-it-matters.html
[5]: https://www.networkworld.com/article/3373539/doe-plans-worlds-fastest-supercomputer.html
[6]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[7]: https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

@ -1,5 +1,5 @@
[#]: collector: "lujun9972"
[#]: translator: "acyanbird "
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

@ -1,623 +0,0 @@
A Word from The Beegoist
======
I like the [Go programming language][22]. I sought to use Go to write web applications. To this end, I examined two of the “full stack” web frameworks available to Go developers (aka “Gophers”): [Beego][23] and [Revel][24].

The reason I looked for full stack was because of my prior experience with [web2py][25], a Python-based framework with extraordinary capability that was also [deliciously easy to get started and be highly productive in][26]. (I also cut my teeth on Smalltalk-based [Seaside][27], which has the same qualities.) In my opinion, full stack is the only way to go because developers should not waste time and effort on the minutiae of tool configuration and setup. The focus should be almost entirely on writing your application.

Between Beego and Revel, I chose the former. It seemed to be more mature and better documented. It also had a built-in [ORM][28].

To be sure, Beego isn’t as easy and productive as web2py, but I believe in Go, so it is worth the effort to give Beego my best shot. To get started with Beego, I needed a project, a useful exercise that covered all the bases, such as database management, CSS styling, email capability, form validation, etc., and also provided a useful end product.

The project I selected was a user account management component for web applications. All of my previous applications required user registration/login, and Beego did not appear to have anything like that available.

Now that I’ve completed the project, I believe it would be an excellent foundation for a Beego tutorial. I do not pretend that the code is optimal, nor do I pretend that it is bug-free, but if there are any bugs, it would be a good exercise for a novice to resolve them.

The inspiration for this tutorial arose from my failure to find good, thorough tutorials when I first started learning Beego. There is one 2-part tutorial that is often mentioned, but I found Part 2 sorely lacking. Throwing source code at you for you to figure out on your own is no way to teach. Thus, I wanted to offer my take on a tutorial. Only history will determine whether it was successful.

So, without further ado, let’s begin. The word is “Go!”

### Basic Assumptions

You have some familiarity with the Go language. I highly recommend you follow this [Go tutorial][1].

You’ve installed [Go][2] and [Beego][3] on your computer. There are plenty of good online resources to help you here (for [example][4]). It’s really quite easy.

You have basic knowledge of CSS, HTML, and databases. You have at least one database package installed on your computer such as [MySQL][5] (Community Edition) or [SQLite][6]. I have SQLite because it’s much easier to use.

You have some experience writing software; basic skills are assumed. If you studied computer programming in school, then you’re off to a good start.

You will be using your favourite programming editor in conjunction with the command line. I use [LiteIDE][7] (on the Mac), but I can suggest alternatives such as [TextMate][8] for the Mac, [Notepad++][9] for Windows, and [vim][10] for Linux.

These basic assumptions define the target audience for the tutorial. If you’re a programming veteran, though, you’ll breeze through it and hopefully gain much useful knowledge, as well.

### Creating the Project

First, we must create a Beego project. We’ll call it ‘[ACME][11]’. From the command line, change directory (cd) to $GOPATH/src and enter:

```
$ bee new acme
```

The following directory structure will be created:

```
acme
....conf
....controllers
....models
....routers
....static
........css
........img
........js
....tests
....views
```

Note that Beego is an MVC (Model/View/Controller) framework, which means your application is separated into three general sections. The Model is the internal database structure of your application. The View governs how your application looks on screen; in our case, this includes the HTML and CSS code. And the Controller is where your business logic and user interactions live.

You can immediately compile and run your application by changing directory (cd acme) and typing:

```
$ bee run
```

In your browser, go to <http://localhost:8080> to see the running application. It doesn’t do anything fancy right now; it simply greets you. But upon this foundation, we shall raise an impressive edifice.

### The Source Code

To follow along, you may [download the source code][12] for this tutorial. cd to $GOPATH/src and unzip the file. (When you download the source, GitHub names the file ‘acme-master’; you must rename it to ‘acme’.)

### Program Design

The user account management component provides the following functionality:

1. User registration (account creation)
2. Account verification (via email)
3. Login (create a session)
4. Logout (delete the session)
5. User profile (change name, email, or password)
6. Remove user account

The essence of a web application is the mapping of URLs (webpages) to the server functions that process the HTTP requests. This mapping generates the work flow in the application. In Beego, the mapping is defined in the ‘router’. Here’s the code for our router (see router.go in the ‘routers’ directory):

```
beego.Router("/home", &controllers.MainController{})
beego.Router("/user/login/:back", &controllers.MainController{}, "get,post:Login")
beego.Router("/user/logout", &controllers.MainController{}, "get:Logout")
beego.Router("/user/register", &controllers.MainController{}, "get,post:Register")
beego.Router("/user/profile", &controllers.MainController{}, "get,post:Profile")
beego.Router("/user/verify/:uuid({[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}})", &controllers.MainController{}, "get:Verify")
beego.Router("/user/remove", &controllers.MainController{}, "get,post:Remove")
beego.Router("/notice", &controllers.MainController{}, "get:Notice")
```

For example, in the line for ‘login’, “get,post:Login” says that both the GET and POST operations are handled by the ‘Login’ function. The ‘:back’ is a request parameter; in this case, it tells us what page to return to after a successful login.

In the line for ‘verify’, the ‘:uuid’ is a request parameter that must match the [regular expression][13] for a Version 4 UUID. The GET operation is handled by the ‘Verify’ function.

More on this when we talk about controllers.

Note that I’ve added ‘/home’ to the first line in the router (it was originally ‘/’). This makes it convenient to go to the home page, which we often do in our application.

### Model

The database model for a user account is represented by the following struct:

```
package models

import (
    "github.com/astaxie/beego/orm"
    "time"
)

type AuthUser struct {
    Id       int
    First    string
    Last     string
    Email    string `orm:"unique"`
    Password string
    Reg_key  string
    Reg_date time.Time `orm:"auto_now_add;type(datetime)"`
}

func init() {
    orm.RegisterModel(new(AuthUser))
}
```

Place this in models.go in the ‘models’ directory. Ignore the init() for the time being.

‘Id’ is the primary key, which is auto-incremented in the database. We also have ‘First’ and ‘Last’ names. ‘Password’ contains the hexadecimal representation of the [PBKDF2 hash][14] of the plaintext password.

‘Reg_key’ contains the [UUID][15] string that is used for account verification (via email). ‘Reg_date’ is the timestamp indicating the time of registration.

The funny-looking string literals attached to ‘Email’ and ‘Reg_date’ tell the database the special requirements of these fields: ‘Email’ must be a unique key, and ‘Reg_date’ is automatically assigned the date and time of database insertion.

By the way, don’t be scared of the PBKDF2 and UUID references. PBKDF2 is simply a way to store a user’s password securely in the database. A UUID is a unique identifier that is used here to confirm the identity of the user during email verification.
### View

For our CSS template design, I’ve chosen the [Stardust][16] theme (pictured at the start of this article). We will use its index.html as the basis for the view layout.

Place the appropriate files from the Stardust theme into the ‘css’ and ‘img’ directories of the ‘static’ directory. The link statement in the header of index.html must be amended to:

```
<link href="/static/css/default.css" rel="stylesheet" type="text/css" />
```

And all references to image gifs and jpegs in index.html and default.css must point to ‘/static/img/’.

The view layout contains a header section, a footer section, a sidebar section, and a central section where most of the action takes place. We will use Go’s templating facility, which replaces embedded codes, signified by ‘{{‘ and ‘}}’, with actual HTML. Here’s our basic-layout.tpl (.tpl for ‘template’):

```
{{.Header}}
{{.LayoutContent}}
{{.Sidebar}}
{{.Footer}}
```

Since every webpage in our application adheres to this basic layout, we need a common method to set it up (see default.go):

```
func (this *MainController) activeContent(view string) {
    this.Layout = "basic-layout.tpl"
    this.LayoutSections = make(map[string]string)
    this.LayoutSections["Header"] = "header.tpl"
    this.LayoutSections["Sidebar"] = "sidebar.tpl"
    this.LayoutSections["Footer"] = "footer.tpl"
    this.TplNames = view + ".tpl"

    sess := this.GetSession("acme")
    if sess != nil {
        this.Data["InSession"] = 1 // for login bar in header.tpl
        m := sess.(map[string]interface{})
        this.Data["First"] = m["first"]
    }
}
```

The template parameters, such as ‘.Sidebar’, correspond to the keys used in the LayoutSections map. ‘.LayoutContent’ is a special, implicit template parameter. We’ll get to the GetSession stuff further below.

Of course, we need to create the various template files (such as footer.tpl) in the ‘views’ directory. From index.html, we can carve out the header section for header.tpl:

```
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>StarDust by Free Css Templates</title>
<meta name="keywords" content="" />
<meta name="description" content="" />
<link href="/static/css/default.css" rel="stylesheet" type="text/css" />
</head>

<body>
<!-- start header -->
<div id="header-bg">
<div id="header">
<div align="right">{{if .InSession}}
Welcome, {{.First}} [<a href="http://localhost:8080/logout">Logout</a>|<a href="http://localhost:8080/profile">Profile</a>]
{{else}}
[<a href="http://localhost:8080/login/home">Login</a>]
{{end}}
</div>
<div id="logo">
<h1><a href="#">StarDust<sup></sup></a></h1>
<h2>Designed by FreeCSSTemplates</h2>
</div>
<div id="menu">
<ul>
<li class="active"><a href="http://localhost:8080/home">home</a></li>
<li><a href="#">photos</a></li>
<li><a href="#">about</a></li>
<li><a href="#">links</a></li>
<li><a href="#">contact </a></li>
</ul>
</div>
</div>
</div>
<!-- end header -->
<!-- start page -->
<div id="page">
```

I leave it as an exercise for you to carve out the sections for sidebar.tpl and footer.tpl.

Note the {{if .InSession}} block near the top: I added those lines to the original theme to provide a “login bar” at the top of every webpage. Once you’ve logged into the application, you will see the bar as so:

![][17]

This login bar works in conjunction with the GetSession code snippet we saw in activeContent(). The logic is: if the user is logged in (i.e., there is a non-nil session), then we set the InSession parameter to a value (any value), which tells the templating engine to use the “Welcome” bar instead of “Login”. We also extract the user’s first name from the session so that we can present the friendly greeting “Welcome, Richard”.
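
The sess.(map[string]interface{}) expression in that snippet is a Go type assertion. If you’re not certain the session really holds a map, the ‘comma-ok’ form is the safer variant; a small illustration with made-up session data:

```go
package main

import "fmt"

func main() {
	// GetSession returns interface{}; here we simulate a stored session map.
	var sess interface{} = map[string]interface{}{"first": "Richard"}

	// The comma-ok form avoids a panic if the session holds something else.
	if m, ok := sess.(map[string]interface{}); ok {
		fmt.Println("Welcome,", m["first"])
	}
}
```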

The home page, represented by index.tpl, uses the following snippet from index.html:

```
<!-- start content -->
<div id="content">
<div class="post">
<h1 class="title">Welcome to StarDust</h1>
// to save space, I won't enter the remainder
// of the snippet
</div>
<!-- end content -->
```

#### Special Note

The template files for the user module reside in the ‘user’ directory within ‘views’, just to keep things tidy. So, for example, the call to activeContent() for login is:

```
this.activeContent("user/login")
```

### Controller

A controller handles requests by handing them off to the appropriate function or ‘method’. We have only one controller for our application, defined in default.go. The default method Get() for handling a GET operation is associated with our home page:

```
func (this *MainController) Get() {
    this.activeContent("index")

    // This page requires login
    sess := this.GetSession("acme")
    if sess == nil {
        this.Redirect("/user/login/home", 302)
        return
    }
    m := sess.(map[string]interface{})
    fmt.Println("username is", m["username"])
    fmt.Println("logged in at", m["timestamp"])
}
```

I’ve made login a requirement for accessing this page. Logging in means creating a session, which by default expires after 3600 seconds of inactivity. A session is typically maintained on the client side by a ‘cookie’.

In order to support sessions in the application, the ‘SessionOn’ flag must be set to true. There are two ways to do this:

1. Insert ‘beego.SessionOn = true’ in the main program, main.go.
2. Insert ‘sessionon = true’ in the configuration file, app.conf, which can be found in the ‘conf’ directory.

I chose #1. (Note, though, that I used the configuration file to set ‘EnableAdmin’ to true: ‘enableadmin = true’. EnableAdmin turns on Beego’s Supervisor Module, which tracks CPU, memory, garbage collection, threads, etc., via port 8088: <http://localhost:8088>.)

#### The Main Program

The main program is also where we initialize the database to be used with the ORM (Object-Relational Mapping) component. ORM makes it more convenient to perform database activities within our application. The main program’s init():

```
func init() {
    orm.RegisterDriver("sqlite", orm.DR_Sqlite)
    orm.RegisterDataBase("default", "sqlite3", "acme.db")
    name := "default"
    force := false
    verbose := false
    err := orm.RunSyncdb(name, force, verbose)
    if err != nil {
        fmt.Println(err)
    }
}
```

To use SQLite, we must import ‘go-sqlite3’, which can be installed with the command:

```
$ go get github.com/mattn/go-sqlite3
```

As you can see in the code snippet, the SQLite driver must be registered, and ‘acme.db’ is registered as our SQLite database.


Recall that models.go contains an init() function:

```
func init() {
    orm.RegisterModel(new(AuthUser))
}
```

The database model has to be registered so that the appropriate table can be generated. To ensure that this init() function is executed, you must import ‘models’ in the main program without actually using it there, as follows:

```
import _ "acme/models"
```

RunSyncdb() autogenerates the tables when you start the program. (This is very handy for creating the database tables without having to do it **manually** in the database command line utility.) If you set ‘force’ to true, it will drop any existing tables and recreate them.

#### The User Module

user.go contains all the methods for handling login, registration, profile, and so on. There are several third-party packages we need to import; they provide support for email, PBKDF2, and UUID. First we must get them into our project:

```
$ go get github.com/alexcesaro/mail/gomail
$ go get github.com/twinj/uuid
```

I originally got **github.com/gokyle/pbkdf2**, but this package was pulled from GitHub, so you can no longer get it. I’ve incorporated it into my source under the ‘utilities’ folder, and the import is:

```
import pk "acme/utilities/pbkdf2"
```

The ‘pk’ is a convenient alias so that I don’t have to type the rather unwieldy ‘pbkdf2’.

#### ORM

It’s pretty straightforward to use the ORM. The basic pattern is to create an ORM object, specify the ‘default’ database, and select the ORM operation you want, e.g.,

```
o := orm.NewOrm()
o.Using("default")
err := o.Insert(&user) // or
err := o.Read(&user, "Email") // or
err := o.Update(&user) // or
err := o.Delete(&user)
```

#### Flash

By the way, Beego provides a way to present notifications on your webpage through the use of ‘flash’. Basically, you create a flash object, give it your notification message, store the flash in the controller, and then retrieve the message in the template file, e.g.,

```
flash := beego.NewFlash()
flash.Error("You've goofed!") // or
flash.Notice("Well done!")
flash.Store(&this.Controller)
```

And in your template file, reference the Error flash with:

```
{{if .flash.error}}
<h3>{{.flash.error}}</h3>
{{end}}
```

#### Form Validation

Once the user posts a request (by pressing the Submit button, for example), our handler must extract and validate the form input. First, check that we have a POST operation:

```
if this.Ctx.Input.Method() == "POST" {
```

Let’s get a form element, say, email:

```
email := this.GetString("email")
```

The string “email” matches the name in the HTML form:

```
<input name="email" type="text" />
```

To validate it, we create a validation object, specify the type of validation, and then check whether there are any errors:

```
valid := validation.Validation{}
valid.Email(email, "email") // must be a proper email address
if valid.HasErrors() {
    for _, err := range valid.Errors {
```

What you do with the errors is up to you. I like to present all of them at once to the user, so as I go through the range of valid.Errors, I add them to a map of errors that will eventually be used in the template file. Hence, the full snippet:

```
if this.Ctx.Input.Method() == "POST" {
    email := this.GetString("email")
    password := this.GetString("password")
    valid := validation.Validation{}
    valid.Email(email, "email")
    valid.Required(password, "password")
    if valid.HasErrors() {
        errormap := []string{}
        for _, err := range valid.Errors {
            errormap = append(errormap, "Validation failed on "+err.Key+": "+err.Message+"\n")
        }
        this.Data["Errors"] = errormap
        return
    }
```

### The User Management Methods

We’ve looked at the major pieces of the controller. Now we get to the meat of the application, the user management methods:

* Login()
* Logout()
* Register()
* Verify()
* Profile()
* Remove()

Recall that we saw references to these functions in the router. The router associates each URL (and HTTP request) with the corresponding controller method.

#### Login()

Let’s look at the pseudocode for this method:

```
if the HTTP request is "POST" then
    Validate the form (extract the email address and password).
    Read the password hash from the database, keying on email.
    Compare the submitted password with the one on record.
    Create a session for this user.
endif
```

In order to compare passwords, we need to give pk.MatchPassword() a variable with members ‘Hash’ and ‘Salt’ that are **byte slices**. Hence:

```
var x pk.PasswordHash

x.Hash = make([]byte, 32)
x.Salt = make([]byte, 16)
// after x has the password from the database, then...

if !pk.MatchPassword(password, &x) {
    flash.Error("Bad password")
    flash.Store(&this.Controller)
    return
}
```

Creating a session is trivial, but we also want to store some useful information in it. So we make a map and store the first name, email address, and time of login:

```
m := make(map[string]interface{})
m["first"] = user.First
m["username"] = email
m["timestamp"] = time.Now()
this.SetSession("acme", m)
this.Redirect("/"+back, 302) // go to previous page after login
```

Incidentally, the name “acme” passed to SetSession is completely arbitrary; you just need to reference the same name to get the same session.

#### Logout()

This one is trivially easy. We delete the session and redirect to the home page.

#### Register()

The pseudocode:

```
if the HTTP request is "POST" then
    Validate the form.
    Create the password hash for the submitted password.
    Prepare a new user record.
    Convert the password hash to a hexadecimal string.
    Generate a UUID and insert the user into the database.
    Send a verification email.
    Flash a message on the notification page.
endif
```

To send the verification email, we use **gomail**:

```
link := "http://localhost:8080/user/verify/" + u // u is the UUID
host := "smtp.gmail.com"
port := 587
msg := gomail.NewMessage()
msg.SetAddressHeader("From", "acmecorp@gmail.com", "ACME Corporation")
msg.SetHeader("To", email)
msg.SetHeader("Subject", "Account Verification for ACME Corporation")
msg.SetBody("text/html", "To verify your account, please click on the link: <a href=\""+link+"\">"+link+"</a><br><br>Best Regards,<br>ACME Corporation")
m := gomail.NewMailer(host, "youraccount@gmail.com", "YourPassword", port)
if err := m.Send(msg); err != nil {
    return false
}
```

I chose Gmail as my email relay (you will need to open your own account). Note that Gmail ignores the “From” address (in our case, “[acmecorp@gmail.com][18]”) because, to prevent phishing, Gmail does not permit you to alter the sender address.

#### Notice()

This special router method displays a flash message on a notification page. It’s not really a user module function; it’s general enough to be used in many other places.

#### Profile()

We’ve already discussed all the pieces in this function. The pseudocode is:

```
Login required; check for a session.
Get the user record from the database, keyed on email (or username).
if the HTTP request is "POST" then
    Validate the form.
    if there is a new password then
        Validate the new password.
        Create the password hash for the new password.
        Convert the password hash to a hexadecimal string.
    endif
    Compare the submitted current password with the one on record.
    Update the user record.
    - update the username stored in the session
endif
```

#### Verify()

The verification email contains a link which, when clicked by the recipient, causes Verify() to process the UUID. Verify() attempts to read the user record, keyed on the UUID (the registration key), and if it’s found, the registration key is removed from the database.
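
Stripped of the ORM calls, the heart of Verify() is just a lookup-and-clear on the registration key. A framework-free sketch (the record type, function, and key value are stand-ins for illustration, not the tutorial’s actual code):

```go
package main

import "fmt"

// userRec is a stand-in for the AuthUser database record.
type userRec struct {
	Email  string
	RegKey string
}

// verify finds the user whose registration key matches the UUID from the
// emailed link and clears the key, marking the account as verified.
func verify(users []*userRec, uuid string) bool {
	for _, u := range users {
		if u.RegKey == uuid {
			u.RegKey = "" // single-use: a second click will not match
			return true
		}
	}
	return false
}

func main() {
	users := []*userRec{{Email: "rich@example.com", RegKey: "ABC-123"}}
	fmt.Println(verify(users, "ABC-123")) // true: first click succeeds
	fmt.Println(verify(users, "ABC-123")) // false: the link cannot be reused
}
```

Clearing the key is what makes the verification link single-use.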

#### Remove()

Remove() is pretty much like Login(), except that instead of creating a session, you delete the user record from the database.

### Exercise

I left out one user management method: what if the user has forgotten his password? We should provide a way to reset it. I leave this as an exercise for you; all the pieces you need are in this tutorial. (Hint: do it in a way similar to registration verification. Add a new Reset_key to the AuthUser table, and make sure the user’s email address exists in the database before you send the reset email!)

[Okay, so I’ll give you the [exercise solution][19]. I’m not cruel.]

### Wrapping Up

Let’s review what we’ve learned. We covered the mapping of URLs to request handlers in the router. We showed how to incorporate a CSS template design into our views. We discussed the ORM package and how it’s used to perform database operations. We examined a number of third-party utilities useful for writing our application. The end result is a component useful in many scenarios.

This is a great deal of material for a tutorial, but I believe it’s the best way to get started writing a practical application.

[For further material, look at the [sequel][20] to this article, as well as the [final edition][21].]

--------------------------------------------------------------------------------

via: https://medium.com/@richardeng/a-word-from-the-beegoist-d562ff8589d7

作者:[Richard Kenneth Eng][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://medium.com/@richardeng?source=post_header_lockup
[1]: http://tour.golang.org/
[2]: http://golang.org/
[3]: http://beego.me/
[4]: https://medium.com/@richardeng/in-the-beginning-61c7e63a3ea6
[5]: http://www.mysql.com/
[6]: http://www.sqlite.org/
[7]: https://code.google.com/p/liteide/
[8]: http://macromates.com/
[9]: http://notepad-plus-plus.org/
[10]: https://medium.com/@richardeng/back-to-the-future-9db24d6bcee1
[11]: http://en.wikipedia.org/wiki/Acme_Corporation
[12]: https://github.com/horrido/acme
[13]: http://en.wikipedia.org/wiki/Regular_expression
[14]: http://en.wikipedia.org/wiki/PBKDF2
[15]: http://en.wikipedia.org/wiki/Universally_unique_identifier
[16]: http://www.freewebtemplates.com/download/free-website-template/stardust-141989295/
[17]: https://cdn-images-1.medium.com/max/1600/1*1OpYy1ISYGUaBy0U_RJ75w.png
[18]: mailto:acmecorp@gmail.com
[19]: https://github.com/horrido/acme-exercise
[20]: https://medium.com/@richardeng/a-word-from-the-beegoist-ii-9561351698eb
[21]: https://medium.com/@richardeng/a-word-from-the-beegoist-iii-dbd6308b2594
[22]: http://golang.org/
[23]: http://beego.me/
[24]: http://revel.github.io/
[25]: http://www.web2py.com/
[26]: https://medium.com/@richardeng/the-zen-of-web2py-ede59769d084
[27]: http://www.seaside.st/
[28]: http://en.wikipedia.org/wiki/Object-relational_mapping