This commit is contained in:
jabirus 2014-10-21 22:37:42 +08:00
commit e452497182
6 changed files with 334 additions and 37 deletions

Debian 7.7 Is Out with Security Fixes
================================================================================
**The Debian project has announced that Debian 7.7 "Wheezy" is now out and available for download. This is a regular maintenance update, but it packs quite a few important fixes.**
![](http://i1-news.softpedia-static.com/images/news2/Debian-7-7-Is-Out-with-Security-Fixes-462647-2.jpg)
Debian gets regular updates for the distribution, but if you already have it installed and keep it up to date, you won't need to do anything extra. The developers have implemented a few important fixes, so it's recommended to upgrade as soon as possible.
"This update mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Security advisories were already published separately and are referenced where available."
"Please note that this update does not constitute a new version of Debian 7 but only updates some of the packages included. There is no need to throw away old wheezy CDs or DVDs but only to update via an up-to-date Debian mirror after an installation, to cause any out of date packages to be updated," noted the developers in the official [announcement][1].
The devs have upgraded the Bash package and closed some important exploits; the SSH login at boot no longer works; and a few other tweaks have been made.
Check out the complete changelog in the official announcement for more details about the release.
Download Debian 7.7 right now:
- [Debian GNU/Linux 7.7.0 (ISO) 32-bit/64-bit][2]
- [Debian GNU/Linux 6.0.10 (ISO) 32-bit/64-bit][3]
- [Debian GNU/Linux 8 Beta 2 (ISO) 32-bit/64-bit][4]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Debian-7-7-Is-Out-with-Security-Fixes-462647.shtml
Author: [Silviu Stahie][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://www.debian.org/News/2014/20141018
[2]:http://ftp.acc.umu.se/debian-cd/7.7.0/multi-arch/iso-dvd/debian-7.7.0-i386-amd64-source-DVD-1.iso
[3]:http://ftp.au.debian.org/debian/dists/oldstable/
[4]:http://cdimage.debian.org/cdimage/jessie_di_beta_2/

Microsoft loves Linux -- for Azure's sake
================================================================================
![](http://images.techhive.com/images/article/2014/10/microsoft_guthrie_azure-100525983-primary.idge.jpg)
Scott Guthrie, executive vice president, Microsoft Cloud and Enterprise group, shows how Microsoft differentiates Azure. Credit: James Niccolai/IDG News Service
### Microsoft adds CoreOS and Cloudera to its growing set of Azure services ###
Microsoft now loves Linux.
This was the message from Microsoft CEO Satya Nadella, standing in front of an image that read "Microsoft [heart symbol] Linux," during a Monday webcast to announce a number of services it had added to its Azure cloud, including the Cloudera Hadoop package and the CoreOS Linux distribution.
In addition, the company launched a marketplace portal, now in preview mode, designed to make it easier for customers to procure and manage their cloud operations.
Microsoft is also planning to release an Azure appliance, in conjunction with Dell, that will allow organizations to run hybrid clouds where they can easily move operations between Microsoft's Azure cloud and their own in-house version.
The declaration of affection for Linux indicates a growing acceptance of software that wasn't created at Microsoft, at least for the sake of making its Azure cloud platform as comprehensive as possible.
For decades, the company tied most of its new products and innovations to its Windows platform, and saw other OSes, such as Linux, as a competitive threat. Former CEO Steve Ballmer [once infamously called Linux a cancer][1].
This animosity may be evaporating as Microsoft is finding that customers want cloud services that incorporate software from other sources in addition to Microsoft. About 20 percent of the workloads run on Azure are based on Linux, Nadella admitted.
Now, the company considers its newest competitors to be the cloud services offered by Amazon and Google.
Nadella said that by early 2015, Azure will be operational in 19 regions around the world, which will provide more local coverage than either Google or Amazon.
He also noted that the company is investing more than $4.5 billion in data centers, which by Microsoft's estimation is twice as much as Amazon's investments and six times as much as Google's.
To compete, Microsoft has been adding widely used third-party software packages to Azure at a rapid clip. Nadella noted that Azure now supports all the major data integration stacks, such as those from Oracle and IBM, as well as major new entrants such as MongoDB and Hadoop.
The results seem to be paying off. Today Azure is generating about $4.48 billion in annual revenue for Microsoft, and we are "still at the early days" of cloud computing, Nadella said.
The service attracts about 10,000 new customers per week. About 2 million developers have signed on to Visual Studio Online since its launch. The service runs about 1.2 million SQL databases.
CoreOS is now actually the fifth Linux distribution that Azure offers, joining Ubuntu, CentOS, OpenSuse, and Oracle Linux (a variant of Red Hat Enterprise Linux). Customers [can also package their own Linux distributions][2] to run in Azure.
CoreOS was developed as [a lightweight Linux distribution][3] to be used primarily in cloud environments. Officially launched in December, CoreOS is already offered as a service by Google, Rackspace, DigitalOcean and others.
Cloudera is the second Hadoop distribution offered on Azure, following Hortonworks. Cloudera CEO Mike Olson joined the Microsoft executives onstage to demonstrate how easily one can use the Cloudera Hadoop software within Azure.
Using the new portal, Olson showed how to start up a 90-node instance of Cloudera with a few clicks. Such a deployment can be connected to an Excel spreadsheet, where the user can query the dataset using natural language.
Microsoft also announced a number of other services and products.
Azure will have a new type of virtual machine, which is being called the "G Family." These virtual machines can have up to 32 CPU cores, 450GB of working memory and 6.5TB of storage, making it in effect "the largest virtual machine in the cloud," said Scott Guthrie, who is the Microsoft executive vice president overseeing Azure.
This family of virtual machines is equipped to handle the much larger workloads Microsoft is anticipating its customers will want to run. Microsoft has also upped the amount of storage each virtual machine can access, to 32TB.
The new cloud platform appliance, available in November, will allow customers to run Azure services on-premise, which can provide a way to bridge their on-premise and cloud operations. One early customer, integrator General Dynamics, plans to use this technology to help its U.S. government customers migrate to the cloud.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/2836315/microsoft-loves-linux-for-azures-sake.html
Author: [Joab Jackson][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://www.computerworld.com/author/Joab-Jackson/
[1]:http://www.theregister.co.uk/2001/06/02/ballmer_linux_is_a_cancer/
[2]:http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-create-upload-vhd/
[3]:http://www.itworld.com/article/2696116/open-source-tools/coreos-linux-does-away-with-the-upgrade-cycle.html

Red Hat acquires FeedHenry to get mobile app chops
================================================================================
Red Hat wants a piece of the enterprise mobile app market, so it has acquired Irish company FeedHenry for approximately $82 million.
The growing popularity of mobile devices has put pressure on enterprise IT departments to make existing apps available from smartphones and tablets -- a trend that Red Hat is getting in on with the FeedHenry acquisition.
The mobile app segment is one of the fastest growing in the enterprise software market, and organizations are looking for better tools to build mobile applications that extend and enhance traditional enterprise applications, according to Red Hat.
"Mobile computing for the enterprise is different than Angry Birds. Enterprise mobile applications need a backend platform that enables the mobile user to access data, build backend logic, and access corporate APIs, all in a scalable, secure manner," Craig Muzilla, senior vice president for Red Hat's Application Platform Business, said in a [blog post][1].
FeedHenry provides a cloud-based platform that lets users develop and deploy applications for mobile devices that meet those demands. Developers can create native apps for Android, iOS, Windows Phone and BlackBerry as well as HTML5 apps, or a mixture of native and Web apps.
A key building block is Node.js, an increasingly popular platform based on Chrome's JavaScript runtime for building fast and scalable applications.
From Red Hat's point of view, FeedHenry is a natural fit with the company's strengths in enterprise middleware and PaaS (platform-as-a-service). It adds better mobile capabilities to the JBoss Middleware portfolio and OpenShift PaaS offerings, Red Hat said.
Red Hat plans to continue to sell and support FeedHenry's products, and will continue to honor client contracts. For the most part, it will be business as usual, according to Red Hat. The transaction is expected to close in the third quarter of its fiscal 2015.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/2685286/red-hat-acquires-feedhenry-to-get-mobile-app-chops.html
Author: [Mikael Ricknäs][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://www.computerworld.com/author/Mikael-Rickn%C3%A4s/
[1]:http://www.redhat.com/en/about/blog/its-time-go-mobile

Interview: Thomas Voß of Mir
================================================================================
**Mir was big during the space race, and it's a big part of Canonical's unification strategy. We talk to one of its chief architects at mission control.**
Not since the days of 2004, when X.org split from XFree86, have we seen such exciting developments in the normally prosaic realms of display servers. These are the bits that run behind your desktop, making sure Gnome, KDE, Xfce and the rest can talk to your graphics hardware, your screen and even your keyboard and mouse. They have a profound effect on your system's performance and capabilities. And where we once had one, we now have two more, Wayland and Mir, and both are competing to win your affections in the battle for an X replacement.
We spoke to Wayland's Daniel Stone in issue 6 of Linux Voice, so we thought it was only fair to give equal coverage to Mir, Canonical's own in-house X replacement, and a project that has so far courted controversy with some of its decisions. Which is why we headed to Frankfurt and asked its Technical Architect, Thomas Voß, for some background context…
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_1.jpg)
**Linux Voice: Let's go right back to the beginning, and look at what X was originally designed for. X solved the problems that were present 30 years ago, where people had entirely different needs, right?**
**Thomas Voß**: It was mainframes. It was very expensive mainframe computers with very cheap terminals, trying to keep the price as low as possible. And one of the first and foremost goals was: “Hey, I want to be able to distribute my UI across the network, ideally compressed and using as little data as possible”. So a lot of the decisions in X were motivated by that.
A lot of the graphics languages that X supports even today have been motivated by that decision. The X developers started off in a 2D world; everything was a 2D graphics language, the X way of drawing rectangles. And it's present today. So X is not necessarily bad in that respect; it still solves a lot of use cases, but it's grown over time.
One of the reasons is that X is a protocol, in essence. So a lot of things got added to the protocol. The problem with adding things to a protocol is that they tend to stick. To use a 2D graphics language as an example, XVideo is something that no-one really likes today. It's difficult to support, and the GPU vendors actually cry out in pain when you start talking about XVideo. It's somewhat bloated, and it's just old. It's an old, proven technology, and I'm all for that. I actually like X for a lot of things, and it was a good source of inspiration. But then you look at your current use cases and the current setup we are in, where convergence is one of the buzzwords (massively overrated, obviously), but at the heart of convergence lies the fact that you want to scale across different form factors.
**LV: And convergence is big for Canonical, isn't it?**
**Thomas**: It's big, I think, for everyone, especially over time. But convergence is a use case that was always of interest to us. So we always had this idea that we want one codebase. We don't want a situation like Apple has with OS X and iOS, which are two different codebases. We basically said “Look, whatever we want to do, we want to do it from one codebase, because it's more efficient.” We don't want to end up in the situation where we have to be maintaining two, three or four separate codebases.
That's where we were coming from when we were looking at X, and it was just too bloated. And we looked at a lot of alternatives. We started looking at how Mac OS X was doing things. We obviously didn't have access to the source code, but if you see the transition from OS 9 to OS X, it was as if they entirely switched to one graphics language. It was pre-PostScript at that time. But they chose one graphics language, and that's it. From that point on, when you choose a graphics language, things suddenly become more simple to do. Today's graphics language is EGL ES, so there was inspiration for us to say we would converge on GL and EGL. From our perspective, that's the least common denominator.
> We basically said: whatever we want to do, we want to do it from one codebase, because it's more efficient.
Obviously there are disadvantages to having only one graphics language, but the benefits outweigh the disadvantages. And I think that's a common theme in the industry. Android made the same decision to go that way. Even Wayland, to a certain degree, has been doing that. They have to support EGL and GL, simply because it's very convenient for app developers and toolkit developers: an open graphics language. That was the part that inspired us, and we wanted to have this one graphics language and support it well. And that takes a lot of craft.
So, once you can say: no more weird 2D API, no more weird phong API, and everything is mapped out to GL, you're way better off. And you can distill down the scope of the overall project to something more manageable. So it went from being impossible to possible. And then there was me, being very opinionated. I don't believe in extensibility from the beginning; traditionally in Linux everything is super extensible, which has got benefits for a certain audience.
If you think about the audience of the display server, it's one of the few places in the system where you've got three audiences. So you've got the users, who don't care, or shouldn't care, about the display server.
**LV: It's transparent to them.**
**Thomas**: Yes, it's pixels, right? That's all they care about. It should be smooth. It should be super nice to use. But the display server is not their main concern. It obviously feeds into a user experience, quite significantly, but there are a lot of other parts in the system that are important as well.
Then you've got developers who care about the display server in terms of the API. Obviously we said we want to satisfy this audience, and we want to provide a super-fast experience for users. It should be rock solid and stable. People have been making fun of us and saying “yeah, every project wants to be rock solid and stable”. Cool, but so many fail in doing that, so let's get that down and just write out what we really want to achieve.
And then you've got developers, and the moment you expose an API to them, or a protocol, you sign a contract with them, essentially. So they develop to your API (well, many app developers won't directly, because they'll be using toolkits), but at some point you've got developers who sign up to your API.
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_3.jpg)
**LV: The developers writing the toolkits, then?**
**Thomas**: We do a lot of work in that arena, but in general it's a contract that we have with normal app developers. And we said: look, we don't want the API or contract to be super extensible, trying to satisfy every need out there. We want to understand what people really want to do, and we want to commit to one API and contract. Not five different variants of the contract, but we want to say: look, this is what we support, and we, as Canonical and as the Mir maintainers, will sign up to it.
So I think that's a very good thing. You can buy into specific shells sitting on top of Mir, but you can always assume a certain base level of functionality that we will always provide in terms of window management, in terms of rendering capabilities, and so on and so forth. And funnily enough, that also helps with convergence. Because once you start thinking about the API as very important, you really start thinking about convergence. And what happens if we think about form factor and we transfer from a phone to a tablet to a desktop to a fridge?
**LV: And whatever might come!**
**Thomas**: Right, right. How do we account for future developments? And we said we don't feel comfortable making Mir super extensible, because it will just grow. Either it will just grow and grow, or you will end up with an organisation that just maintains your protocol and protocol extensions.
**LV: So that's looking at Mir in relation to X. The obvious question is comparing Mir to Wayland: so what is it that Mir does that Wayland doesn't?**
**Thomas**: This might sound picky, but we have to distinguish what Wayland really is. Wayland is a protocol specification, which is interesting, because the value proposition is somewhat difficult. You've got a protocol and you've got a reference implementation. Specifically, when we started, Weston was still a test bed and everything being developed ended up in there.
No one was buying into that; no one was saying, “Look, we're moving this to production-level quality with a bona fide protocol layer that is frozen and stable for a specific version that caters to application authors”. If you look at the Ubuntu repository today, or in Debian, there's Wayland-cursor-whatever, so they have extensions already. So that's a bit different from our approach to Mir, from my perspective at least.
There was this protocol that the Wayland developers finished, and back then, before we did Mir and I looked into all of this, I wrote a Wayland compositor in Go, just to get to know things.
**LV: As you do!**
**Thomas**: And I said: you know, I don't think a protocol is a good way of approaching this, because versioning a protocol in a packaging scenario is super difficult. But versioning a C API, or any sort of API that has a binary stability contract, is way easier, and we are way more experienced at that. So, in that respect, we are different in that we are saying the protocol is an implementation detail, at least up to a certain point.
I'm pretty sure that for version 1.0, which we will call a golden release, we will open up the protocol for communication purposes. Under the covers it's Google protocol buffers and sockets. So we'll say: this is the API, work against that, and we're committed to it.
That's one thing, and then we said: OK, there's Weston, but we cannot use Weston because it's not working on Android, the driver model is not well defined, and there's so much work that we would have to do to actually implement a Wayland compositor. And then we are in a situation where we would have to cut out a set of functionality from the Wayland protocol and commit to that, no matter what happens, and ultimately that would be a fork, over time, right?
**LV: It's a difficult concept for many end users, who just want to see something working.**
**Thomas**: Right, and even from a developer's perspective (and let's jump to the political part), I find it somewhat difficult to have a party owning a protocol definition and another party building the reference implementations. Now, Gnome and KDE do two different Wayland compositors. I don't see the benefit in that, to be quite frank, so the value proposition is difficult to my mind.
The driver model in Mir and Wayland is ultimately not that different: it's GL/EGL based. That is kind of the denominator that you will find in both, which is actually a good thing, because if you look at the contract with application developers and toolkit developers, most of them don't want Mir or Wayland. They talk EGL and GL, and at that point, it's not that much of a problem to support both.
> If there had been a full reference implementation of Wayland, our decision might have been different.
So we did this work for porting the Chromium browser to Mir. We actually took the Chromium Wayland back-end, factored out all the common pieces to EGL and GL ES, and split it up into Wayland and Mir.
And I think from a user's or application developer's perspective, the difference is not there. I think, in retrospect, if there had been something like a full reference implementation of Wayland, where a company had signed up to provide something that is working, and committed to a certain protocol version, our decision might have been different. But there just wasn't. For five years it was out there, Wayland, Wayland, Wayland, and there was nothing that we could build upon.
**LV: The main experience we've had is with RebeccaBlackOS, which has Weston and Wayland, because, like you say, there's not that much out there running it.**
**Thomas**: Right. I find Wayland impressive, obviously, but I think Mir will be significantly more relevant than Wayland in two years' time. We just keep on bootstrapping everything, and we've got things working across multiple platforms. Are there issues, and are there open questions to solve? Most likely. We never said we would come up with the perfect solution in version 1. That was not our goal. I don't think software should be built that way. It should just be iterated.
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_2.jpg)
**LV: When was Mir originally planned for? Which Ubuntu release? Because it has been pushed back a couple of times.**
**Thomas**: Well, we originally planned to have it by 14.04. That was the kind of stretch goal, because it highly depends on the availability of proprietary graphics drivers. You can't ship an LTS [Long Term Support] release of Ubuntu on a new display server without supporting the hardware of the big guys.
**LV: We thought that would be quite ambitious anyway: a Long Term Support release with a whole new display server!**
**Thomas**: Yes, it was ambitious, but for a reason. If you don't set a stretch goal, and probably fail in reaching it, and then re-evaluate how you move forward, it's difficult to drive a project. So if you just keep it evolving and evolving and evolving, and you don't have a checkpoint at some point…
**LV: That's like a lot of open source projects. Inkscape is still on 0.48 or something, and it works, it's reliable, but they never get to 1.0. Because they always say: “Oh, let's add this feature, and that feature”, and the rest of us are left thinking: just release 1.0 already!**
**Thomas**: And I wouldn't actually tie it to a version number. To me, that is secondary. To me, the question is whether we call this ready for broad public consumption on all of the hardware versions we want to support.
In Canonical, as a company, we have OEM contracts and we are enabling Ubuntu on a host of devices, and laptops and whatever, so we have to deliver on those contracts. And the question is, can we do that? No. Well, you never like a no.
> The question is whether we call this ready for broad public consumption on the hardware we want to support.
Usually, when you encounter a problem and you tackle it, and you start thinking how to solve the problem, that's more beneficial than never hearing a no. That's kind of what we were aiming for. Ubuntu 14.04 was a stretch goal (everyone was aware of that) and we didn't reach it. Fine, cool. Let's go on.
So how do we stage ourselves for the next cycle, until an LTS? Now we have this initiative where we have a daily testable image with Unity 8 and Mir. It's not super usable, because it's just essentially the tethered UI that you are seeing there, but still it's something that we didn't have a year ago. And for me, that's a huge gain.
And ultimately, before we can ship something, before any new display server can ship in an LTS release, you need to have buy-in from the GPU vendors. That's what you need.
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/interview-thomas-vos-of-mir/
Author: [Mike Saunders][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://www.linuxvoice.com/author/mike/

Configuring layer-two peer-to-peer VPN using n2n
================================================================================
n2n is a layer-two peer-to-peer virtual private network (VPN) which allows users to exploit features typical of P2P applications at the network level instead of the application level. This means that users can gain native IP visibility (e.g. two PCs belonging to the same n2n network can ping each other) and be reachable with the same network IP address regardless of the network where they currently belong. In a nutshell, just as OpenVPN moved SSL from the application level (e.g. as used to implement the HTTPS protocol) to the network level, n2n moves P2P from the application level to the network level.
### n2n main features ###
- n2n is an encrypted layer-two private network based on a P2P protocol.
- Encryption is performed on edge nodes using open protocols with user-defined encryption keys: you control your security without delegating it to companies, as happens with Skype or Hamachi.
- Each n2n user can simultaneously belong to multiple networks (a.k.a. communities).
- n2n can cross NATs and firewalls in the reverse traffic direction (i.e. from outside to inside), so that n2n nodes are reachable even if running on a private network. Firewalls are no longer an obstacle to direct communication at the IP level.
- n2n networks are not meant to be self-contained, but it is possible to route traffic across n2n and non-n2n networks.
### The n2n architecture is based on two components ###
**Supernode**: used by edge nodes at startup, or to reach nodes behind symmetric firewalls. This application is basically a directory register and a packet router for those nodes that cannot talk directly.
**Edge nodes**: applications installed on user PCs that allow the n2n network to be built. In practice, each edge node creates a tun/tap device that is then the entry point to the n2n network.
### Install n2n on Ubuntu ###
Open the terminal and run the following commands:
    $ sudo apt-get install subversion build-essential libssl-dev
    $ svn co https://svn.ntop.org/svn/ntop/trunk/n2n
    $ cd n2n/n2n_v2
    $ make
    $ sudo make install
### Configure a P2P VPN with n2n ###
First we need to configure one supernode and any number of edge nodes.
Decide where to place your supernode. Suppose you put it on host a.b.c.d at port xyw.
Decide what encryption password you want to use to secure your data. Suppose you use the password encryptme.
Decide the network name you want to use. Suppose you call it mynetwork. Note that you can use your supernode/edge nodes to handle multiple networks, not just one.
Decide what IP addresses you plan to use on your edge nodes. Suppose you use addresses from the network 10.1.2.0/24.
Start your applications:
### Configure the supernode ###
    supernode -l xyw
### Configure Edge Nodes ###
On each edge node, use the following command to connect to a P2P VPN:
    sudo edge -a 10.1.2.1 -c mynetwork -k encryptme -l a.b.c.d:xyw
    sudo edge -a 10.1.2.2 -c mynetwork -k encryptme -l a.b.c.d:xyw
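The two edge invocations differ only in the address they assign, so it can be convenient to collect the values in a small wrapper. This is only a sketch: `a.b.c.d`, `xyw` and the other values are the article's placeholders, to be replaced with your own before use.

```shell
#!/bin/sh
# Sketch of an edge-node wrapper; every value below is a placeholder
# taken from the article, not a real host, key, or port.
SUPERNODE_HOST="a.b.c.d"
SUPERNODE_PORT="xyw"
COMMUNITY="mynetwork"
KEY="encryptme"
EDGE_IP="10.1.2.1"          # use 10.1.2.2 on the second edge node

# Assemble the command first so it can be reviewed before running it
# with sudo.
CMD="edge -a $EDGE_IP -c $COMMUNITY -k $KEY -l $SUPERNODE_HOST:$SUPERNODE_PORT"
echo "$CMD"   # prints: edge -a 10.1.2.1 -c mynetwork -k encryptme -l a.b.c.d:xyw
```

Once the printed command looks right, run it with sudo on each edge node.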
### Now test your n2n network ###
    edge node1> ping 10.1.2.2
    edge node2> ping 10.1.2.1
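The ping test above can be wrapped in a small script that reports whether the tunnel is up; a minimal sketch, assuming the article's example address 10.1.2.2 for the peer:

```shell
#!/bin/sh
# Minimal reachability check for the peer edge node; 10.1.2.2 is the
# article's example address. ping exits non-zero when the host is
# unreachable, which we map to an "up"/"down" status string.
check_peer() {
    if ping -c 1 -W 2 "$1" > /dev/null 2>&1; then
        echo "up"
    else
        echo "down"
    fi
}

STATUS=$(check_peer 10.1.2.2)
echo "peer 10.1.2.2 is $STATUS"
```

Run the same check from each edge node against the other's address to confirm the VPN works in both directions.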
### Windows n2n VPN client (N2N Edge GUI) ###
You can download N2N Edge GUI from [here][1].
N2N Edge GUI provides a basic installer and GUI configuration screen for the peer-to-peer 'n2n' VPN solution.
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/client.jpg)
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/configuring-layer-two-peer-to-peer-vpn-using-n2n.html
Author: [ruchi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://www.ubuntugeek.com/author/ubuntufix
[1]:http://sourceforge.net/projects/n2nedgegui/

Debian 7.7 Is Out with Security Fixes
================================================================================
**The Debian project has announced that Debian 7.7 "Wheezy" is released and available for download. This is a regular maintenance update, but it packs quite a few important fixes.**
![](http://i1-news.softpedia-static.com/images/news2/Debian-7-7-Is-Out-with-Security-Fixes-462647-2.jpg)
Debian gets regular updates, but if you already have it installed and keep it up to date, you don't need to do anything extra. The developers have made some important fixes, so it is recommended to upgrade as soon as possible.
"This update mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Security advisories were already published separately and are referenced where available."
"Please note that this update does not constitute a new version of Debian 7 but only updates some of the packages included. There is no need to throw away old wheezy CDs or DVDs but only to update via an up-to-date Debian mirror after an installation, to cause any out of date packages to be updated," noted the developers in the official [announcement][1].
The developers have upgraded the Bash package to fix some important vulnerabilities; the SSH login at boot no longer works; and a few other tweaks have been made.
For more details about the release, check out the complete changelog in the official announcement.
Download Debian 7.7 now:
- [Debian GNU/Linux 7.7.0 (ISO) 32-bit/64-bit][2]
- [Debian GNU/Linux 6.0.10 (ISO) 32-bit/64-bit][3]
- [Debian GNU/Linux 8 Beta 2 (ISO) 32-bit/64-bit][4]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Debian-7-7-Is-Out-with-Security-Fixes-462647.shtml
Author: [Silviu Stahie][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://www.debian.org/News/2014/20141018
[2]:http://ftp.acc.umu.se/debian-cd/7.7.0/multi-arch/iso-dvd/debian-7.7.0-i386-amd64-source-DVD-1.iso
[3]:http://ftp.au.debian.org/debian/dists/oldstable/
[4]:http://cdimage.debian.org/cdimage/jessie_di_beta_2/