Topic selection 20131012

wxy 2013-10-12 20:27:49 +08:00
parent b4ada187fd
commit 7af1142b7e
2 changed files with 91 additions and 0 deletions

On Security Backdoors
=====================
I [wrote][1] Monday about revelations that the NSA might have been inserting backdoors into security standards. Today I want to talk through two cases where the NSA has been accused of backdooring standards, and use these cases to differentiate between two types of backdoors.
The first case concerns a NIST standard, [SP 800-90A][2], which specifies a type of pseudorandom generator (PRG). A PRG is a computation that takes a small number of random/unpredictable bits and “stretches” them to get a larger number of unpredictable bits. PRGs are essential to cryptography, serving as the source for most of the secret keys that are used. If you can “break” somebody’s PRG, you can predict which secret keys they will use, thereby allowing you to defeat their crypto.
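To make the “stretching” idea concrete, here is a minimal sketch of a hash-based PRG in Python. It illustrates only the concept, not the construction specified in SP 800-90A:
```python
import hashlib

def prg(seed: bytes, nbytes: int) -> bytes:
    """Stretch a short random seed into nbytes of pseudorandom output
    by hashing the seed with a running counter (a sketch, not SP 800-90A)."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

# The same seed always yields the same stream; a secret random seed
# makes the stream unpredictable to anyone who doesn't know it.
assert prg(b"\x01\x02", 64) == prg(b"\x01\x02", 64)
```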
The standard offered a choice of several core algorithms. One of them uses a mathematical construct called an Elliptic Curve (EC), which I won’t try to explain in this space. This algorithm uses two “public parameters” called P and Q, which are points on the EC. P and Q are public, with specific values written into the standard.
Cryptographers believed that if you picked P and Q randomly, the PRG would be secure. But in 2006 two private-sector cryptographers figured out that there is a way to pick P and Q so they have a special relationship to each other. An “outsider” wouldn’t be able to tell that the special relationship existed, but if you knew the “secret key” that described the relationship between P and Q, then you could easily defeat the security of the PRG.
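Here is a toy sketch of how such a trapdoor can work. It substitutes exponentiation modulo a prime for elliptic-curve point multiplication, and every parameter value in it is invented for illustration; it is not the algorithm from the standard:
```python
# Toy trapdoored PRG: arithmetic mod a prime stands in for the elliptic
# curve, and all parameter values are invented for illustration.
p = 2**127 - 1          # a Mersenne prime (our stand-in for the curve)
P = 3                   # public parameter, playing the role of the point P
d = 5                   # the insider's secret linking P and Q
Q = pow(P, d, p)        # public parameter, playing the role of the point Q

def step(state):
    """One round of the generator: emit an output, advance the state."""
    output = pow(Q, state, p)       # visible to everyone
    new_state = pow(P, state, p)    # stays inside the generator
    return output, new_state

state = 123456789                   # secret internal state
output, state = step(state)

# An outsider sees only `output`. An insider who knows d can invert
# the P-Q relationship (Python 3.8+ for the modular inverse):
d_inv = pow(d, -1, p - 1)           # valid because gcd(d, p - 1) == 1
recovered = pow(output, d_inv, p)   # equals the new internal state
assert recovered == state

# Knowing the state, the insider predicts every future output:
predicted = pow(Q, recovered, p)
actual, state = step(state)
assert predicted == actual
```
The published attack on the real EC construction follows the same outline: a single output, combined with the secret relationship between P and Q, reveals the generator’s internal state.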
At this point, several facts suddenly become interesting. First, NSA people seemed very intent on including this specific algorithm in the standard despite its slow performance. Second, NSA was suggesting specific values of P and Q. Third, NSA was not explaining how those particular P and Q values had been chosen. Interesting, no?
All of this could have been addressed by having some kind of public procedure by which new, random P and Q values would be chosen. But that didn’t happen.
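The usual form such a public procedure takes is a “nothing up my sleeve” derivation: publish a phrase, hash it, and take the parameters from the hash, so anyone can re-run the derivation and confirm that no special values were smuggled in. A minimal sketch of the idea (the labels below are invented, and a real standard would additionally map the hash output to a valid curve point):
```python
import hashlib

def public_parameter(label: str) -> int:
    """Derive a parameter from a published label, so that anyone can
    re-run the derivation and verify nothing was hidden in the choice."""
    return int.from_bytes(hashlib.sha256(label.encode()).digest(), "big")

# Hypothetical labels; the point is that they are fixed and public.
P_seed = public_parameter("SP 800-90A EC parameter P, take 1")
Q_seed = public_parameter("SP 800-90A EC parameter Q, take 1")
```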
Yesterday NIST [re-opened][3] SP 800-90A for public comment.
The second example was explained by John Gilmore. John described his observations from the IPSEC standards process. IPSEC was meant as a foundational security technology, providing crypto for confidentiality and integrity of individual IP packets on the Internet. A successful and widely deployed IPSEC would have been a game-changer for Internet security, putting lots of traffic under cryptographic protection.
John says that NSA people and their allies worked consistently to make the standard less secure, more complicated, less efficient, and harder to implement securely. He didn’t see a smoking-gun attempt to introduce a backdoor, but what he describes is a consistent effort to undermine the effectiveness of the standard. And indeed, IPSEC has not had anything like the impact one might have expected.
These examples show us two different kinds of backdoors. In the first PRG case, the NSA was accused of trying to create a backdoor that only it could use, because only it knew the secret key relating P to Q. In the second IPSEC case, the accusation was that the NSA was weakening users’ security against all attackers—the NSA would have easier access to your data, but so would all sorts of other people.
To be sure, even a private backdoor might not stay private. If there is a magic secret key that lets the NSA spy on everyone, that key might be misused or it might leak. So the line between an NSA-only backdoor and an open backdoor is always a bit blurry.
Still, it seems to me that the two types of backdoors call for different policy debates. It’s one thing to secretly give the NSA easier access to everyone’s data. It’s another thing to give everyone easier access. The latter is worse.
We also need to look at how a backdoor might be created. In the PRG example, the backdoor would have required the NSA to slip a subtle cryptographic weakness past the crypto experts working on a standard. In the IPSEC example, creating the weakness would seem to require coordinated public activity in the standards body over time, and the individual steps would surely be noticed even if nobody spotted a pattern.
But one has to wonder whether these examples really were NSA attempts to undermine security, or whether they’re just false alarms. We can’t be sure. As long as the NSA has a license to undermine security standards, we’ll have to be suspicious of any standard in which they participate.
---
via: https://freedom-to-tinker.com/blog/felten/on-security-backdoors/
Originally translated by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](http://linux.cn/)
Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)
[1]:https://freedom-to-tinker.com/blog/felten/nsa-apparently-undermining-standards-security-confidence/
[2]:http://csrc.nist.gov/publications/drafts/800-90/draft_sp800_90a_rev1.pdf
[3]:http://www.nist.gov/director/cybersecuritystatement-091013.cfm

The Debian OpenSSL Bug: Backdoor or Security Accident?
======================================================
On Monday, Ed wrote about [Software Transparency][1], the idea that software is more resistant to intentional backdoors (and unintentional security vulnerabilities) if the process used to create it is transparent. Elements of software transparency include the availability of source code and the ability to read or contribute to a project’s issue tracker or internal developer discussion. He mentioned a case that I want to discuss in detail: in 2008, the Debian Project (a popular Linux distribution used for many web servers) [announced][2] that the pseudorandom number generator in Debian’s version of [OpenSSL][3] was broken and insecure.
First, some background: A pseudorandom number generator (PRNG) is a program F that, given a short random seed s, gives you a long stream of bits F(s) which appear to be random. If you and I put in the same seed s, we’ll get the same stream of bits. But if I choose s at random and don’t tell you what it is, you can’t predict F(s) at all—as far as you’re concerned, F(s) might as well be random. The OpenSSL PRNG tries to grab some unpredictable information (“entropy”) from the system, such as the current process ID, the contents of some memory that are likely to be different (for example, uninitialized memory which is or might be controlled by other processes) and so on, and turns these into the seed s. Then it gives back the random stream F(s).
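A simplified sketch of that seeding step, in the spirit of what the article describes rather than OpenSSL’s actual code:
```python
import hashlib
import os
import time

def gather_seed() -> bytes:
    """Mix several hard-to-predict system values into a PRNG seed
    (a simplified sketch of the idea, not OpenSSL's real code)."""
    pool = hashlib.sha256()
    pool.update(os.getpid().to_bytes(4, "big"))      # current process ID
    pool.update(time.time_ns().to_bytes(8, "big"))   # high-resolution clock
    pool.update(os.urandom(32))                      # the OS entropy pool
    return pool.digest()                             # this becomes the seed s
```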
In 2006, in order to [fix warnings][4] spit out by [a tool][5] that can help find memory access bugs in software, one of the Debian maintainers [decided to comment][6] [out two lines of code][7] in the OpenSSL PRNG. It turns out that these lines were important: they were responsible for grabbing almost all of the unpredictable entropy that became the seed for the OpenSSL PRNG. Without them, the PRNG only had 32,767 choices for s, so there were only that many possible choices for F(s).
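To see why 32,767 possible seeds is fatal, note that an attacker can simply enumerate every one of them in advance. A sketch, using a stand-in derivation rather than OpenSSL’s actual key generation:
```python
import hashlib

# With only the process ID (1..32767) feeding the seed, there are just
# 32,767 possible output streams, so all of them can be precomputed.
candidates = {}
for pid in range(1, 32768):
    seed = pid.to_bytes(2, "big")
    stream = hashlib.sha256(seed).digest()   # stand-in for F(s)
    candidates[stream] = pid

# Any key generated on an affected system must match one of these
# 32,767 candidates, so "secret" keys can be found by table lookup.
print(len(candidates))   # 32767
```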
And so programs that relied on the OpenSSL random number generator weren’t seeing nearly as much randomness as they thought they were. One such program generates the cryptographic keys used for SSL (secure web browsing) and SSH (secure remote login). Critically, these keys have to be random: if you can guess what my secret key is, you can break into anything I protect using that key. That means you have the ability to read encrypted traffic, [log into remote servers][8], or [make forged messages appear authentic][9]. Because the vulnerability had first been introduced in late 2006, the bug also [made its way into Ubuntu][10] (another popular Linux distribution widely used for web servers). All told, the bug affected thousands of servers and [persisted for a long time][11] because patching the affected servers was not enough to fix the problem—you also had to replace any predictable weak keys you had made while the vulnerability was present.
As an aside, the problem of finding entropy to feed pseudorandom number generators is [famously][12] [hard][13]. Indeed, it’s still a [big challenge][14] to get right even today. Errors in randomness are hard to detect, because if you just eyeball the output, it will look random-ish and will change each time you run the program. Weak randomness can be very hard to spot, but it can render the cryptography in a (seemingly) secure system useless. Still, the Debian bug was obvious enough that it inspired a [lot of ridicule][15] [in the security community][16] once it was discovered.
So was this problem a backdoor, purposefully introduced? It seems unlikely. The maintainer who made the change, [Kurt Roeckx][17], was later [made Secretary of the Debian Project][18], suggesting that he’s a real and trustworthy person and probably not a fake identity made up by the NSA to insert a vulnerability. The Debian Project is famous for requiring significant effort to reach the inner circle. And in this case, the mistake itself was not completely damning—a [cascade of failures][19] made the vulnerability possible and contributed to its severity.
But the vulnerability did happen in a transparent setting. Everything that was done was done in public. And yet the vulnerability still got introduced and wasn’t noticed for a long time. That’s in part because all the transparency made for a lot of noise, so the people to whom the vulnerability would have been obvious weren’t paying attention. But it’s also because the vulnerability was subtle and the system wasn’t designed to make the impact of the change obvious to a casual observer.
Does that mean that software transparency doesn’t help? I don’t think so—lots of people agree that transparent software is more secure than non-transparent software. But that doesn’t mean failures can’t still happen or that we should be less vigilant just because lots of other people can see what’s going on.
At the very least, transparency lets us look back, years later, and figure out what caused the bug—in this case, engineering error and not deliberate sabotage.
---
via: https://freedom-to-tinker.com/blog/kroll/software-transparency-debian-openssl-bug/
Originally translated by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](http://linux.cn/)
Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)
[1]:https://freedom-to-tinker.com/blog/felten/software-transparency/
[2]:http://www.debian.org/security/2008/dsa-1571
[3]:https://www.openssl.org/
[4]:http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=363516
[5]:http://valgrind.org/
[6]:http://marc.info/?l=openssl-dev&m=114651085826293&w=2
[7]:http://svn.debian.org/viewsvn/pkg-openssl/openssl/trunk/rand/md_rand.c?rev=141&view=diff&r1=141&r2=140&p1=openssl/trunk/rand/md_rand.c&p2=/openssl/trunk/rand/md_rand.c
[8]:http://www.exploit-db.com/exploits/5622/
[9]:http://plog.sesse.net/blog/tech/2008-05-14-17-21_some_maths.html
[10]:http://www.ubuntu.com/usn/usn-612-1/
[11]:http://cseweb.ucsd.edu/~hovav/dist/debiankey.pdf
[12]:http://xkcd.com/221/
[13]:http://dilbert.com/strips/comic/2001-10-25/
[14]:https://factorable.net/weakkeys12.extended.pdf
[15]:http://www.xkcd.com/424/
[16]:http://www.links.org/?p=327
[17]:http://www.roeckx.be/journal/
[18]:http://lists.debian.org/debian-devel-announce/2009/02/msg00009.html
[19]:http://research.swtch.com/openssl