Connection: Wiretap Laws

I’m experimenting with a new kind of post, where I simply make a connection between two or more ideas, usually with little or no commentary. Here’s the first one:

Ed Felten, yesterday: CALEA II: Risks of wiretap modifications to endpoints

Today I joined a group of twenty computer scientists in issuing a report criticizing an FBI plan to require makers of secure communication tools to redesign their systems to make wiretapping easy. We argue that the plan would endanger the security of U.S. users and the competitiveness of U.S. companies, without making it much harder for criminals to evade wiretaps.

Me, in 2010: Internet Wiretap Bill Misses the Mark

So if this bill becomes law, it will accomplish precisely the opposite of its stated purpose. The government will still be powerless to eavesdrop on criminal and terrorist communications. Meanwhile, the good, honest citizen will be rendered powerless as well.

Time Limits on Browser Plugins?

When Steve Gibson talked on Security Now! episode 398 about how few users’ Java plugins are actually up to date, this question hit me:

Should browser plugins have built-in expiration dates?

The problem with having all of these old Java versions running around is that attacks always get better. How much more sophisticated are the attacks of today than the attacks of just one year ago? Why, then, should anyone think a free browser plugin released today—even if it’s secure by today’s standards—will stand up to the attacks of one year from now?

Fix the ecosystem…

Of course, vendors need to continue to do their best to write secure code in the first place, and to release timely updates to fix the errors that do make it into the wild. We also need to work on the ecosystem to make it easy for users to stay current—figure out what Apple is doing right, what Android is doing wrong, and how to apply those lessons to the browser plugin market. (I’m not just picking on Java—I’m thinking of Adobe Flash and Reader, too.) I’m not sure how to get end users to care about keeping these plugins up to date, but the problem deserves attention. The major plugins now auto-update, which helps, but it’s not foolproof (I’m envisioning malware that intercepts update checks to keep vulnerable plugins in the wild longer).

…and build in a time limit

What I’m proposing is that vendors build in an expiration date as a safety net, so that if a user tries to run a 12-month-old plugin (which won’t happen if auto-update is working and the vendor is still maintaining the product), the plugin displays an expiration message and instructions for getting the current version. Obviously this doesn’t solve our current problems, but it should be part of a strategy to make sure we’re not still in the same boat a few years from now.
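To make that concrete, here’s a minimal sketch of what such a safety net might look like (Python for brevity; the build date and exact behavior are my assumptions, and a real plugin would implement this in its own host language):

```python
from datetime import date, timedelta

# Hypothetical values: a vendor would stamp these in at build time.
BUILD_DATE = date(2013, 5, 20)
EXPIRY = BUILD_DATE + timedelta(days=365)  # the 12-month safety net

def may_run(today=None):
    """Refuse to load once the plugin is more than a year old."""
    today = today or date.today()
    if today > EXPIRY:
        print(f"This plugin expired on {EXPIRY}.")
        print("Please install the current version from the vendor's website.")
        return False
    return True
```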

Heads-Up for LinkedIn Users

If you have a LinkedIn account, stop what you’re doing and change your LinkedIn password immediately. I’m not kidding–just do it. Once you’re logged in, click on your name near the upper-right corner, click Settings from the menu, click the Account tab near the lower-left corner, and click Change password.

Now that you’ve changed your LinkedIn password, think about all of the other web sites where you have accounts–did you use the same (now-probably-hacked) password on any of those? If so, go change those, too (and don’t use the same password this time). If you use the same credentials across multiple sites, all an attacker needs is to crack one of them, and then (in principle) they own any other account with the same username and password.

Done? Great! So here’s what’s going on:

The social networking website LinkedIn is investigating claims that more than 6 million passwords were stolen and uploaded to a Russian-language web forum today.

That was yesterday, June 6. To be clear, it was actually cryptographic hashes of the passwords that were stolen–not the plain-text passwords themselves–but LinkedIn was using an insecure technique to generate the hashes (unsalted SHA-1). I won’t write here about why that’s so easy to crack–Steve Gibson had a good discussion about this in his Security Now! podcast, episode 356 (the transcript is not up on that page yet as of this writing, but he should have it posted soon). For some good guidance on choosing passwords that are resistant to the kind of attacks (“rainbow tables”) that are effective against unsalted hashing schemes, see Steve’s Password Haystacks page.
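To see why salting matters, here is a minimal Python sketch (my illustration, not LinkedIn’s actual code). With no salt, identical passwords always produce identical hashes, so attackers can precompute hashes for millions of common passwords once and then reverse every matching leaked hash at a glance:

```python
import hashlib
import os

password = b"monkey123"  # hypothetical user password

# Unsalted SHA-1 (what LinkedIn reportedly used): every user with this
# password gets the same hash, so one precomputed table cracks them all.
unsalted = hashlib.sha1(password).hexdigest()

# Salted: a random per-user salt, stored alongside the hash, makes each
# user's hash unique, so precomputed tables are useless.
salt = os.urandom(16)
salted = hashlib.sha1(salt + password).hexdigest()

print(unsalted)  # identical for everyone who picked this password
print(salted)    # different for every user, even with the same password
```

Even salted SHA-1 is weak by modern standards (a deliberately slow function like bcrypt or PBKDF2 is the better choice), but salting alone is enough to defeat precomputed rainbow tables.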

Internet Wiretap Bill Misses the Mark

Charlie Savage reported Monday in the New York Times that the Obama administration is seeking legislation that would require “back-doors” in all encryption products and services in the US. Of course, they cite terrorism as a primary motivation.

How best to balance the needs of law enforcement (and of government in general) with the privacy and liberty of the citizen is an age-old question. While I sympathize with the needs of law enforcement, the Internet wiretap plan simply will not accomplish its stated purpose.

When privacy advocates complain about video surveillance or airport screenings, the counter-argument has always been “If you’re not doing anything wrong, you don’t have anything to worry about.” (That argument assumes that law enforcement officers will use those systems only for their intended purposes, but we’ll leave that aside for now.) The point is that when you’re securing a place—a bank or airport, for example—the security measures apply equally to everyone who goes to that place.

But it’s different when you’re dealing with things. If you mandate that a certain type of thing T must have property P, and it’s illegal to make or possess a T without P, then law-abiding manufacturers will make their Ts with P, and law-abiding citizens will use Ts with P. But what’s to stop a criminal or terrorist from importing their Ts from a country without the stupid P-law? This turns the tables to the bad guys’ advantage in two important ways.

First, the world already has robust, unbreakable, back-door-free encryption technology. The criminals will just use that. As with gun control legislation or nuclear non-proliferation treaties, if you outlaw strong encryption, only outlaws will have strong encryption.

Second, if a back door exists, the bad guys will figure out how to exploit it. History proves that. So not only will the bad guys have strong encryption that even the government can’t break, but the good guys will be forced to use encryption that the bad guys can break. It will be that much easier for them to steal money and identities. The law-abiding citizen and the government alike will be powerless to stop them.

So if this bill becomes law, it will accomplish precisely the opposite of its stated purpose. The government will still be powerless to eavesdrop on criminal and terrorist communications. Meanwhile, the good, honest citizen will be rendered powerless as well. That’s a situation truly to be terrified of.

When Low Tech Is the Best Tech

We’ve been thinking about developing a quick application to replace a paper HR process—it seemed like it should be a simple state machine with four possible states: Submitted, Accepted, Rejected, and Completed. But then we realized we would also need email notifications and a coherent security model.

[Diagram: personnel change request process]
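The state-machine core really would have been trivial; here’s a hypothetical Python sketch (the legal transitions are my guesses, since which ones the process allows isn’t spelled out above):

```python
from enum import Enum

class State(Enum):
    SUBMITTED = "Submitted"
    ACCEPTED = "Accepted"
    REJECTED = "Rejected"
    COMPLETED = "Completed"

# Assumed transitions: HR accepts or rejects a submitted request,
# and an accepted request is eventually completed.
TRANSITIONS = {
    State.SUBMITTED: {State.ACCEPTED, State.REJECTED},
    State.ACCEPTED: {State.COMPLETED},
    State.REJECTED: set(),
    State.COMPLETED: set(),
}

def advance(current, new):
    """Move a request to a new state, rejecting illegal transitions."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {new.value}")
    return new
```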

These requirements—workflow, notification, and security—are handled reasonably well in the old paper model. Not perfectly, but well enough. These mechanisms are ingrained in the way people do their work, but implementing them in a computer application would require us to build them from scratch.

It quickly became more complicated than it was worth, a good reminder that sometimes low tech is the best tech.

The Enterprise Information Protection Paradigm

It used to be that network infrastructure was one of an organization’s most valuable assets and security was geared toward protecting the infrastructure; but costs are falling, and the network has become a commodity.

Meanwhile, the volume and value of information stored electronically are growing rapidly. For this reason, Dan Geer advocates a paradigm shift in information security, which he calls the Enterprise Information Protection Paradigm.

We suggest that this paradigm be called enterprise information protection (EIP). We say “enterprise,” in that, for most firms, data is literally who they are; “information,” …because this data has future value; and “protection” because protecting value is the first responsibility of boards and officers.

In practical terms, EIP means focusing our security efforts at the point of use—every point of use—“where data-at-rest becomes data-in-motion.” It means insisting on secure operating systems, applications, and procedures. And it means monitoring the use of information:

[EIP] is, to the firm, what a conscience is to an individual—that second brain that watches the first with the power to detect bad choices and to act on what it sees. We do not expect perfection in applying EIP any more than we expect perfection of the conscience, but … the goal is worth it.

Focusing security resources at the point of use is not a new concept—Bruce Schneier has advocated that as a technical security tactic for years. And it’s certainly not new to say information is an organization’s most valuable asset and that responsibility for information security goes all the way up to senior management. What I find compelling about this article is that it does a decent job of packaging these concepts together into a single, coherent paradigm.

Dan’s article is a bit long, and you have to slog through clichés like applying the theory of evolution to information security (do they have editors anymore?), but it’s worth a look.

The Spam That Got Through

All of my company’s inbound and outbound email goes through a security service that scans for spam and viruses. From time to time I get an email from someone saying that they got a message that they consider spam. I see that as a good sign. Here’s why:

Spam filters are machines, with some human input to fine-tune the filter criteria, doing the best job they can. The algorithms are ever-improving, but they’re still just computer programs.

Also, spam filters read mail, not minds—some messages look enough like legitimate email that the filter lets them through. If I, a human, were reading our inbound email feed, I probably would allow many of the “spam” messages, too. It’s not possible for man or machine to know the mind of every recipient, or how they would classify every message they receive.

And the humans that fine-tune the filter criteria tend to err on the side of caution: a false positive—deleting a sales lead, a message from an attorney, etc.—is far more costly an error than a false negative—the spam that got through.
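One way to picture that trade-off (a toy model, not how our filtering service actually works): the filter assigns each message a spam score, and erring on the side of caution just means setting the action thresholds high.

```python
DELETE_THRESHOLD = 0.99      # delete only when the filter is very sure
QUARANTINE_THRESHOLD = 0.90  # probably spam: hold for human review

def route(spam_score):
    """Route a message by the filter's estimated probability it is spam."""
    if spam_score >= DELETE_THRESHOLD:
        return "delete"
    if spam_score >= QUARANTINE_THRESHOLD:
        return "quarantine"
    return "deliver"  # when in doubt, let it through
```

Raising the thresholds trades more false negatives (delivered spam) for fewer false positives (lost legitimate mail), which is exactly the direction you want the errors to lean.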

According to the reports I get from our spam filtering service, 89% of our inbound email is deleted as spam, 1% is quarantined as likely spam, and the remaining 10% is delivered as normal email. That translates to about 2.7 million spam messages a year that never hit our inboxes. Under that kind of barrage, I’m surprised anyone finds it surprising when a single unwanted message sneaks through.
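A quick back-of-the-envelope check of those numbers (assuming the 2.7 million figure is the 90% that gets deleted or quarantined):

```python
blocked = 2_700_000         # spam deleted or quarantined per year (the 90%)
total = blocked / 0.90      # implied total inbound mail: ~3 million/year
delivered = total * 0.10    # mail that actually reaches inboxes: ~300,000/year
per_day = blocked / 365     # roughly 7,400 spam messages blocked every day
print(round(total), round(delivered), round(per_day))
```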

That’s what I consider a good sign: if end users are surprised when they get a single spam, it means our filters are doing a pretty darn good job.

I hope that puts things in perspective.