Heads-Up for LinkedIn Users

If you have a LinkedIn account, stop what you’re doing and change your LinkedIn password immediately. I’m not kidding–just do it. Once you’re logged in, click on your name near the upper-right corner, click Settings from the menu, click the Account tab near the lower-left corner, and click Change password.

Now that you’ve changed your LinkedIn password, think about all of the other websites where you have accounts–did you use the same (now-probably-hacked) password on any of those? If so, go change those, too (and don’t use the same password this time). If you use the same credentials across multiple sites, all an attacker needs to do is crack one of them, and then (in principle) they own every other account with the same username and password.

Done? Great! So here’s what’s going on:

The social networking website LinkedIn is investigating claims that more than 6 million passwords were stolen and uploaded to a Russian-language web forum today.

That was yesterday, June 6. To be clear, it was actually cryptographic hashes of the passwords that were stolen–not the plain-text passwords themselves–but LinkedIn was using an insecure technique to generate the hashes (unsalted SHA-1). I won’t write here about why that’s so easy to crack–Steve Gibson had a good discussion about this in his Security Now! podcast, episode 356 (the transcript is not up on that page yet as of this writing, but he should have it posted soon). For some good guidance on choosing passwords that are resistant to the kind of attacks (“rainbow tables”) that are effective against unsalted hashing schemes, see Steve’s Password Haystacks page.
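To illustrate why salting matters, here’s a quick sketch in SQL (MySQL’s SHA1() and CONCAT() make the point concisely; the password and salt values are made up for the example):

    -- Unsalted: every user with this password gets the identical digest,
    -- so one precomputed rainbow table cracks all of them at once.
    SELECT SHA1('monkey123') AS unsalted_hash;

    -- Salted: a random per-user value is mixed in before hashing, so
    -- identical passwords yield different digests and precomputed
    -- tables are useless. (The salt here is a made-up example.)
    SELECT SHA1(CONCAT('x9Qz7wAB', 'monkey123')) AS salted_hash;

Run the unsalted version twice and you get the same digest both times; that determinism is exactly what rainbow tables exploit.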

Converting to Project Connection Across Multiple Packages in SSIS 2012

I’m migrating a Business Intelligence project from SQL Server 2005 to SQL Server 2012. Microsoft has, overall, done a great job with their development and migration tools, and some of the new features of SQL 2012 are great and will save me a lot of time going forward. One neat new feature in SQL Server Integration Services (SSIS) is Project Connections: you can define a connection at the project level, and all packages in the project automatically inherit a reference to that connection.

So this project I’m migrating has maybe 40 packages, many of which had the same two connections (primary source application and the DW database). In SQL Server Data Tools, you can open a package, right-click on a connection, and “Convert to Project Connection.” So far, so good. Problem is, all those other packages that have a connection of the same name will not inherit the project connection because the local one overrides it (by design). And if you open another package and delete the local connection, every task and data flow component that used that connection gets the dreaded red “X” icon–they don’t automatically revert to the project-level connection with the same name. Best I can tell, the only way to fix it in SSDT is to reconfigure every one of those broken tasks and components. The Internet is full of articles showing how to convert a connection in one package, but nothing gave me any clue what to do with the other 39 packages. I couldn’t accept that I would have to do all that–there must be a better way.

Generating a Range of Dates in MySQL

Working on a report from a MySQL database, I needed a table of all dates for the next year. With SQL Server (2005 and later) there’s an elegant recursive-CTE method to do this, but I couldn’t find anything similar for MySQL. All the solutions I found involved temporary tables, loops, and/or stored procedures–none of these were viable options for me because it’s a production database and I can’t just go changing things. I needed a simple query, and since I couldn’t find one, I made one.

It’s ugly, but it did the job.
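To give the flavor of it, here’s a minimal sketch of the standard digits-table trick, which meets those constraints (a single SELECT; no temp tables, loops, or stored procedures), though the details of my actual query may differ: build the numbers 0–999 from three ten-row derived tables, then add each number as a day offset to today’s date.

    -- Generate the next 366 dates starting from today, using only a SELECT.
    SELECT DATE_ADD(CURDATE(), INTERVAL ones.d + tens.d * 10 + hundreds.d * 100 DAY) AS dt
    FROM
      (SELECT 0 AS d UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
       UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) AS ones,
      (SELECT 0 AS d UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
       UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) AS tens,
      (SELECT 0 AS d UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
       UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) AS hundreds
    WHERE ones.d + tens.d * 10 + hundreds.d * 100 < 366   -- one year, allowing for a leap year
    ORDER BY dt;

The comma-separated derived tables form an implicit cross join, so the three digit tables yield every number from 0 to 999 without touching the database schema.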

Active Directory Single Sign-On for Linux Intranet Servers

I mentioned a while ago that I have a Linux web server set up with Kerberos SSO in our AD domain. Setting it up was a lot more tedious than it seems like it should have been. I found bits and pieces of useful information here and there, and some step-by-step guides to help with specific sub-tasks, but I couldn’t find a good, intranet-specific guide to help me understand the big picture—what pieces I needed (and didn’t need) and how they fit together. So here’s part 1 of my attempt to rectify that situation (part 2 will be the WordPress integration—I’m still working on that part).

Intranet Milestone: Transparent Authentication

I’ve started a project to move the front-end of our intranet from SharePoint to WordPress (SP is just too icky to do any serious front-end work with). The plan is to make WordPress the front-end and CMS for news-type content, keep SharePoint for file-library and calendar-type stuff (at least for now), and use the SP web services to integrate the SP content into WP. All of the various authentications involved must be transparent to the end-user.

Goal #1 was to get all the Kerberos stuff worked out so that Apache would transparently authenticate users against Active Directory (assuming they’re logged into a Windows client machine with their domain account—a reasonable assumption for an intranet, although a good experience logging on from an iPad or other non-domain client is also desirable). It took a bit of trial and error, but I got it working! WooHoo!!!

Goal #2 will be to fire up WordPress and get it to recognize that Apache already knows who the user is, create a new WordPress account if it doesn’t already exist, and log the user into WordPress.

This should be fun… ;-)

The Problem With GTD

I’m a fan of David Allen’s Getting Things Done, but it suffers from one major shortcoming, at least for me: it offers some great methods for managing inputs and outcomes, but it is of little help for managing knowledge in a usable electronic form, largely due to its reliance on paper as a least-common-denominator representation of ideas. Paper is inherently disconnected, and any given piece of paper can only be in one place at a time. It seems to me that these two factors are too constraining in today’s always-on world. That’s why I’m working on this project: to liberate as much of the GTD process as possible from paper while preserving the parts that work well for me. We’ll see how it goes.

[To be fair, GTD was published in 2002, before Twitter, RSS, Instapaper, Remember the Milk, iPhone, iPad, online banking, and ubiquitous connectivity. I doubt any of my nine-year-old work has held up so well in the face of such amazing change.]

Knowledge Work: Marshaling Inputs

I’m beginning a personal project to help me manage the barrage of different inputs I juggle every day. I know I’m not alone in this, so I’ll be sharing my thoughts here as I work through this project. I don’t know what form the end-result will take—could be software, could be a change of my habits or mindset, I don’t know.

Step 1 was to list my main sources of input—email, IM, our help desk application, etc. The list was dizzying, but as I stared at it in disbelief, two dimensions emerged:

  1. How synchronous is the communication medium?
  2. Is the information already in the form of text on my computer that I can copy and paste wherever I need it?

Here’s the result:

[Chart: my input sources plotted on those two axes, synchrony vs. ready-to-paste text]

Once I organized the list on those two axes, I noticed a couple of interesting things. First, IM, Twitter, and TXT messaging are sui generis, at least among my primary sources. Maybe that’s why these are also my favorite sources? Second, the disconnectedness of some media is a significant barrier to turning information into action.

Next steps:

  1. Listing desired results (file information for later retrieval; put on a to-do list; hold and wait for more information; open a help desk ticket; schedule a meeting; read later; correlate with other inputs; etc.).
  2. Analyzing the various paths from inputs to results.

Should be fun…

Beware the Limits of Reductionism

There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy. – Shakespeare, Hamlet

I said before that generalization, patterns, and abstraction are powerful ideas, but they have their limits. It is useful to reduce a thing to its core principles, but beware! Take reductionism too far and you can lose the essence of the thing.

Poetry is far more than words and meter; man far more than mammal; and reality far more than relativity or quantum mechanics. There’s nothing wrong with thinking about music in terms of the temporal and harmonic relationships between sounds, but if you think of it as only that, you’ll never understand why some music moves you.

Generalization, Patterns, and Abstraction

εν αρχη ην ο λογος (“In the beginning was the Word”) – John 1:1

Our world behaves in consistent, predictable ways. If it were not so, biology, chemistry, physics, mathematics, philosophy, economics, engineering, medicine, and countless other disciplines would simply not work. Every discipline studies a range of specific phenomena and aims to distill the detailed observations into general principles or patterns of behavior. It is this ability to generalize that allows us really to know anything at all.

In software engineering, certain problems come up repeatedly. How to sort a collection of numbers, how to manage allocation of system memory, and how to maintain good application performance as the number of users increases are just a few examples. Some very smart people have been working on these problems for a long time, and the solutions they have documented are known as design patterns (or sometimes just patterns). Other disciplines have similar patterns—in mathematics, for example, someone might work out an elegant method for using matrix arithmetic to solve systems of differential equations. Similarly, in medicine, physicians follow best practices for diagnosis and treatment.

We’ve seen how each discipline develops patterns to understand and solve problems within its own domain—and many of these, like General Relativity, are truly powerful in their own right. But what has been fascinating me lately is abstraction—taking an already powerful idea and stripping away the domain-specific parts, until you’ve liberated the core idea, the pattern-behind-the-pattern. Then you begin to see how that core pattern applies across wildly divergent disciplines. In the beginning was the Logos—the organizing principle on which everything else rests.

I’ll go into more detail in future posts, but just take redundancy as one quick example: we find redundancy at work correcting errors in computer hard drives and on networks, improving readability in the English language, correcting mutations in DNA, functionally replacing damaged neurons, stabilizing financial markets, and in many other places.

Powerful Ideas

Ideas are the most basic of tools with which we understand and influence our world. And like tools, not all ideas are created equal—some ideas are more powerful than others. What makes an idea powerful?

  • A powerful idea conforms to absolute truth—the way the world actually is, not necessarily the way we think it is or want it to be.
  • A powerful idea has a broad scope—it can be appropriately applied across a variety of disciplines, explaining diverse phenomena or solving diverse problems (or, if you prefer to put it this way, solving the same problem across diverse domains).
  • A powerful idea is elegant, accomplishing its explanatory or functional purpose with a minimum of “moving parts.”
  • A powerful idea is fundamental—it is a solid foundation on which to build other ideas.

This post is the first in a series called Powerful Ideas, which will explore several of these ideas, from a variety of perspectives. I have broad interests—technology, photography, religion, science, philosophy, mathematics, music, language, and psychology—and I have been thinking a lot lately about the common threads that run through some or all of these. I hope that means I’ll have something interesting to say.

Internet Wiretap Bill Misses the Mark

Charlie Savage reported Monday in the New York Times that the Obama administration is seeking legislation that would require “back-doors” in all encryption products and services in the US. Of course, they cite terrorism as a primary motivation.

How best to balance the needs of law enforcement (and of government in general) with the privacy and liberty of the citizen is an age-old question. While I sympathize with the needs of law enforcement, the Internet wiretap plan simply will not accomplish its stated purpose.

When privacy advocates complain about video surveillance or airport screenings, the counter-argument has always been “If you’re not doing anything wrong, you don’t have anything to worry about.” (That argument assumes that law enforcement officers will use those systems only for their intended purposes, but we’ll leave that aside for now.) The point is that when you’re securing a place—a bank or airport, for example—the security measures apply equally to everyone who goes to that place.

But it’s different when you’re dealing with things. If you mandate that a certain type of thing T must have property P, and it’s illegal to make or possess a T without P, then law-abiding manufacturers will make their Ts with P, and law-abiding citizens will use Ts with P. But what’s to stop a criminal or terrorist from importing their Ts from a country without the stupid P-law? This tips the balance to the bad guys’ advantage in two important ways.

First, the world already has robust, unbreakable, back-door-free encryption technology. The criminals will just use that. As with gun control legislation or nuclear non-proliferation treaties, if you outlaw strong encryption, only outlaws will have strong encryption.

Second, if a back door exists, the bad guys will figure out how to exploit it. History proves that. So not only will the bad guys have strong encryption that even the government can’t break, but the good guys will be forced to use encryption that the bad guys can break. It will be that much easier for them to steal money and identities. The law-abiding citizen and the government alike will be powerless to stop them.

So if this bill becomes law, it will accomplish precisely the opposite of its stated purpose. The government will still be powerless to eavesdrop on criminal and terrorist communications. Meanwhile, the good, honest citizen will be rendered powerless as well. That’s a situation truly to be terrified of.

When Low Tech Is the Best Tech

We’ve been thinking about developing a quick application to replace a paper HR process—should be a simple state machine with four possible states: Submitted, Accepted, Rejected, and Completed. But then we realized we would need email notifications and a coherent security model.
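The state machine itself would have been the easy part. Here’s a sketch in SQL (the table and column names are hypothetical, and the transition set is a plausible guess at the workflow) showing how little code the four states need compared to the notification and security plumbing around them:

    -- The four possible states of a request.
    CREATE TABLE request_state (
      state VARCHAR(20) PRIMARY KEY
    );
    INSERT INTO request_state (state)
    VALUES ('Submitted'), ('Accepted'), ('Rejected'), ('Completed');

    -- The legal transitions; application code (or a trigger) would consult
    -- this table before changing a request's state.
    CREATE TABLE request_transition (
      from_state VARCHAR(20) NOT NULL,
      to_state   VARCHAR(20) NOT NULL,
      PRIMARY KEY (from_state, to_state),
      FOREIGN KEY (from_state) REFERENCES request_state (state),
      FOREIGN KEY (to_state)   REFERENCES request_state (state)
    );
    INSERT INTO request_transition (from_state, to_state)
    VALUES ('Submitted', 'Accepted'),
           ('Submitted', 'Rejected'),
           ('Accepted',  'Completed');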

[Diagram: the personnel change request process]

These requirements—workflow, notification, and security—are handled reasonably well in the old paper model. Not perfectly, but well enough. These mechanisms are ingrained in the way people do their work, but to implement them in a computer application, we would have to build every one of them from scratch.

It quickly became more complicated than it was worth, a good reminder that sometimes low tech is the best tech.

The Enterprise Information Protection Paradigm

It used to be that network infrastructure was one of an organization’s most valuable assets and security was geared toward protecting the infrastructure; but costs are falling, and the network has become a commodity.

Meanwhile, the volume and value of information stored electronically are growing rapidly. For this reason, Dan Geer advocates a paradigm shift in information security, which he calls the Enterprise Information Protection Paradigm.

We suggest that this paradigm be called enterprise information protection (EIP). We say “enterprise,” in that, for most firms, data is literally who they are; “information,” …because this data has future value; and “protection” because protecting value is the first responsibility of boards and officers.

In practical terms, EIP means focusing our security efforts at the point of use—every point of use—“where data-at-rest becomes data-in-motion.” It means insisting on secure operating systems, applications, and procedures. And it means monitoring the use of information:

[EIP] is, to the firm, what a conscience is to an individual—that second brain that watches the first with the power to detect bad choices and to act on what it sees. We do not expect perfection in applying EIP any more than we expect perfection of the conscience, but … the goal is worth it.

Focusing security resources at the point of use is not a new concept—Bruce Schneier has advocated that as a technical security tactic for years. And it’s certainly not new to say information is an organization’s most valuable asset and that responsibility for information security goes all the way up to senior management. What I find compelling about this article is that it does a decent job of packaging these concepts together into a single, coherent paradigm.

Dan’s article is a bit long, and you have to slog through clichés like applying the theory of evolution to information security (do they have editors anymore?), but it’s worth a look.

Recipe: Missile Burgers

I was blown away by how good these are. I inherited this recipe from a friend. I can’t find it anywhere on the Web, so I’m putting it here. Enjoy!

Ingredients:

  • 1 lb Hamburger
  • 1/2 cup Ketchup
  • 1 Tbsp Mustard
  • 1 tsp Worcestershire sauce
  • 1/2 tsp Chili powder
  • Onion powder and garlic powder to taste
  • Hamburger buns
  • Shredded cheddar cheese

Directions:

  1. Brown hamburger; drain.
  2. Add ketchup, mustard, Worcestershire sauce, chili powder, onion powder, and garlic powder to hamburger and mix.
  3. Place halves of hamburger buns open-face on cookie sheet.
  4. Spoon hamburger mixture onto half-buns; top with shredded cheddar cheese.
  5. Bake at 350°F until the cheese is melted.

Book Recommendation: Getting Things Done

Stressed at work? I highly recommend Getting Things Done by David Allen. The main thing I learned from GTD was how to manage my email—keeping my inbox empty and using a single folder for archived messages. It’s been several months, and I need to read it again, but even the few tips I remember from my first reading have helped me manage an ever-increasing workload without a mental meltdown.

The Spam That Got Through

All of my company’s inbound and outbound email goes through a security service that scans for spam and viruses. From time to time I get an email from someone saying that they got a message that they consider spam. I see that as a good sign. Here’s why:

Spam filters are machines, with some human input to fine-tune the filter criteria, doing the best job they can. The algorithms are ever-improving, but they’re still just computer programs.

Also, spam filters read mail, not minds—some of what they see looks enough like legitimate email that it is allowed to pass through. If I, a human, were reading our inbound email feed, I probably would allow many of the “spam” messages, too. It’s not possible for man or machine to know the mind of every recipient or how they would classify every message they receive.

And the humans that fine-tune the filter criteria tend to err on the side of caution: a false positive—deleting a sales lead, a message from an attorney, etc.—is far more costly an error than a false negative—the spam that got through.

According to the reports I get from our spam filtering service, 89% of our inbound email is deleted as spam, 1% is quarantined as likely spam, and the remaining 10% is delivered as normal email. That translates to about 2.7 million spam messages a year that never hit our inboxes, out of roughly 3 million total inbound messages. Under that kind of barrage, I’m surprised anyone finds it surprising when a single unwanted message sneaks through.

That’s what I consider a good sign: if end users are surprised when they get a single spam, it means our filters are doing a pretty darn good job.

I hope that puts things in perspective.