On Tigers and Teams
As many of you who read this blog know, in addition to doing security, I also like to go to zoos and take photos. (You can see my work here.) It is rare that this hobby overlaps with my life in security, but once in a while, it does. In May, I was in Washington, D.C. for a Sophos conference. After the conference (as one does), I went to the National Zoo.
I got there before opening and was taking photos. I saw these signs…
… and made a quick joke about the zoo not wanting people jumping in to pet the tigers. Then, however, I saw this, and everything changed:
Iterative Improvement and Data Leaks
Over the weekend, social media sites lit up with news about Edward J. Snowden’s travel plans. Leaving aside the questionable “wisdom” of keeping the government tracking you informed as to which countries you plan to flee to and which flights you plan to take, the practice of “stay one step ahead of your trackers” is a time-tested technique. Though it results in some rather humorous jokes, fundamentally, it works.
What also works is learning from the mistakes of others.
In this story, we have two individuals, Edward J. Snowden and Bradley Manning. We also have a fairly large organization that first trusted, then persecuted them. We can call this “the U.S. Military,” “the U.S. Government,” “the Obama Administration” or “the Intelligence Community.” However, all of these terms, I think, suggest a level of uniformity that doesn’t really exist. The various departments involved seem to operate more as fiefdoms than as cogs in a massive and efficient machine. Why?
Let’s look at the Manning story: In May 2010, he was arrested for providing data to WikiLeaks about the way the U.S. behaved during the conflicts in Iraq and Afghanistan. After his arrest, he was jailed and, according to some, treated quite poorly. While military treatment during detention differs significantly from that of private individuals, it seems as though he:
- Was kept in a small cell with no window.
- Was forced into a regimented sleep schedule.
- Was required to remain visible at all times (no sheets, etc.).
Later, he was classified as a suicide risk and, according to the rumors, had his clothing and glasses taken away and was forced to remain in his cell 24 hours a day. This classification was later lifted, and he was moved to a lower security prison.
Now, if you saw evidence of what you believed to be unethical behavior and felt obligated to report it, how would those consequences affect you? For many of us, they may well be enough to prevent us from leaking the information — but as Snowden shows, that’s not the case for everyone. Snowden’s flight to Russia and, if he ever gets there, to Cuba and then to points beyond, indicates that he saw the risks of his choice and is taking steps to mitigate them. He may not be doing it very well, but it is important to note that he has learned from Manning and is trying to do a little bit better. Odds are high that the next person in this position will do even better.
This is because attackers gravitate towards open communication and learn quite quickly.
In contrast, we have a defense side that was taken twice by the same attack — three years apart. Sure, the details vary a bit, but all in all, sensitive data was taken off a system, and the identity of the person who stole it was only made public because of their own actions. In other words, three years after the first attack, the distributed departments that make up the U.S. government and military have NOT learned how to protect their data and identify the individuals that have access to it.
Granted, it’s a big problem. However, failing to address it is also a big problem, as illustrated by the political risks the administration faces today. In general, attackers learn faster than defenders, and then, during an event like this, the defenders have to expend even more resources attempting to either contain or respond to successful attacks.
What will things look like in three more years? Will we have a data leaker who is better at keeping anonymous? Will the data released be even more damaging to the government?
Who do you think will learn the Snowden lesson better? The next leaker or every department of the government?
History is not on the side of the defense.
A Security Lesson from the Dinosaurs
Last week, I got my copy of All Yesterdays (not the used Amazon versions, as the pricing algorithm is failing hilariously). I’ve been a fan of Darren Naish’s work since I discovered Tet Zoo years ago. It turns out that in addition to writing amazing articles on the cladistics of extinct crocodilians, he is also good at writing about paleo art.
You might think that paleo art is art done by prehistoric people, but no. In this case, it is art done to provide imaginative reconstructions of life from fossils. I imagine that most people these days are aware of the belief that many of the two-legged dinosaurs were feathered. However, as it often turns out, things are more complex than that. This book explores the history of dinosaur art and, along the way, draws on what we know about natural history, camouflage and mating habits of contemporary species.
So why am I posting this review on a blog that is (more or less) focused on information security?
Well, in addition to being about pretty pictures of dinosaurs, this book is also about an industry working over time to make guesses about the truth, analyze its mistakes in the face of new evidence and, through a constant stream of screw-ups, come closer and closer to consensus. All along, everyone involved has had to adjust constantly to the shifting truth.
In effect, it is a book about evolution … the evolution of species … the evolution of understanding … and the evolution of the understanding of evolution, so to speak. This happens in all industries, but the younger the industry is, it seems, the less we like to acknowledge that we don’t have all the answers. In Information Security, we don’t like to be wrong and we particularly don’t like to be wrong in front of other people. This is understandable, as when we make a mistake in security, people could get hurt. However, when we don’t get a chance to discuss our mistakes as a community, we don’t get a chance to improve.
Today, there is some discussion in the community, but mostly within closed mailing lists and at conferences. Unlike in the realm of paleo art, our mistakes tend not to be public, so there are fewer eyes on them and fewer opportunities to get better. Fortunately, there are more hackers than professionals who draw dinosaurs, so we do get an advantage of numbers. Still, there is ample room for improvement.
This book explores the problems that arise from:
- Taking a superficial view of evidence
- Not comparing logical conclusions to examples of modern data
- Avoiding analysis and basing beliefs on the misguided work of others
- Looking strictly at hard evidence and ignoring behavior
- Hyper-focusing on dramatic scenarios
It’s that time again.
Whenever a major media event happens (like Hurricane Sandy), we are inundated with news. Sometimes that news is useful, but often it merely exists to create FUD… Fear, Uncertainty and Doubt. While I have not personally seen any malware campaigns capitalizing on the event yet, it is inevitable. The pattern is generally as follows:
- Event hits the news as media outlets try to one-up each other to get the word out.
- People spread the warnings, making them just a little bit worse each time they are copied.
- Other people create hoaxes to ride the wave of popularity.
- Still other people create custom hoaxes to exploit the disaster financially.
A few minutes ago, at least in my little corner of the internet, we hit stage 3 when this image was posted:
(From here.)
Now, as someone who plays with photography, I was a bit suspicious, but as a security person, I can actually prove some things here.
The first tool I want to discuss is FotoForensics. Check out their analysis.
See how the Statue of Liberty and the land on which she stands are much brighter than the background? That indicates that one image has been pasted on top of the other, so we know it’s fake.
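Tools like FotoForensics rely on error level analysis (ELA): resave the JPEG at a known quality and diff it against the original, and regions edited after the last save stand out as brighter. Below is a toy, dependency-free sketch of the idea, simulating lossy compression with coarse quantization; the pixel values are made up purely for illustration.

```python
# Toy sketch of the idea behind error level analysis (ELA). Real tools
# recompress the actual JPEG and diff it against the original; here we
# simulate lossy compression by snapping values to a coarse grid.

def compress(pixels, step=16):
    """Simulate lossy compression by snapping each value to a grid."""
    return [[(v // step) * step for v in row] for row in pixels]

def error_levels(pixels, step=16):
    """Per-pixel difference between an image and its recompressed copy."""
    recompressed = compress(pixels, step)
    return [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(pixels, recompressed)]

# A "photo" that was saved once (values already sit on the grid), then had
# one region pasted in afterward (an arbitrary value, not on the grid).
saved = compress([[37, 52, 201], [90, 14, 166]])
saved[0][2] = 203  # the pasted-in pixel

# The pasted region shows a high error level; the untouched background is 0.
ela = error_levels(saved)
```

Running this, only the pasted pixel produces a nonzero error level, which is exactly the bright-region effect visible in the FotoForensics output.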
Sometimes, though, this trick doesn’t work. If someone is making a good hoax, they can change the error levels to prevent easy detection. That’s where our next tool comes in. TinEye is awesome.
Look what happens when I do a reverse image search on the suspicious file here. (TinEye results expire after 72 hours, so if you’re slow to read this, just paste the URL of the photo into their search box.)
TinEye, by default, is going to try to find the best match. But that’s not what we want. We want the original. Luckily, when people make hoaxes, they usually shrink the image to make it harder to find the signatures of a hoax. So we just click to sort by size, and there we have what is likely the original:
ETA: Original can be found in this set by Mike Hollingshead.
Then it lists a bunch of sites that have stolen this image to use without credit. (That’s a different post.) You can then click on the “Compare” link for the likely original and see what they did. By flipping between the versions, you can see that they added the Statue of Liberty, the water and the boat. They also shrunk the image and made it darker… because darker is scarier, apparently.
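Why does the shrunk, darkened copy still lead back to the original? Reverse image search engines match perceptual fingerprints that survive resizing and brightness tweaks. TinEye’s actual algorithm is proprietary; the sketch below uses an average hash (“aHash”), the simplest such fingerprint, over made-up toy pixel grids.

```python
# Average hash ("aHash"): one bit per pixel, set when the pixel is brighter
# than the image's mean. Uniform darkening shifts every pixel and the mean
# together, so the above/below-mean pattern (and thus the hash) survives.

def average_hash(pixels):
    """Fingerprint: 1 bit per pixel, set if the pixel exceeds the mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(h1, h2):
    """Number of differing bits between two fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 40], [60, 180]]                       # toy 2x2 "image"
darkened = [[v // 2 for v in row] for row in original]  # hoaxer's darker copy

# Every pixel changed, yet the fingerprints are identical, so a search
# engine indexing by fingerprint still matches the copy to the original.
distance = hamming(average_hash(original), average_hash(darkened))
```

In practice the grid is larger (typically 8x8 after downscaling, which is also why shrinking the image doesn’t help the hoaxer), and a small Hamming distance rather than an exact match is treated as a hit.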
The important thing to realize here is that the attacker is trying to manipulate you. By spreading fear, they are making you more susceptible to future attacks. By taking advantage of your uncertainty and doubt, they put you in a position where you will do unwise things to gain an element of certainty in your life. Does this matter that much in an image hoax? Probably not. But it does matter when you start getting fraudulent emails convincing you to “click here” to help victims of the hurricane.
Uncertainty and doubt can work against you, but it can also work for you. When the attacks come … likely in a few hours, approach them with suspicion. If you’re in the path of the storm, trust the names you recognize, like Google and The National Weather Service. If you’re not in the path of the storm and want to send aid, go with The Red Cross. If anyone else you don’t know asks for your money or your clicks, ask yourself what they have to gain.
Sprinting through Security
We’re all familiar with old school consultants. These are people hired at $20/hr and rented out for $150/hr. It’s good business, if you can get it. All too often, however, the work is neither enjoyable for the consultant nor useful to the client. After years of trying and failing to make the old model work, I decided it was time to throw it out and start over. Thankfully, RJS agreed. As of today, we are one year into the process of reinventing security consulting.
The fundamental difference is that we’ve fully embraced the fact that the idea of 100% security is a trap. You can never be completely secure, so why base a security project around the idea that you can be? Instead, we focus on achieving a measurable improvement over “today.” Different businesses have vastly different security needs, so once you shift the goal away from “find and fix all the problems” to “strike a balance between defense and response,” myriad solutions become available.
Having a large number of solutions is great, as we can select the one that fits your company’s unique situation the best. But remember, it’s not perfect and will need constant attention to avoid “analysis paralysis” and to stay current with new security trends. To combat this, we look at the second key difference: time-bound tasks, or as we call them, security sprints.
With anything you do, there is one resource that vanishes completely — time. Other consulting approaches focus on minimizing either money or an amorphous concept of risk. The catch is that clearly pre-identifying “risk reduced” or “money saved” takes time. Since time is billed, it can cost a significant amount of money just to identify how much money you’re saving!
Want to know how much more secure a project will make you? Pick a small project that can be done in a week or two, do the project, then measure. There’s no guess-work, no scope-creep and most importantly, no spending more money than required to improve your defenses.
This process affords another advantage we did not anticipate. In many cases, security fails because the people who put a system in place are often not the ones responsible for maintaining it. Since security tends to weaken over time as attackers constantly improve, it is imperative that people explore alerts, identify what they mean to the business and take appropriate action. When the people who must manage the systems are not involved with the initial configuration, they tend to lose a lot of time tracking false alerts or, worse, miss legitimate issues.
Since our consulting process is time-bound and focused on helping improve security after we leave, we work on a lot of small projects. These projects are designed so that, when done, they can be absorbed into the business’s existing operations. We then come back for iterative tuning engagements and, over time, help maximize the business’s use of technology. This avoids the common problem of security being “someone else’s” issue, while minimizing the disruption that new technologies can cause.
In the end, after a year’s experimentation, we’ve found that a cyclical short-project consulting model has given our clients a level of security far greater than the traditional defense-only approach. While this didn’t surprise us (after all, that’s why we did it), we were surprised to find that these engagements generally came in 25% to 50% lower in cost than the traditional model. We’ve done security assessments, implementations and strategy planning sessions and, in every case, have achieved better security at a lower cost.
Please contact us if you’d like to learn more about our sprint model and how it can help you achieve a better state of security at a fraction of the price.