While I do live in the San Diego area, I was not downtown at the time of the infamous San Diego Fireworks “Epic Fail” on July 4. We were lazy this year and just watched the San Diego Fair fireworks from the comfort of our hot tub. However, there were over 400,000 other people who got to witness the mother lode of poor coding: approximately 7,000 fireworks simultaneously went up in smoke (literally and figuratively) due to a software glitch. The significant thing about this particular mishap (besides the fact that no one was injured) is that it provides a rather graphic illustration of what can happen when a company has less-than-ideal secure coding practices.
In the security consulting world, we are constantly pontificating that companies should focus on writing secure code as well as properly testing it. Historically, many application developers have focused on the functionality of their code, prioritizing the bells and whistles while frequently overlooking how robust the code is and how securely it was written. I strongly agree with security pundits such as Bruce Schneier, who say, “I don’t care how it works, I just care how it can be broken.”
Clearly, in this case, there was something that was grossly overlooked. What can we learn from it?
The first thing I learned was that today’s elaborate fireworks displays are controlled by computers and computer programs, and not by brave souls who light the fuses of various fireworks. I guess that shouldn’t be surprising in this day and age, and maybe I’m the only kid on the block who was blissfully unaware.
The other thing I learned was about Garden State Fireworks, the New Jersey-based company that provides these services and has been doing so for well over 100 years. Of course, the computers only came into the equation sometime within the past decade or two. In this case, the issue was not with the regular tried-and-true code that is used to launch the fireworks (and is presumably secure), but instead was tied to a “back-up file” that caused the process to go awry. The company claims that “an unintentional additional procedural step occurred in the loading process which allowed the creation of an anomaly that doubled the primary firing sequence.”
So, apparently, this issue was not tied to poor coding, or even to poor code review. Instead, it was tied to an automated process that was less than ideal, and the failure occurred even though its probability was minimal. I’d be willing to bet all the bottle rockets I launched as a kid that this fireworks company has already figured out a way to revamp its processes to include verifying back-up files and processes.
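To make that idea concrete, here is a minimal sketch of what such verification might look like in Python. All of the names and the file format here are hypothetical (the company has not published any details of its system); the point is simply that a back-up file can be checked against the primary before loading, and the sequence itself can be checked for duplicated firing cues, so that a doubled sequence fails loudly instead of launching everything at once.

```python
import hashlib
import json


def file_digest(path):
    """Return a SHA-256 digest of a file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def load_firing_sequence(path):
    """Load a firing sequence from a JSON file.

    Hypothetical format: a list of {"cue": int, "time_s": float}
    entries, one per firing event.
    """
    with open(path) as f:
        return json.load(f)


def verify_and_load(primary_path, backup_path):
    """Refuse to load if the back-up diverges from the primary,
    or if the sequence contains duplicated cue numbers."""
    # 1. The back-up must be byte-identical to the primary.
    if file_digest(primary_path) != file_digest(backup_path):
        raise ValueError("back-up does not match primary firing file")

    # 2. No cue number may appear twice (a doubled sequence).
    sequence = load_firing_sequence(primary_path)
    cues = [entry["cue"] for entry in sequence]
    if len(cues) != len(set(cues)):
        raise ValueError("duplicated cues detected in firing sequence")

    return sequence
```

The design choice worth noting is that both checks are fail-closed: any anomaly aborts the load rather than proceeding with a best guess, which is exactly the behavior you want when the cost of a wrong output is seven thousand shells at once.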
How does this apply to companies that are trying to protect data instead of fireworks? Be vigilant in considering all possible failure vectors when reviewing your code and processes. Perhaps there are areas where you can improve this thought process within your Secure Development Life Cycle (SDLC). After all, it’s nice to watch other people’s fireworks, but you certainly don’t want them in your own organization, unless of course you are a fireworks company.