Oh boy… tough question. The answer, unfortunately, is a bit complex and often a tad offensive to some… and I can’t address all the problems in a blog post, but here goes an attempt to outline some of the fundamental issues.
There are many aspects to what makes things insecure, but as in all sciences, understanding the fundamental forces both simplifies the explanation and gives any understanding more force and wider reach when searching for solutions.
As in most things, the answer typically is: the problems you see are merely symptoms, and the greater problem is that the system or model is broken. Therefore, working to fix the symptoms never really solves the problem. With that in mind, here goes… I will list them first and explain them second.
- Paradigm Shifts
- Domain and Range
Computer systems and IT itself from the early days, circa the ’60s to about the ’80s, were considerably different from how things are done today. For starters, computers were not yet networked together in a generic fashion and across wide geographical areas.
Earlier IT was mostly serial communications within an organization and over private, closed-circuit phone lines. In particular, an early problem in computers was that one crashing program could utterly disable a computer or significantly destabilize it. This problem has since mostly been fixed, but it plays a key but subtle role in modern security issues; I will explain that next.
Until recently, education in security was, and still often remains, an afterthought. Additionally, the subtle issue is that many of the people educated in the period before the internet have had ZERO education in security, and any knowledge was gained on the job through experience (thus, no formalized or rigorous study with well-organized material and accepted principles).
As an example to illustrate the point: when I was a Computer Science student, I was educated about a problem in programming called Bounds Checking. For the non-programmers, you don’t need to know what that is, but my education of this concept amounted to this: “Check your bounds or your program may crash or become unstable”… that’s it…
The problem is that this issue in programming is primarily responsible for the most common security flaw, the Buffer Overflow.
Domain and Range
More or less a mathematical concept, it’s important to understand that all things math, physics, chemistry, engineering, and computer science are bound to it, and it is very poorly understood by people in IT. I could go into the technical details, but I’d rather not. It is sufficient to say that these concepts are used to map an object’s transition from one state (or set of states) to another state (or other set of states). For example, we know the sun rises in the east, reaches its height at noon, and then sets at the end of the day in the west. If at noon the sun started moving back into the east… something is seriously wrong. So the Sun’s states are understood, and anything else would be a problem. Surprisingly, programmers, both self-taught and formally educated, often don’t account for state changes which are abnormal or unexpected. (Obviously, self-taught programmers are more guilty of this than the formally educated kind.)
Failure to handle unexpected states is exactly what security exploits target.
Bounds checking, as mentioned above, is one form of paying attention to Domain and Range because it forces the programmer to understand what is, and what is not, an acceptable state.
All software contains flaws. This is a direct result of incomplete attention to Domain and Range of the component pieces of a program. Some bugs will never be found or used maliciously, others will.
That being said, software in use for many years will contain flaws and continue to do so for many more years. So, software written in the period when security education was nearly non-existent, or written by programmers not formally trained, is still in circulation. Any software based on that software will inherit those flaws as well.
“Codebase” refers to software that is ages old or relied upon by many other pieces of software.
The codebase is large, some of it quite old, much of it heavily relied upon.
Connecting back to education, those educated in the early days are still in the workforce and will remain so for at least another 10 to 15 years. The cohort getting ready to replace them when they leave the workforce is only slightly, and iteratively, more knowledgeable. The cohort after them is likely in a better situation, security-wise, having entered the educational system as security became a front-and-center issue.
Unrealistic deadlines force IT people to not fully understand a project and its technology, or to rush to complete a project by knowingly taking ill-advised shortcuts (with the likely intent of going back and fixing it later… which often never happens due to other short deadlines). Deadlines are often formed by corporate goals. It is the lack of consideration of anything but corporate goals which tends to cause deadlines, and maintenance life-cycles, to also fail in the long term.
Security is a paradigm with the unattractive feature of a short time period between changes. This is because security is always being gamed. To illustrate…
A thief walks into a house by an open door. The owner then closes the door. The next thief merely opens the door. Now the owner locks the door. The next thief enters an open window. The owner then closes and locks all the windows. The next thief smashes a window and enters anyway. The owner then puts bars on the window, the next thief picks the door locks. Finally, the owner then puts in an electronic security system to end the problem once and for all.
What does the next thief do? Easy: he waits for the owner to come home, points a gun at him, and enters the house at gunpoint.
The point here is, for every action the owner can take, the thief can find acceptable alternative options to meet his or her goals. This is called gaming the system.
It is also an example of paradigm shifting. Every time the owner changes his defensive tactics, the attacker changes their offensive tactics.
If you change tactics without understanding the problem first, and without entertaining what possible new weaknesses any plan may introduce, you have a seriously flawed plan; one that can likely be gamed.
IT is often filled with acrimonious debate over which technology is inherently better… and if someone fails to recognize that supposed superiority, they are mentally labeled a moron and grouped with similar morons. The morons create cliques or camps, and they will rarely acknowledge when a differing opinion is correct or holds any validity, as if doing so is personally costly or a betrayal of dearly held beliefs. Actually… it’s quite like religion.
This effectively stifles any truly open and intellectually rigorous debate centered on trying to understand things, instead of merely advancing one’s own agenda for its own sake. Solving problems becomes an act of trying to fit one’s cherished technology choices into any problem, even when there are better solutions.
This severely hampers sound architectural engineering.
Couple these issues and a few others together and you have a system that is fundamentally incapable of addressing the underlying problems and only just “patches” as it moves along from problem to problem.
Of course, offering up some of the problems and no solutions is just as bad… so here is what needs to happen in order to fix the problems.
First, accept that as time goes on, doing what you did in the past will not be sufficient for the future. Accept that things will change, be flexible about it.
Secondly, education. Those entering the educational system have to be exposed to formal and rigorous material. Those out of the educational system have to have periodic professional development to stay current. That being said, those who do not have, or did not have, a specific focus on security in their education or professional development MUST defer (if not be ordered to defer) to those who do. Please don’t take this as a sit-down-and-shut-up philosophy; what it means is, defer to the expert in the subject matter. Professional interaction between experts in different disciplines, with deference to one another, is the ultimate goal here.
Next, domain and range, along with codebase and cohorts, present a difficult challenge. Codebases have to be reviewed and tested against current understandings of security issues, supported by continued professional development. Moving from one technology to another to run away from a problem (or problems) only presents you with different problems; choices should be carefully considered (if not scientifically) and with due caution (and without bias).
Deadlines have to be more realistic. Project participants have to be given ample time to understand their project’s unique issues and the most viable solutions. Pushing a project for high visibility is a short-sighted and deadly game. Along with deadlines, organizations also have to be willing to allocate minimum resources. I have seen too many IT teams that are both understaffed and underfunded; this is about as good a mix as an understaffed and underfunded hospital… the outcomes will be random and quite poor.
Lastly, gaming and bias. This is the crux of the problem. It requires us to think ahead or even predict future behavior, which is incredibly difficult. But, surprisingly, it is easy to combat without having to predict anything. Bias prevents us from recognizing flaws and possible solutions. It also hampers professional relationships by intellectually separating groups within an organization into US and THEM camps that overtly and covertly sabotage each other. Attackers WILL target such divides and biases. It is a fundamental problem of defending groups who lack coherency versus individual attackers who have no such handicap, or who naturally form groups bound together solely by a shared goal. For example, consider a bank with a dysfunctional corporate culture that allows political self-interest regarding career advancement to dominate business decisions, versus some group of Russian 20-somethings who want to get rich. The 20-somethings can exploit the bank’s dysfunction because it really isn’t minding the store, so to speak, while the attackers are much more focused on meeting their collective goal. (This has actually happened, BTW.)
Combating gaming is straightforward. Be organized: understand what is valuable to you and what is valuable to the attacker, architect systems that address this on a sound technical footing, and review them periodically to be sure they meet current needs. Also, consider things that would make misuse hard to achieve. Learn more about your surroundings and the capabilities of the technologies in them.
There is a concept in biology called the Selfish Herd principle. Basically, it states: if you and I are out in the wild and we stumble upon a predator, like a lion, then in order for me to survive, I don’t have to outrun the lion… I only have to outrun you.
The concept of making misuse hard lends itself to this principle. Attackers are predators. They will always tend to go for the easy catch. Even moderate security implementations can cause many predators to hunt elsewhere.
It’s really just that simple.
As for biases, if I had an answer for that I could likely help the world achieve lasting peace.
All I can say regarding it is this: first, keep as open a mind as possible; listen, even if you disagree, and consider what you hear. Try not to be intransigent.
Intel trains its employees to “Argue as if you were right, listen as if you were wrong.” I find that eminently practical and wise. In the end, no one is saying you must agree… the wisdom here is that you should, at least, listen as if there is a possibility that something of value could be said.
Lastly, if that doesn’t convince you to stay open to possibility, I remind you of something Benjamin Franklin said to the other revolutionaries who founded the United States: “Gentlemen, if we don’t all hang together, we shall surely all hang separately.”