Monthly Archives: July 2014

Encrypting Portable Media on Windows

Why encrypt? That's easy: small devices like USB flash drives and even smaller MicroSD cards get lost, easily. If that happens, why anguish over the fact that you may have exposed your own data, let alone data belonging to the enterprise you work for (which might have negative consequences for you)?

You have choices when it comes to encryption technologies; for example, SanDisk Extreme USB 3.0 drives can be password protected. But most of these solutions have at least one unattractive feature: they require extra software to enable the encryption, they aren't usable on all computers, they're extremely slow, or they're very expensive.

If you can live without super speed, the Apricorn line of encrypting storage devices offers the best system compatibility you can get (i.e. it works with Windows, Mac OS X and Linux; essentially, anything that can read a FAT32 file system). But at $65 for a 4GB USB 2.0 drive, you might be asking yourself, "are there cheaper alternatives?"

Well, that depends. As long as you don't have to cross platforms (i.e. go from Mac OS X to Windows or Windows to Mac OS X), yes: there is one built right into Windows 7 through 8.1. Linux has an encrypting file system and so does Mac OS X. I will post instructions on each in turn.

First, let's start with Windows. You must have a Professional, Enterprise or Ultimate edition of Windows 7, 8 or 8.1 to begin.

Encrypted volumes (USB hard drives, USB flash drives, SD, MicroSD, etc.) can be read on any version of Windows 7 and above, but you must have a Pro, Enterprise or Ultimate edition to encrypt the media in the first place.

These instructions cover Windows 7; I will post Windows 8.0/8.1 a little later and the rest as time allows.

Please note before beginning: in some cases you can encrypt portable media and devices while there is already data on the device. But given that support for this can change over time and it is not a universally reliable method… NEVER encrypt media with data already on it unless you back it up first and are prepared to copy it back when done.

  1. Start by plugging in the USB or flash media you wish to encrypt, then right-click on it in Explorer. In my case, I am using a 32GB flash drive known as "G:". If you have a version of Windows that is capable of encrypting the media, you will see Turn on BitLocker… in the context menu. If you do not, then you do not have the appropriate version. If you do, select it to move on.
    [Screenshot: right-click context menu showing Turn on BitLocker…]
  2. You will next be asked how to unlock the encrypted drive. Always select Use a password to unlock the drive; the second option, using smart cards, is advanced and likely not supported anywhere on campus at this time. So check the password box and type in your unlock password twice. You don't have to use anything sophisticated, but it has to be a bit more complex than a birthdate, the names of family members, pets, your car, your dream car or essentially anything that would be easy to guess (except by you). Then hit Next.
    [Screenshot: choosing a password to unlock the drive]
  3. The next screen will ask you how to save the recovery key. This key can be used to recover data from the encrypted media if you forget your password. You can print it or save it as a file; it's a matter of personal preference, so choose whichever you are most comfortable with. However, there are some do's and don'ts. For the saved file, don't store it on the encrypted media itself; there it can't help you if you forget your password. For printed keys, please keep them private; taped to your keyboard or posted on a wall is not private, even if it is in your office.
    [Screenshot: save or print the recovery key]
  4. The next window gives you a few warnings and then lets you proceed by clicking the Start Encryption button. This will take a while; the larger the device, the longer it will take, so be prepared for anything from 10 minutes to a day or more. My 32GB flash took a little over 10 minutes; your speed will vary based on how fast your computer is and what version of USB you are using. On the outside, something like a 1TB USB 2.0 drive on a 3-year-old desktop would likely take a day or more (see the rough time estimate sketched after this list).
    [Screenshot: ready-to-encrypt confirmation window]
  5. The encryption process will show the progress.
    [Screenshot: encryption in progress]
  6. And when it's done, note the difference in how the icon for the media is displayed: it now has a lock on it. The lock appears closed when the media has not been unlocked and open when it has.
    [Screenshot: encrypted drive icon with a lock]
  7. Once this is all complete, if you right-click on the media again, you will see the Manage BitLocker menu item. From there you can manage the device.
    [Screenshot: Manage BitLocker context menu item]
  8. One of the options in the BitLocker management window is Automatically unlock this drive on this computer. Believe it or not, we recommend you select that on every computer on which you plan to use the media frequently. The main goal here is to ensure that, if your media device is stolen, the data on it is not accessible; easy access to the encrypted data on your own computers is a perfectly acceptable trade-off. The only time this is not the case is when you are using a computer that is public or shared by many people who might not be so trustworthy.
    [Screenshot: BitLocker management window]
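
As a side note to step 4, here is a rough, back-of-the-envelope way to estimate how long the initial encryption will take. This is just a minimal Python sketch under the assumption that encryption runs at roughly the drive's sustained write speed; the throughput figures are ballpark USB numbers, not measurements from any particular device.

```python
# Rough estimate of BitLocker initial-encryption time.
# Assumption: encryption proceeds at roughly the drive's sustained write speed,
# so the throughput values below are ballpark figures, not measurements.

def estimate_hours(capacity_gb, throughput_mb_per_s):
    """Estimated hours to encrypt a drive of capacity_gb gigabytes."""
    total_mb = capacity_gb * 1024
    seconds = total_mb / throughput_mb_per_s
    return seconds / 3600

# A 32GB flash drive at ~40 MB/s -> roughly a quarter of an hour
print(f"32GB flash:   {estimate_hours(32, 40):.1f} hours")

# A 1TB drive on an older USB 2.0 port at ~20 MB/s -> most of a day
print(f"1TB USB 2.0:  {estimate_hours(1024, 20):.1f} hours")
```

The numbers line up with the experience above: a small, reasonably fast flash drive finishes in minutes, while a large drive on an older machine can easily run into a day.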

Lastly, if you want to remove the encryption, there are a few things you can do, but the most straightforward is to back up the data and then format the device. Always take care to back up the data before wiping any device.

There are some downsides…

Use on Windows XP requires a download from Microsoft, and the media is not backward compatible beyond that. But you should have moved away from XP anyway, as it's no longer supported.

As with all encryption and compression technologies, a physical error on the media can cause significant damage to a file.

If you forget your password or lose the recovery key, the data is generally unrecoverable. The exception: if your system administrator(s) have set up enterprise BitLocker with the recovery option and the workstation you used to create the encrypted volume is in the enterprise's domain, then the administrators can still recover the data for you, even if you forget your password or lose the recovery key.

Argghh…. Air Pirates… Part Deux!

OMG, this is both disturbing (Security-wise) and hilariously funny…

http://www.computerworld.com/s/article/9249872/AirMagnet_Wi_Fi_security_tool_takes_aim_at_drones?source=CTWNLE_nlt_dailyam_2014-07-22

AirMagnet has modified their software to detect rogue wireless access points….. get ready for this… mounted on Parrot AR Drones (shown below).

[Image: Parrot AR Drone]

OK, OK, everyone can see drones being used to spy on everyone; those little cameras mounted on them are bound to do their share of Peeping-Tom-ery. But I have to admit, it's a stroke of genius to use them for EMF snooping (like 802.11 wireless).

Scope and Traffic Flow

When discussing the use of firewalls to filter or block traffic with people, I have often found it difficult to explain exactly what options they have and how those options work. In an effort to help clarify that…

Generally, you have to consider two things, scope and traffic flow.

Scope:

Scope can mean several things, but in essence, scope is a way to label things and decide what level of access you plan to let them have. For instance, at Stony Brook there are several scopes, listed below from lowest risk to highest. Scopes are sometimes also known as IP-ACLs (IP Access Control Lists).

  1. Private Subnets
    These are networks which, by definition, have limited access to other scopes (or none at all). You can grant them access to other scopes later, but this is a deliberate act and not the default.
  2. Selected Hosts
    A small list of computers, regardless of where they are.
  3. Subnet
    All the computers within your broadcast domain (i.e. local area network).
  4. Campus/Enterprise
    Computers on campus (or in an enterprise); this can be broken down into smaller scopes:
    a) Main Campus
    b) UHMC
    c) Life Sciences
    d) Computer Science
    e) Resnet
    f) Wireless
    g) Others
  5. Internet
    Pretty much… everything else. But even this can sometimes be scoped too: for example, by university, by corporation, by vendor, or in some cases geographically or by country.

In most cases, it can be hard to carve out sub-scopes from the Internet scope, as very often some of those scopes will change without notice. The same is true for the campus scopes, but they should change very rarely. Also, unless you are using a private subnet already, realize that most campus IPs are by definition Internet-scoped and are therefore exposed directly to the unfiltered Internet. This can be very problematic for your internal security. It is, however, very common for at least desktops and laptops to be scoped to either their local subnet or to their enterprise. Lastly, scope often determines your risk level.
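
To make scopes (IP-ACLs) a bit more concrete, here is a minimal sketch in Python using the standard ipaddress module. The address ranges are made-up placeholders for illustration only; they are not actual campus network assignments.

```python
import ipaddress

# Hypothetical scopes, ordered from most specific / lowest risk to broadest.
# The address ranges are illustrative placeholders, not real assignments.
SCOPES = {
    "private_subnet": [ipaddress.ip_network("10.10.5.0/24")],
    "selected_hosts": [ipaddress.ip_network("192.0.2.10/32"),
                       ipaddress.ip_network("192.0.2.11/32")],
    "local_subnet":   [ipaddress.ip_network("192.0.2.0/24")],
    "campus":         [ipaddress.ip_network("198.51.100.0/24")],
    "internet":       [ipaddress.ip_network("0.0.0.0/0")],
}

def classify(address):
    """Return the first (most specific) scope the address falls into."""
    addr = ipaddress.ip_address(address)
    for scope, networks in SCOPES.items():  # dicts preserve insertion order
        if any(addr in net for net in networks):
            return scope
    return "internet"

print(classify("192.0.2.10"))     # selected_hosts
print(classify("198.51.100.77"))  # campus
print(classify("203.0.113.5"))    # internet
```

The point is simply that a scope is a named collection of address ranges and checking membership is cheap; the hard part, as noted above, is keeping those ranges accurate when they change without notice.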

Traffic Flow:

Traffic flow is really a determination of what traffic will pass across the firewall. Often this is associated with a direction (inbound or outbound), and even then, for the purposes of this discussion, only with regard to who starts the network conversation. For example, if you have a web server, clearly you need both inbound traffic (a remote web browser will start the conversation) and outbound traffic (the web server has to talk back to the web browser). However, a typical desktop or laptop computer has no need for an outside computer to start a conversation with it, so its firewall should be set to disallow any unsolicited inbound traffic, within the limits of your needs and the scoping those needs require.

We often categorize this as Server vs. Non-Server traffic.

There is some confusion regarding this concept. For example, you may use Skype and want people to be able to start chatting with you. While in the strictest definition this makes Skype a kind of server (a client-server hybrid, actually), Skype and software like it are not really servers; they have communication mechanisms that aren't affected by inbound firewall rules unless we specifically go out of our way to block them.
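
Putting scope and traffic flow together: for every new (unsolicited) connection, a rule set asks who initiated it and whether the other end falls within an allowed scope. The sketch below is purely illustrative Python, reusing the hypothetical classify() helper from the previous example; it is not how any particular firewall product actually implements its rules.

```python
# Illustrative only: decide whether to allow a NEW, unsolicited connection.
# Replies within conversations we started are assumed to be allowed, the way
# a typical stateful firewall treats established connections.

# A typical desktop: no unsolicited inbound traffic at all.
DESKTOP_POLICY = {"inbound": [], "outbound": ["campus", "internet"]}

# A campus-scoped web server: unsolicited inbound only from campus scopes.
WEBSERVER_POLICY = {"inbound": ["local_subnet", "campus"],
                    "outbound": ["campus", "internet"]}

def permit(policy, direction, remote_ip):
    """Allow the new connection if the remote end's scope is permitted."""
    return classify(remote_ip) in policy[direction]

print(permit(DESKTOP_POLICY, "inbound", "203.0.113.5"))      # False: blocked
print(permit(WEBSERVER_POLICY, "inbound", "198.51.100.77"))  # True: on-campus client
print(permit(WEBSERVER_POLICY, "inbound", "203.0.113.5"))    # False: off-campus
```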

Conclusion & Considerations:

Both scope and traffic flow can be used together in almost any combination. Sometimes conflicts or oversights arise while deploying rules, so it always makes sense to verify your firewall rules are working as expected once they are in place, and it is never a good idea to add rules remotely unless they are pre-tested for functionality (more than one IT technician has locked themselves out of a device by applying a rule that was just wrong enough to cut them off from the system). Applying rules is best done while you are physically near the system being modified.

But this does give us the tools with which we can customize the rules to a given set of computers' needs while also giving them substantial protection.

So when we suggest that you consider placing your equipment behind a firewall or recommend changes to your firewall rule set, remember that we have a wide range of options and can work with you to not only increase your security but also work within your needs.

We do recommend a campus-based scope. This gives your systems access to all campus resources while blocking out the baddies from outside of campus. It isn't perfect, as there can be malware-infected computers on campus that can be used to reach you despite your intentionally blocking unsolicited inbound traffic from the Internet. Very often these computers are in ResNet and Wireless, but they certainly are not limited to those.

Lastly, there is a school of thought that prefers active/dynamic defenses, IPS or IDS, over scoping rules (aka IP-ACLs), which are inherently static. We agree this is the best approach; however, if you do not have IPS or IDS, scoping rules are really your only defense. Even with IPS/IDS, scoping and limiting traffic flow adds another defensive layer should any other layer fail, which in reality does happen, because attack methodologies are dynamic and shifting. That lets attacks evade some defenses for a short time now and then; but, luckily, it is extremely rare that an attack can evade all defenses all the time, which is why we recommend the layered approach.

In The Chest of Every SysAdmin, Beats the Heart of a Tyrant!

This, my friends, is often how end-users view IT people; certainly, end-users and IT people alike view security people this way.

I think, unfortunately, there is often truth to this, mainly because of differing objectives and goals.

First and foremost, the objective of most IT and security people is expedience. We want it done… and permanently off our plate or to-do list, and we don't want some two- (or n-) party solution that will only complicate matters. As such, neither side is willing to sit down and discuss alternatives. It takes time, and there is a tendency to believe the "end-user" simply doesn't understand or is just plain wrong (which could be true, but at the same time, we aren't helping the matter by not trying to explain our thinking).

If you take an engineering approach to these processes and realize that in most cases it's a process of iteration and refinement over time and… yes… it can take a great deal of time to complete, then the confrontational stance each side takes will not be as dramatic, or may not exist at all.

But of course, around here… time is a luxury most of us do not have.

The other two problems are language and visualization. It's hard enough for IT people in different areas of IT to understand one another or visualize things outside their particular expertise; imagine the difficulty for someone who is not IT-oriented trying to do both.

I had a recent experience with a lab supervisor who was really not too keen on our recommendation that his systems go behind a firewall. I found myself trying to explain as simply as I could that what we need to do is sit down with them, understand their systems and architecture, and then formulate a plan that fits their needs, and that we have plenty of options in this regard.

But the conversation didn't start out that way. Thanks to my co-worker Matt Nappi, who did a lot of Eric-technobabble-to-English translation for the lab people, we were able to get the message across, though.

Perhaps what I am saying here is twofold…

I certainly hope to practice these ideas myself; I have bad IT habits like many others in the field.

But it is also a call to other IT workers to do the same. The end-user may often be uninformed, but they still have valuable "from the field" information that is useful in finding a solution they can be happy with, and one that at the very least doesn't make them averse to approaching IT people when seeking solutions.

Moore’s Law is Dead, Long Live Moore’s Law…

There have been a few articles surrounding IBM's announcement that the perennial Moore's Law is dead, or at least will be soon. It's a controversial statement, but I'd say one that most people trained as computer scientists (the old-school kind who spent more time working at the interfaces between hardware and software in the early days of computing, versus those who spend a lot of time inside environments and frameworks in the recent era) have to recognize as accurate.

Moore's Law is often misinterpreted as "computing power doubles every 2 to 3 years". The co-founder of Intel, Gordon Moore, the law's namesake, merely observed that the transistor density of silicon chips doubled about every 2 years. While this often does translate into computer performance, it has other implications than that alone.
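
To see what "doubling about every 2 years" means in practice, here is a tiny back-of-the-envelope calculation. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is a commonly cited figure; treat the output as an illustration of exponential growth, not a precise history.

```python
# Moore's observation: transistor counts/density double roughly every 2 years.
# projected(t) = baseline * 2 ** ((t - 1971) / 2)

START_YEAR = 1971
START_TRANSISTORS = 2_300          # Intel 4004, a commonly cited baseline
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year):
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2014):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Forty-odd years of doubling takes you from a few thousand transistors to the billions on modern chips, which is why even a modest slowdown in the doubling rate has far-reaching effects.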

Despite what appears to be the continued frenzied pace of development in all things IT-related, people tend to forget the underlying foundations of computer systems. So at first glance this statement about computer advances slowing seems ridiculous, but it merely refers to one kind of advance… one which tends to have far-reaching effects throughout IT.

I agree with IBM's statement. The pace of fundamental advances in "on-die" electronics (i.e. processors/CPUs) has slowed to a crawl. Some may argue that is not true, but only because they look solely at the macro-advances: essentially, performance tricks… variations on a theme, or discovering phenomena that can be exploited for specific use cases. In the end, none of these are fundamental shifts; they are simply iterative advances. And I am not knocking that, there is some pretty smart stuff in all of it.

But let me illustrate both concepts.

About a decade or more ago, the same alarm was raised and nothing ever came of it.

Let's start with the problem(s). There are two big physical limitations with processors.

  1. Electron Flow Through Wires
    When electricity runs down wires, it generates a small electric field. As you place wires closer and closer together, eventually the field of one wire will start to interfere with the other (and vice versa). So there is a physical limit on how close you can place wires before the interference becomes problematic. This is often referred to as cross talk.
  2. Controlling Electron Flow
    Signals in digital circuits are often controlled by the use of "logic gates". These gates are basically switches that turn electricity on or off. The actual physics of logic gates goes something like this: I have a conductor (i.e. a wire or something) that, when in one state, allows electricity to flow through it and, when in another state, prevents electricity from flowing through it. This seems simple enough, but the gotcha is that, depending on how the gate functions, it requires more power to either allow or stop the signal from flowing… and the process almost always generates heat. This is because resistance is used, both a physics phenomenon and an actual electronic component, which trades off the flow of power for heat (i.e. to restrict the flow of power, the resistor converts the power to heat and expels it as a by-product; this is why your home computer generates so much heat and why CPUs have to be cooled by fans and heat sinks). The circuits make other trade-offs too (creation of electric fields to store power, etc.), but heat generation is the one we can easily experience and detect with our senses, and it has a huge impact on computer performance and power consumption.

So, probably somewhere around 1996 or so, computer processor engineers and scientists were faced with these two problems… they couldn't make chips smaller (because of cross talk), and gate technology stalled because in order to create faster circuits they needed more power, which in turn required the gates to use more power, which in turn generated much more heat.

Silicon chip technology was dead, and the only alternative at the time, Gallium Arsenide, was both much more expensive and a potential environmental problem if it became a mass-market technology. (Think landfills full of the heavy metal gallium and arsenic [a poison]; lovely, right?)

So how was this death knell for Silicon avoided?

A smart physicist figured out how to fix the logic gate problem. In production, circuits are tested for speed. If they are too fast, the circuits are "doped" with electrons to slow them down. This way, if you buy a 3.0 GHz processor, you get a 3.0 GHz processor, not a 3.052345 GHz processor.

Well, this physicist wondered… what if we "dope" the gates with something other than electrons… what about ions of elements? Well, he guessed right. After testing a few elements, if I recall correctly, they started doping the gates with xenon ions. In effect, this placed a "rough road" on the gate that would prevent electrons from leaking over it; when a small charge of electricity is applied, it neutralizes the ions' effects and allows power to flow over the gate.

Along with other advances, like dealing more effectively with cross talk, this bought silicon technology another decade.

Unless someone figures something else like this out, eventually silicon will need to be replaced.

This is the big question…

If not… there are two technologies on the horizon, and thankfully, neither is Gallium Arsenide.

Nano-photonics and Quantum Computing.

My opinion is, nano-photonics will dominate computers eventually.

As a trained computer scientist, I love the idea of quantum computing, and these kinds of computers will eventually be built, but there is a big difference between the applications of photonics and those of quantum computing.

Quantum computing is more or less aimed at enabling us humans to effectively deal with math problems that are, by current standards anyway, intractable. This is important to many sciences, particularly engineering and physics. But it does not lend itself to more mundane things like database applications, email, tweeting, virtualization, web servers or other run-of-the-mill IT stuff.

I have no doubt there will be some melding of the two, though: many big IT companies right now have compute engines to analyze, categorize and predict things about their users, and from that perspective, they may have a use for quantum computing.

Certainly… crackers will… the bad guys will *love* quantum computers… technically, cracking passwords *is* an intractable math problem. A quantum computer could theoretically make passwords and most kinds of encryption… well… no longer intractable… possibly even… useless.
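
For a sense of what "intractable" means here, the sketch below is a small, simplified Python calculation of brute-force password search spaces. The quantum comparison is a rough simplification: Grover's algorithm gives a quadratic speedup for unstructured search, so the effective work is roughly the square root of the keyspace; real attacks on passwords and encryption are far more nuanced than this.

```python
import math

# Classical brute force: try every combination in the keyspace.
# Grover-style quantum search: roughly sqrt(keyspace) operations (simplified).

def keyspace(charset_size, length):
    return charset_size ** length

for length in (8, 12, 16):
    space = keyspace(95, length)  # 95 printable ASCII characters
    classical_bits = math.log2(space)
    grover_bits = classical_bits / 2
    print(f"{length}-char password: ~2^{classical_bits:.0f} classical guesses, "
          f"~2^{grover_bits:.0f} with a quadratic quantum speedup")
```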

Isn’t that a cheery thought?