Category Archives: Computer Technology

Oh the Tangled Web…

Regarding security failings…

The long and the short of it… the vast majority of security failings fall into these general categories… both behavioral and technical.

  1. Slow, Bad or Non-existent Patch Cycle
  2. Poor CONFIGURATION decisions
  3. Installing too much software of debatable value (and forgetting about it).
  4. The required use of legacy systems with ZERO compensating controls
  5. Fire-and-forget mentality (i.e. long term complacency or laziness)
  6. Not knowing your limits, being unwilling to accept limits, being too cheap or not having enough funding (all of which comes down to heeding your local professionals, paying for some, or living with centralized services which might not be as flexible [for similar reasons]).
  7. No situational awareness of the current network’s configuration or defenses; again, an assumption of de facto protection when there is generally near-zero protection. This also falls under the category of assuming someone else is defending you (that always makes me smile because it’s just… soooo wrong…).
  8. Being unaware that some of your habits/behaviors, for good or ill, contribute deeply to your susceptibility (poor passwords, never changing them, having accounts on every website or service under the sun with the same password, etc.).
  9. The very wrong assumption that no one wants to hack your machine (hint: they couldn’t give a rat’s-*ss about you or your unimportant data [selfies, cat pictures, the great American novel, etc.]… it’s the equipment they want access to; you are barely part of the equation).
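To put the password point in perspective: a password’s resistance to brute force can be ballparked as length × log2(character-set size). A quick back-of-the-envelope in Python (the numbers are illustrative, not a policy recommendation):

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Rough upper bound on password entropy: length * log2(charset size)."""
    return length * math.log2(charset_size)

# An 8-character lowercase-only password vs. a 14-character password drawn
# from upper/lower/digits/symbols (~94 printable ASCII characters).
weak = entropy_bits(8, 26)    # ~37.6 bits
strong = entropy_bits(14, 94) # ~91.8 bits
print(f"weak: {weak:.1f} bits, strong: {strong:.1f} bits")
```

Every extra character buys more than any amount of clever l33t-substitution does; reusing that one password everywhere throws the whole calculation away.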

In all cases… bad decisions, foolish assumptions and fatal mistakes.

Make noooo mistake on this… your devices are targets, mainly because they are a working piece of equipment built on a lot of risky decisions and false assumptions.

In the end, I would say, heed experts. But this is not enough; you have to be able to think a little too and live within your limits. Technology is not perfect… precisely because people are not perfect (scary thought, eh?). So you have to assume all your technology can betray you at some point (and someone else’s as well); it’s best not to have blind faith in your device’s ability to protect you if you are not also an active part of its defense.

U of Witwatersrand Strikes again…

Often I get an idea, and the fact that others get it too somewhat confirms that it was a pretty good idea. Ever mindful of a Friedrich Nietzsche quote, however…

The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.

At least it keeps me from thinking it’s a great idea. Great ideas would have to be unique, or even revolutionary. It also makes me mindful that, perhaps… just perhaps… maybe it wasn’t such a good idea after all. Anyhow, as always, history will be the judge.

To wit, a few blog posts ago (on another blog) I mentioned I bought a Wandboard for one of my projects. I have to say, the thing has grown on me; I love it. It’s become my little workhorse.

While shopping around for it, I thought, given its specs, it might make for a good cluster node, being a bit more powerful than the Udoo Quads I own (which would be another good choice for a low-rent cluster as well). Anyhow, I stumbled across the University of Witwatersrand’s High Energy Physics Group’s Wandboard cluster and thought, “…well ok then… someone else thought so too”.

So, yay, my nVidia Jetson TK1 arrived today, woohoo… It has most of the same important specs as the Wandboard, plus… 192 CUDA GPU cores! Given the state of computational science these days and everyone running to GPUs for math- or computation-heavy tasks, I thought… hmm… might make a nice cluster node.


nVidia’s Jetson TK1 Dev board

Turns out, darn it… U of Witwatersrand thought so too… they just installed an 11-node Jetson cluster and are currently running performance tests on it. They are speculating that they will get some 3850 GFLOPS out of the cluster.
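For a rough sanity check of that number: peak single-precision throughput is roughly CUDA cores × 2 (one fused multiply-add per cycle) × clock. Assuming the TK1’s quoted ~852 MHz GPU clock (an assumption; boards and boost states vary), the GPUs alone would account for most of that figure, with the remainder presumably coming from the ARM cores:

```python
# Peak single-precision throughput: cores * 2 ops/cycle (FMA) * clock (GHz).
def peak_gflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * 2 * clock_ghz

per_node = peak_gflops(192, 0.852)  # ~327 GFLOPS for one TK1 GPU
cluster = 11 * per_node             # ~3600 GFLOPS for 11 nodes, GPU-only
print(f"{per_node:.0f} GFLOPS/node, {cluster:.0f} GFLOPS cluster")
```

Peak numbers like these are theoretical ceilings, of course; LINPACK-style tests usually land well below them.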

Hummpff… go figure… wish I had 11 Jetsons. 🙂

Parallella Online and Being Tested…

Finally, after hunting for a decent power supply and navigating Adapteva’s somewhat detail-lacking documentation, my Parallellas are finally churning away.

In the picture, Rho, the first of two nodes, is running the Blobubska real-time ray-tracing demo.

Parallella Running Blobubska Demo

First thoughts: it’s a little sluggish, but realize, any similar OpenGL demo would be running on a graphics card like an nVidia Tesla with 512 cores, while this little puppy is doing a respectable job on only 16 Epiphany cores. I am eager to see how the 64-core Parallella handles the same demo when Adapteva finally releases it.

My goal for this test was mostly to measure operating temperature under a medium load. Adapteva suggests anything below 70C is good.
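On most ARM Linux boards you can watch the die temperature from sysfs. A rough Python sketch of the kind of monitor I mean is below; note the sysfs path is an assumption and varies by board and kernel (on the Parallella, the Zynq’s on-die sensor may be exposed elsewhere, e.g. under the IIO subsystem):

```python
import time

# Hypothetical sysfs node; the exact path varies by board and kernel.
THERMAL_PATH = "/sys/class/thermal/thermal_zone0/temp"

def read_temp_c(path: str = THERMAL_PATH) -> float:
    """Read a millidegree-Celsius sysfs value and convert to Celsius."""
    with open(path) as f:
        return int(f.read().strip()) / 1000.0

def watch(limit_c: float = 70.0, interval_s: int = 60) -> None:
    """Poll the sensor, flagging anything over Adapteva's 70C guideline."""
    while True:
        t = read_temp_c()
        flag = "  <-- over limit!" if t > limit_c else ""
        print(f"{time.strftime('%H:%M:%S')}  {t:5.1f} C{flag}")
        time.sleep(interval_s)
```

Leave `watch()` running in a tmux pane and you have a poor man’s heat monitor.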

Since this is essentially a Rev A board, it’s lacking some features that the Rev P boards have, namely a heat-sink slab for both the Zynq FPGA and the Epiphany-16. So I modified the setup slightly: I added my own heat sink to the Epiphany chip and placed a 5V fan to blow across both heat sinks.

The good news: the improvements worked better than expected. It’s been running the Blobubska demo all day and hasn’t gotten above 59.1C (sitting in an already hot office). As well, system load has maxed at 0.31 (compared to running the demo in HostOnly mode [i.e. without the Epiphany chip] at a load of 1.5, with terribly sluggish video by comparison). Heat and load monitor below…


What was funny, since I am the paranoid security type, I always keep a close eye on /var/log/auth.log to see if anyone is banging on the box. Between the Parallella and my Raspberry Pi Model B+, I’ve had to block 7 networks from China, one from Denmark and one from Spain.
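If you want to run the same kind of check on your own box, here’s a quick sketch of the scan I mean. It assumes the stock OpenSSH “Failed password for … from …” log message; your sshd and syslog configuration may word things differently:

```python
import re
from collections import Counter

# Matches OpenSSH's usual failed-login line, capturing the source IP.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def failed_by_ip(log_path: str = "/var/log/auth.log") -> Counter:
    """Tally failed SSH login attempts per source address."""
    hits = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            m = FAILED.search(line)
            if m:
                hits[m.group(1)] += 1
    return hits

# Usage (needs read access to the log):
#   for ip, n in failed_by_ip().most_common(10):
#       print(n, ip)
```

Anything that shows up hundreds of times is a candidate for a firewall block.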

Damn crackers picking on my poor little computers.

– Eric

Encrypting Portable Media on Windows

Why encrypt? That’s easy, small devices like USB flash drives and even smaller MicroSD flashes can get lost, easily. If that happens, why anguish over the fact that you may have exposed any of your own data, let alone data belonging to the Enterprise you work for (which might result in negative consequences for you).

You have a choice when it comes to encryption technologies; for example, SanDisk Extreme USB 3.0 drives can be password protected. But most of these solutions have at least one unattractive feature: they require software to enable the encryption, they’re not usable on all computers, they’re extremely slow, or they’re very expensive.

If you can live without super speed, the Apricorn line of encrypting storage devices offers the best system compatibility you can get (i.e. it works with Windows, Mac OS X and Linux; essentially, anything that can read a FAT32 file system). But at $65 for a 4GB USB 2.0 drive, you might be asking yourself, “are there cheaper alternatives?”

Well, that depends. As long as you don’t have to cross platforms (i.e. go from Mac OS X to Windows or Windows to Mac OS X), yes, there is one built right into Windows 7 through 8.1. Linux has an encrypting file system and so does Mac OS X. I’ll post instructions on each in turn.

First, let’s start with Windows. You must have a Professional, Enterprise or Ultimate Edition of Windows 7, 8 or 8.1 to begin.

Encrypted volumes (USB Hard Drives, USB Flash Drives, SD, MicroSD, etc) can be read on any version of Windows 7 and above, but you must have a Pro, Enterprise or Ultimate Edition to encrypt the media to start.

I will cover Windows 7 here, post Windows 8.0/8.1 a little later, and the rest as time allows.

Please note before beginning: in some cases you can encrypt portable media and devices while there is already data on the device. But given that support for this can change over time and it is not a universally accepted method… NEVER encrypt media with data already on it unless you back it up first and expect to have to copy it back when done.

  1. Start by plugging in the USB or flash media you wish to encrypt. Then right-click on it in Explorer. In my case, I am using a 32GB flash drive known as “G:”. If you have a version of Windows that is capable of encrypting the media, you will see Turn on BitLocker… in the context menu. If you do not, then you do not have the appropriate version. If you do, select it to move on.
  2. You will next be asked how to unlock the encrypted drive. Always select Use a password to unlock the drive, as the second option, using smart cards, is advanced and likely not supported anywhere on campus at this time. So check the password box and type in your unlock password twice. You don’t have to use anything sophisticated, but it has to be a bit more complex than a birthdate, the names of family members, pets, your car, your dream car or essentially anything that would be easy to guess (except by you). Then hit Next.
  3. The next screen will ask you how to save the recovery key. This key can be used to recover data from the encrypted media if you forget your password. You can print it or save it as a file; it’s a matter of personal preference, so choose whichever you are most comfortable with. However, there are some do’s and don’ts. For the saved file, don’t store it on the encrypted media; it can’t help you there if you forget your password. For printed keys, please keep them private; taped to your keyboard or posted on a wall is not private, even if it is in your office.
  4. The next window gives you a few warnings and then lets you proceed by clicking the Start Encryption button. This will take a while; the larger the device, the longer it will take. So be prepared for anything from 10 minutes to a day or more. My 32GB flash took a little over 10 minutes; your speed will vary based on how fast your computer is and what version of USB you are using. On the outside, something like a 1TB USB 2.0 drive on a 3-year-old desktop would likely take a day or more.
  5. The encryption process will show the progress.
  6. And when it’s done, note the difference in how the icon for the media is displayed. It now has a lock on it. The lock will be closed when the media has not been unlocked and open when it has.
  7. Once this is all complete, if you right-click on the media again, you will see the Manage BitLocker menu item. From there you can manage the device.
  8. One of the options in the BitLocker management window is Automatically unlock this drive on this computer. Believe it or not, we recommend you select that for every computer on which you plan to use the media frequently. The main goal here is to ensure that, if your media device is stolen, the data on it is not accessible; easing your own access to the encrypted data is a very acceptable trade-off. The only time this is not the case is when you are using a computer that is public or shared by many people who might not be so trustworthy.

Lastly, if you want to remove the encryption, there are a few things you can do, but the most straightforward is to back up the data and then format the device. Always take care to back up the data before wiping any device.
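If you’d rather check a drive’s encryption state from a script than from Explorer, the manage-bde command-line tool ships with BitLocker-capable editions of Windows. A hedged Python sketch (the exact wording of manage-bde’s status report can differ between Windows versions, so the parsing here is an assumption):

```python
import re
import subprocess

def parse_percentage(status_text: str) -> str:
    """Pull the 'Percentage Encrypted' figure out of a manage-bde report."""
    m = re.search(r"Percentage Encrypted:\s*(\S+)", status_text)
    return m.group(1) if m else "unknown"

def bitlocker_status(drive: str = "G:") -> str:
    """Ask manage-bde for the drive's status (run from an elevated prompt)."""
    out = subprocess.run(
        ["manage-bde", "-status", drive],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_percentage(out)
```

Handy for a quick audit of which of your portable drives actually got encrypted.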

There are some downsides…

Use on Windows XP requires a download from Microsoft and is not backward compatible beyond that. But you should have moved away from XP already, as it’s no longer supported.

As with all encryption and compression technologies, a physical error on the media can cause significant damage to a file.

If you forget your password or lose the recovery key, the data is generally unrecoverable. The exception: if your system administrator(s) have set up enterprise BitLocker with the recovery option and the workstation you used to create the encrypted volume is in the enterprise’s Domain, then the administrators can still recover the data for you, even if you forget your password or lose the recovery key.

Moore’s Law is Dead, Long Live Moore’s Law…

There have been a few article’s surrounding IBM’s announcement that the perennial Moore’s Law is dead, or at least will be soon. It’s a controversial statement, but I’d say, one that most people trained as computer scientists (the old school kind that spent more time working at the interfaces between hardware and software in the early days of computing versus those who spend a lot of time inside environments and frameworks in the recent era), have to recognize as being accurate.

Moore’s Law is often misinterpreted as “Computing power doubles every 2 to 3 years”. The co-founder of Intel, Gordon Moore, “Moore’s Law” namesake, merely observed that the transistor density of silicon chips doubled about every 2 years. While this does translate often into computer performance, it has other implications then that alone.

Despite what appears to be the continued frenzied pace of development in all things IT related, people tend to forget all the underlying foundations of computer systems. So at first glance this statement of computer advances slowing seems ridiculous, but it merely refers to one kind of advance… which tends to have far reaching effects throughout IT.

I agree with IBM’s statement. The pace of any fundamental advances in “on die” electronics (ie. Processors/CPUs) has slowed to a crawl. Some may argue that is not true, but this is only because some people only look at the macro-advances. That is essentially, performance tricks… variations on a theme or discovering phenomena that can be exploited for specific use cases. In the end, none of these are fundamental shifts, they are simply iterative advances. And I am not knocking that, there is some pretty smart stuff in all that.

But let me illustrate both concepts.

About a decade or more ago, the same alarm was raised and nothing ever came of it.

Let’s start with the problem(s). There are two big physical limitations with processors.

  1. Electron Flow Through Wires
    When electricity runs down wires, it generates a small electric field. As you place wires closer and closer together, eventually the field of one wire will start to interfere with the other (and the other will do the same). So there is a physical limit on how close you can place wires before the interference becomes problematic. This is often referred to as Cross Talk.
  2. Controlling Electron Flow
    Signals in digital circuits are often controlled by the use of “logic gates”. These gates are basically switches that turn electricity on or off. The actual physics of logic gates goes something like this: I have a conductor (i.e. a wire or something) that, when in one state, allows electricity to flow through it and, when in another state, prevents electricity from flowing through it. This seems simple enough, but the gotcha is, depending on how the gate functions, it requires more power to either allow or stop the signal from flowing… and the process almost always generates heat. This is because “resistance” is used, both a physics phenomenon and an actual electronic component, that trades off the flow of power for heat (i.e. to restrict the flow of power, the resistor will convert the power to heat and expel it as a by-product; this is why your home computer generates so much heat and why CPUs have to be cooled by fans and heat sinks). The circuits make other trade-offs too (creation of electric fields to store the power, etc.), but heat generation is the one we can easily experience with our senses, and it has a huge impact on computer performance and power consumption.
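The power-versus-heat trade-off above can be put in rough numbers with the classic dynamic-power relation for CMOS logic, P ≈ C·V²·f: pushing the clock up usually means pushing the voltage up too, and power grows with the square of voltage. The component values below are made up purely for illustration:

```python
# Dynamic (switching) power of CMOS logic: P = C * V^2 * f
# where C = switched capacitance, V = supply voltage, f = clock frequency.
def dynamic_power_w(cap_farads: float, volts: float, freq_hz: float) -> float:
    return cap_farads * volts**2 * freq_hz

base = dynamic_power_w(1e-9, 1.2, 2.0e9)    # an invented 2 GHz part
faster = dynamic_power_w(1e-9, 1.4, 3.0e9)  # 50% more clock, a bit more voltage
print(f"{base:.2f} W -> {faster:.2f} W ({faster / base:.2f}x the heat)")
```

A 50% clock bump costs roughly double the heat once the voltage rises with it, which is exactly the wall the gate designers ran into.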

So, probably somewhere around 1996 or so, computer processor engineers and scientists were faced with these two problems: they couldn’t make chips smaller (because of Cross Talk), and gate technology stalled because creating faster circuits required the gates to use more power, which in turn generated much more heat.

Silicon chip technology was dead, and the only alternative at the time, Gallium Arsenide, was both much more expensive and a potential environmental problem if it became a mass-market technology. (Think landfills full of the heavy metal Gallium and Arsenic [a poison]. Lovely, right?)

So how was this death knell for Silicon avoided?

A smart physicist figured out how to fix the logic-gate problem. In production, circuits are tested for speed. If they are too fast, the circuits are “doped” with electrons to slow them down. This way, if you buy a 3.0 GHz processor, you get a 3.0 GHz processor, not a 3.052345 GHz processor.

Well, this physicist wondered… what if we “dope” the gates with something other than electrons… what about ions of elements? Well, he guessed right. After testing a few elements (if I recall correctly, they started doping the gates with Xenon ions), in effect this placed a “rough road” on the gate that would prevent electrons from leaking over it. When a small charge of electricity is applied, it neutralizes the ions’ effects and allows power to flow over the gate.

Along with other advances, like dealing more effectively with cross talk, this bought silicon technology another decade.

Unless someone figures something else like this out, eventually silicon will need to be replaced.

Will someone? That is the big question…

If not… there are two technologies on the horizon, and thankfully, neither is Gallium Arsenide.

Nano-photonics and Quantum Computing.

My opinion is, nano-photonics will dominate computers eventually.

As a trained computer scientist, I love the idea of Quantum Computing, and these kinds of computers will eventually be built, but there is a big difference in the application of photonics over quantum computing.

Quantum computing is more or less aimed at enabling us humans to effectively deal with (by current standards, anyway) intractable math problems. This is important to many sciences, particularly engineering and physics. But it does not lend itself to more mundane things, like database applications, email, tweeting, virtualization, web servers or other run-of-the-mill IT stuff.

Although I have no doubt there will be some melding of the two, since many big IT companies right now have compute engines to analyze, categorize and predict things about their users; from that perspective, they may have a use for Quantum Computing.

Certainly… crackers will… the bad guys will *love* quantum computers… technically, cracking passwords *is* an intractable math problem. A quantum computer could theoretically make passwords and most kinds of encryption… well…. no longer intractable… possibly even… useless.
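To make “no longer intractable” a little more concrete: Grover’s quantum search algorithm square-roots the cost of brute-forcing a key, so an n-bit key offers only about n/2 bits of effective strength against it (and public-key schemes like RSA fare even worse against Shor’s algorithm, which breaks them outright). A toy sketch of the arithmetic:

```python
# Classical brute force tries ~2^n keys; Grover's search needs only ~2^(n/2)
# quantum queries, halving the key's effective bit strength.
def effective_bits_vs_quantum(key_bits: int) -> int:
    return key_bits // 2

for n in (56, 128, 256):
    print(f"{n}-bit key -> ~{effective_bits_vs_quantum(n)}-bit against Grover")
```

Which is why the usual advice is simply to double symmetric key sizes: a 256-bit key still leaves a comfortable 128 bits even in a quantum world.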

Isn’t that a cheery thought?