Whole Home Audio for the Nomadic Lifestyle

I’ve always liked the idea of a whole-home (or apartment) audio setup.  Something about being able to walk around and keep listening to something without having headphones in just really appeals to me. However, as a young person constantly moving for college, work, etc., and with some pretty tough requirements for a system, attaining this sort of thing for myself was easier said than done.

Background

Once I started to consider this for myself, I had seen two setups I really liked: both had wired speakers in the ceiling of each room. One had a central hub in an entertainment closet that controlled the inputs, outputs, and volumes; the other had the input hidden away somewhere, and each room had a wall control to adjust volume and turn the speakers on/off.  Both got the basic job done, and each had something going for it. The former offered some flexibility of inputs into the system, good for switching between a phone, tablet, TV, computer, etc. The distributed control of the latter also appealed to me, because how are you supposed to know what a good volume is for a room you’re not in? Unfortunately, each of these systems was not without its faults (at least from my perspective). Primarily there was the price. Each system was in a pretty expensive house, and upon further investigation it turned out that these kinds of setups were indeed pricey. A close second was their portability. At a time when I didn’t expect to be anywhere for an extended period, I wanted to be able to take my setup with me, as well as not violate any renter’s agreements by running wired speakers into ceilings or putting holes in walls I didn’t own. Still relevant, but less of a factor for me, were the controls. While the in-room volume was a nice touch, both still depended on physically being in one place for the majority of administration. As someone who’s gotten used to moving between devices, laptop to phone to tablet and back again, managing things remotely, the closet control panel felt a bit outdated.

SqueezeBox

Logitech Media Server

Luckily, as time went on, I discovered Squeezeboxes. Although this was some time after Logitech had bought and subsequently discontinued the devices, I give them full marks for continuing to provide the server software. Additionally, the community never ceases to amaze me with their hard work, and a number of Squeeze-compatible software players were available.  While the discontinued hardware made for a bit of a barrier to entry, my subsequent discovery that squeeze players could run on Linux, Android, and Fire TVs meant I could still do some testing, and since I already had those devices, it ended up being a cheaper alternative anyway.

Unfortunately I had a fair number of issues getting the Logitech Media Server up and running, at least on the low-powered Raspberry Pi I wanted to use long-term, so I hit a bit of a roadblock, at least until discovering Max2Play, which provided a full image and a click installer to get everything up and running.  Once that was set, I was able to scale up to four WiFi devices, plus the local player on the Pi, and was pleasantly surprised at how well they kept in sync.  Admittedly it wasn’t perfect, but with each playback device in a separate room, the difference was negligible.

One gripe I did have while going through this was the actual process of grouping different players together. In a point for versatility, any player could be individually playing something, or be part of a group’s synchronized playback.  What was tough was adding and removing different players from groups, because you had to be sure you had selected the right player with the correct playback stream before removing it from the group.  If your selections weren’t right, you were left wondering what you just did and why the player next to you was suddenly playing something from two days ago.  I made a mental note that, if needed, I should try to code up something to leverage the server’s API and simplify the process.
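For the curious, here’s a minimal sketch of what that could look like in Python against the server’s JSON-RPC endpoint (the server address is a placeholder, and I’m going off my understanding of the LMS CLI, where sync takes the group master’s player ID and sync - removes a player from its group):

```python
import json
import urllib.request

# Placeholder address; LMS serves JSON-RPC on port 9000 by default.
LMS_URL = "http://192.168.1.10:9000/jsonrpc.js"

def build_payload(player_id, command):
    """LMS expects {"method": "slim.request", "params": [player, [cmd, ...]]}."""
    return {"id": 1, "method": "slim.request", "params": [player_id, command]}

def lms_request(player_id, command):
    """POST one command to the Logitech Media Server and return its response."""
    req = urllib.request.Request(
        LMS_URL,
        data=json.dumps(build_payload(player_id, command)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def join_group(player_mac, master_mac):
    """Sync player_mac into the group led by master_mac."""
    return lms_request(player_mac, ["sync", master_mac])

def leave_group(player_mac):
    """Pull player_mac out of whatever group it's in."""
    return lms_request(player_mac, ["sync", "-"])
```

With something like this, adding or removing a room could be a one-liner instead of a careful dance through the web UI.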

In the end, this addressed a lot of my checklist: portability, cost, and decentralized administration.  Where it fell flat for me, though, was in its core functions of synchronized audio playback and remote audio playback.  If one device took longer than another to buffer, the latency was painfully obvious, and the only way to get them back in sync was to skip to another song and hope they fixed themselves.  Additionally, the Media Server can stream external sources rather than just music files, but due to the amount of buffering required, the latency was considerable and made it somewhat painful to use the setup this way.

One final update for anyone looking to try this out for themselves: with Docker taking over the world, or at least being as popular as it is, getting the Logitech Media Server up and running is easier than ever with a single docker command.
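As a sketch, using a community-maintained image (the image name and the paths here are assumptions on my part; check Docker Hub for the current details), it might look something like:

```shell
# Web UI on 9000, CLI on 9090, SlimProto for players on 3483 (TCP + UDP).
# Mount your music read-only and a folder for server state, then browse
# to http://<host>:9000 to finish setup.
docker run -d --name lms \
  -p 9000:9000 -p 9090:9090 -p 3483:3483 -p 3483:3483/udp \
  -v /path/to/music:/music:ro \
  -v /path/to/state:/config \
  lmscommunity/logitechmediaserver
```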

Sonos

Sonos

As the Logitech system faded away, Sonos came into view.  I actually learned about them from the TWIT podcasts, where multiple hosts and guests praised the system as multi-room magnificence. Of course I had to check it out, and Sonos is indeed a fantastic solution for synchronized audio, with a price to match.  With some outstanding questions on implementation and on the ability to tie other systems into Sonos, I unfortunately had to move this speaker system into the “Maybe Later” pile.  And while their price may be prohibitive for now, I think there’s a good chance that they’ll soon start offering additional value in their products that just might let me justify the increased cost, but time will tell on this one.

Google Chromecast

Chromecast Audio

As I’m sure many people could foresee, this trail led me straight to the Google Chromecast Audio (CCA).  When the rumors first started I was intrigued, mostly about how they planned to implement the Chromecast experience in an audio-exclusive form.  Would it be over Bluetooth? Just WiFi? Would they synchronize? Questions and speculation swirled, and when they were released and all my questions were answered, I figured I should probably wait and see how everything panned out (compatible applications were limited and synced playback between Audios was still in the works), but then my girlfriend surprised me with one!

Initially it took some getting used to. I had become more accustomed to using Bluetooth speakers, but true to the Chromecast way it allowed for distributed control and operated independently of my device.  I set it up in the kitchen since it offered a nice place to seclude speakers and I figured it would be nice while cooking.  As a bonus, the layout of my apartment meant that these speakers could function decently across a large portion of it.  I was pleased with the response times and reliability of the device over time as well.

When I caught wind of the Chromecast app update that enabled Audio groups, I immediately purchased another one to test it out.  The UI was simple enough, and I shortly had a group made up of my two CCAs.  This worked incredibly well; tight, synchronous playback and snappy, responsive controls made it a joy to use.  Since this left me with music throughout my apartment except for one room, naturally a third was in my future.  I guess at this point I should note that this was all possible thanks to a number of speakers I had acquired over the years.  A few were in use as standalone units, and a few had been lent out to people who didn’t need them anymore.  They certainly aren’t the best quality speakers, but not being an “audiophile” certainly helps with that, and reusing them definitely keeps the cost down and makes me feel better about keeping them around all these years.

Needless to say, I’ve been impressed with the Chromecasts, maybe even more so than with my video one; I definitely use them more often.  Also, getting the audio coverage I did for under $100 (watch for sales!) is tough to beat when you also consider the versatility of control and the reliability of the groups (technically the Squeeze setup cost zero additional dollars, but it lacked these features).  As far as portability goes, I couldn’t ask for much more, unless maybe they made the CCA plug with a pass-through so that, together with the wired speakers, you’d only need one outlet, but I won’t split hairs. All in all, the timing worked out great for me, and this audio solution has been exactly what I was looking for.

If anyone has any tips or tricks from their own experience, or alternatives that I haven’t yet found out about, please let me know! I’m always looking for ways to improve (and occasionally break) my current setups =).

Yes, Iced Coffee is THAT Easy

With it being June in the Northeast, the heat has certainly picked up, and the Spring/Summer fever is in full swing.  One of my favorite parts of the warmer months is spending some time outside and enjoying my morning coffee.  Well, my morning iced coffee; it’s already hot enough out without a scalding mug in hand.  The only issue I had was that I needed to go out and get this liquid manna from somewhere, or it came out of a carton from the store.  Don’t get me wrong, I appreciate coffee shops as much as the next caffeine junkie, but I was also raised to take pride in my coffee brewing, and I currently safeguard a recipe that’s been perfected over three generations of coffee brewers.  I also use a Keurig without batting an eye, so take from that what you will.  Regardless, I knew there must be a way for me to manufacture my own iced coffee, using grounds of my choosing, allowing me to save money and trips to stores while maintaining the pride of my brewing ancestry. Indeed there was, and damn was it easy.


TL;DR

Put grounds in water, let sit, strain, enjoy.

Instructions


Tools/Supplies

  • Container for steeping
  • Strainer/coffee filter/cheese cloth
  • Container for storing

Ingredients

  • Coffee grounds
  • Water
  • Anything else you want

Making iced coffee is a lot like making regular coffee: you can get it done with beans and water, but some people have their own tweaks and tricks.  I recommend starting off with a ratio of 1–1.5 cups of coffee grounds to 4 cups of water and then tweaking it from there.  Simply add the two together in your steeping container, mix it up to make sure everything’s evenly dispersed, and refrigerate for 8–12 hours, again tweaking as needed.  I usually set this up at night so that a fresh batch is ready in the morning, but you could just as easily set it up during the day so it’s ready for a grab-and-go the next morning.

Once your cold brew is ready, you then want to pour it out of the container and into the one you’ll store it in (I use a pitcher with a top, similar to something you keep lemonade in) and strain out all the grounds in the process.  I originally used a small strainer like this:

Small strainer

which worked well for most of the grounds but let some of the finer pieces through.  As I started making more at a time, I switched to a larger strainer with finer mesh that captured more of the spent grounds, but it still wasn’t perfect.  Others recommend using a coffee filter or cheese cloth as part of the process, but I’ve found I’m too impatient to wait for the coffee filter in the morning, and I didn’t have any cheese cloth.  The solution I arrived at, which works well enough, is to just use the strainer as a best effort on the initial pour.  Everything that escapes it settles at the bottom of the container anyway, so it’s fine until you reach the end of the supply, at which point I use the strainer again when pouring the glass.  Most of the dregs remain at the bottom of the container and can be rinsed out, and the strainer catches the few that would have made for a nasty surprise in my straw.  Anyway, once that’s done you can put a top on it and keep it refrigerated for a while (I haven’t had it last long enough to know exactly how long it keeps!).

Drinking the iced coffee

Now that you’ve got the iced coffee, you have to drink it.  My personal strategy is to put ice in a glass, pour in the coffee, then add cream and sugar to taste.  One thing I will note is that the cold brew has a higher coffee-to-water ratio than my normal cup of coffee, so it has a stronger taste, but the ice will actually dilute it a bit as it melts, so take that into consideration.  Others have suggested adding sweetened condensed milk or flavored syrups, but I haven’t ventured into making a syrup yet, and the condensed milk didn’t really change the taste of mine enough to warrant the extra complexity. I’m sure there are plenty of other ways to personalize your iced coffee, so feel free to let me know what you come up with!

You Get Linux! And You Get Linux! a.k.a. Some of My Linux Story

As alluded to in a previous post, I originally became familiar with Linux out of necessity.  Since those days of running Ubuntu on an old desktop (and running a 50′ Ethernet cord across my house whenever I needed to install new packages or updates), Linux has popped up on all kinds of devices.  I think this phenomenon can be attributed to both the flexibility of the Linux platform and the passion of the Linux community, but either way I’m glad that I, and countless others, are able to reap the benefits of this versatility and hopefully contribute back in some small ways.

The Journey

From that first computer, I then discovered there was a port to the Nintendo DS.  While the setup was easy and straightforward, the resulting OS wasn’t terribly useful, except on one noted occasion in the Orlando airport where I was able to get it connected to one of their access points and entertain myself with the Lynx browser: mainly Google searching and checking my email, but a novel experience nonetheless.

DSLinux Image

Someone running DSLinux


My appreciation grew exponentially when I started using Linux as my daily OS.  I had configured my laptop to dual boot with Windows, and my work laptop ran exclusively on RedHat.  Coupled with my investigation into “Plug” computers and managing a few servers at work, it was a veritable immersion into the various flavors and utilities of Linux.

At this point, I knew enough to be dangerous, and I was having some delusions of grandeur about what I could make with a Raspberry Pi (first a Tomcat server, then a MythTV backend, but its most effective use at the time was as a wireless bridge for my network switch).  I had more ideas than a single Raspberry Pi could handle and lacked the bank account to scale up, so I diversified a bit.

While I had a number of these “headless” systems, what I didn’t have was a laptop attached to my hip 24/7.  However, at this point I had a smartphone (it even had a keyboard…with a tab key!) which might as well have been attached to my hip.  Thanks to hardworking developers and Google, I found Connectbot, and later VXConnectbot, and I loved feeling like I could do anything since I had access to Linux in my pocket.  But the more I thought about it, the more I wondered about the Linux that was literally IN my pocket: Android.  I knew it was based on the Linux kernel…so what about actual Linux?  The rooted nature of the phone lent itself to a plethora of options (some better than others), and soon I saw that little command prompt.  It was also at this point that I began to somewhat understand how VNC worked, and it was quite the exciting day when my phone was on my laptop screen, looking very much like a very limited laptop of its own.  Mind. Blown.

In parallel with my OS-inside-an-OS adventures, it seems that companies were exploring the plausibility of the same thing, and to my excitement, some were looking at simply unifying the OS across the board.  While it would be years before any of that would see a useful release, I still felt that there was tremendous opportunity in this area.

Moving on in the timeline, I found myself serving as a graduate student assistant in the Dean of Students office at Stony Brook.  We had not only switched to Google Apps for Education, but my job responsibilities coincided heavily with office software suites. At this point I sometimes wondered: outside of the odd job here or there, how much of my job could I do from my phone?  My sneaking suspicion was 100%, but there were definitely tradeoffs in speed and usability.  In a similar vein, our CIO at the time was doing a lot of experimenting with Chromebooks, and being in the mobile/cloud mindset at the time, I was completely behind the idea, well, mostly.  While I thought a Chromebook may be perfect for some people, I still felt I had some atypical needs (like an attachment to Linux and the fact that I was a CS major and needed to code).  In classic fashion, great developers and Linux to the rescue: Crouton!  While I wasn’t able to acquire a Chromebook at that time, you can bet that I was following them closely.  When I finally managed to get my hands on one, I think it spent all of a half hour not being able to run a chrooted Linux, which it does quite well.

Since the Chromebook, I feel like the majority of my ponderings have related to building or leveraging things on top of Linux rather than getting Linux on as many electronic devices as possible.  I did back the C.H.I.P. on Kickstarter, and carefully researched the Pi Zero, but a lot of time these days goes towards work and doing fun things with my girlfriend, who understandably doesn’t assign a high value to random hacking in the same way I don’t assign a high value to random shopping, but the itch is always there.

So when I was making a bootable USB drive to upgrade one of my computers, I wondered if I could use my phone as a bootable USB drive.  Which led to: what if it could be ChromiumOS?  Which, instead of being bootable, could actually be running?  Which, if the filesystems could be shared, would make for a seamless mobile/desktop experience in my pocket?  Which would be great if ChromiumOS could do more things…kind of like Linux…so can I get Linux on an Android again?

At this point I’m no longer running rooted, but I DO have a Galaxy Note Pro, which has, in my opinion, the best multi window experience on mobile to date.  That could make for quite the desktop-like experience indeed.  As an added bonus, I could potentially update my old post about running Emacs on Android, even if it was still a questionable endeavor.

I ran into a lot of the same results that I did years ago when hacking on my phone, with one noted exception: GNURoot.  What once seemed like a limited implementation had received quite the facelift, and it met a very important requirement: no root required.  I did some checking, then downloaded the ptrace-contained Debian Jessie app and launched away.  A short while later, I was met with that familiar little cursor (though not so little, it is a 12″ screen after all) and began exploring my little system.  True, it doesn’t have a terribly up-to-date kernel, but it does have access to my files on the tablet, and it managed to apt-get not only Emacs, but nodejs and node-RED as well!  Naturally I used node-RED to tweet this success, and discovered that while I could get a node+express app running (if the files were within the container for npm install) and accessible from the Chrome app at http://localhost, nodemon seemed unable to refresh and restart the application (a pity really, not sure where the breakdown is occurring).  Despite that, I still consider the preliminary experience a resounding success for both ease of setup and exceeding initial expectations, and of course for having multi window support.

Tweet of Node-RED

Successful Tweet!

Conclusion

I look forward to testing out VNC and/or an X system on the Galaxy Note Pro.  As far as systems go it’s got a formidable processor and amount of RAM, so there are lots of possibilities, especially when combined with this nice little 2-in-1 I discovered in college (originally for the Webtop mode of my old phone), which provides a nice little mouse cursor on the screen, for the limited support it gets. I do realize that there’s the Microsoft Surface, but where’s the fun in that?

I would like to close with a concession.  While it would appear that I’ll put Linux on just about anything, I have refrained from, and most likely will forever refrain from, hacking it onto my PS3.  While it would be cool, I hardly get time to play it anyway, so I wouldn’t want to finally have a couple of free days and power it on only to find out that my Assassin’s Creed is no longer welcome.  That aside, Linux is an incredibly powerful and flexible OS once you figure out what to do with it, and the ability to get it running on such a variety of devices can be a great encouragement for those looking to experiment and push the envelope on their technology a bit.  Happy hacking!

Tasker: Forwarding Texts and Creating Your Own Android Features

I’m about a year in from when I talked about my current phone, and I’m happy to say that things are going great.  Well, they’ve been going great, but as I write this it seems that Motorola may be making some poor choices as far as supporting phones goes, so we’ll see how things pan out.  But I digress; my Moto X has been working really well, and I’ve been really pleased with the additions from Motorola in Lollipop, and I sincerely hope that this experience continues into Marshmallow.  However, despite the features added into Android and those provided by Motorola or the apps you download, there are always personal tweaks and alterations that users may want.  Thanks to the open nature of Android, these are possible, and thanks to Tasker, they’re possible for all users, wherever you have your phone, even when you’re watching chefs play in a band at your girlfriend’s job.

The Idea

So the situation is this: I’ve got a “work phone.” That’s correct, I’m one of those obnoxious people you can find carrying around multiple phones on the weekend, checking one, then the other, then pocketing one and putting the other back on my belt.  And while that’s great and all, I don’t always want to be carrying both around…like when I’m at work and moving from office to office, meeting to meeting.  It’s hard enough for me to remember a water bottle, never mind multiple phones. So what’s a nerd to do?  Normally, I end up leaving my personal phone at my desk when I head out on these little adventures, thereby missing any notifications that occur in my absence.  Thanks to these wonderful things called apps, and “the cloud,” most of these will be mirrored to the work phone, with the noted exceptions being calls and texts.  To stereotype, being a millennial means I don’t have to worry much about missed calls, which leaves texts, and of late I’ve missed quite a number of important ones, including a “you locked me out and I can’t get to work”…definitely room to improve here.  So it was, while listening to some chefs cook up some classics, that I saw a feature in Tasker that read “Send SMS,” and an idea was born: what if my phone could text my work phone and let me know I’m missing something?

The Task

I immediately created a simple task to send a text, ran it, and watched it appear on the other phone.  Then while poking around the settings, I realized Tasker could identify the sender, message body, and sending number.  Suddenly my simple notification was becoming more advanced.  With my phone forwarding all this information, I began to wonder if the converse was possible…could I get my phone to respond to messages as well?

I needed a way for my phone to differentiate between me replying to a text and someone else sending me a new one.  Luckily, I had Tasker Variables at my disposal.  I stored the work phone’s number in a variable and could now compare a sender’s number against it.  If they matched, I knew it was a reply from myself; if not, it was a new message that needed to be forwarded.  This worked great, with one notable exception: how would my phone know who to reply to in the event of multiple texts?  I needed a format of some sort.

Expanding the Use and Functionality

Delving deeper into Tasker, and with a little (okay, substantial) help from Google, I discovered the numerous ways to manipulate variables.  This allowed me to segment and parse variables I already had, most importantly the body of a text message.  With this I was able to create and decode a sort of “message header” within the texts.  By using a format to specify a person, their number, and the associated message body, I was now able, when receiving a new text, to know who it was from and what phone number they were using; and when replying, my phone could determine what number to send the text to as well as what the message was.  I had turned my phone from an endpoint, to a forwarding device, and finally to a middleman in about an hour of tinkering.  By subsequently revising my regular expression, I was able to account for some bugs such as newlines, and even get emojis working (an absolute MUST, I know).
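To make the idea concrete outside of Tasker, here’s a rough Python sketch of the same encode/parse round trip (the header format here is entirely hypothetical, not the exact one I used):

```python
import re

# Hypothetical forwarding header:
#   FWD|<sender name>|<sender number>|<message body>
# Name and number can't contain '|'; the body comes last so it can contain
# anything, including newlines (hence re.DOTALL so '.' matches them too).
HEADER_RE = re.compile(
    r"^FWD\|(?P<name>[^|]*)\|(?P<number>[^|]*)\|(?P<body>.*)$",
    re.DOTALL,
)

def encode(name, number, body):
    """Wrap an incoming text in the forwarding header before resending it."""
    return f"FWD|{name}|{number}|{body}"

def decode(message):
    """Parse a forwarded text into (name, number, body), or None if it
    isn't one of ours (i.e. a plain reply with no header)."""
    m = HEADER_RE.match(message)
    if not m:
        return None
    return m.group("name"), m.group("number"), m.group("body")
```

The same pattern maps onto Tasker’s Variable Search Replace / Variable Split actions; putting the free-form body last is what makes newlines (and emoji) survive the round trip.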

With the chef band getting late into their second set I knew I didn’t have too much time left, but I did know that Tasker had some more to offer me.  Since it would behoove me to know when this Tasker profile was active, I added a persistent notification that tells me as much, and threw in the forwarding number for good measure.  I then made it actionable, so that I’d be able to disable it right from the Notification Shade.  I had now taken care of everything except turning this forwarding on.  Leveraging one of my favorite features of the Moto X (constant activation-phrase detection), I used the AutoVoice plugin to set the profile up to be activated when I tell my phone to “forward my texts,” at which point my phone will audibly respond letting me know that it is activated.  Unnecessary? Yes, but then again isn’t this whole process?

A Feature Complete

At this point the band was saying their thank yous, and the place was getting ready to close up for the night, but I had a working, voice-activated Text-Forwarder.  It was a little rough around the edges, but it was unmatched in terms of the ease of creation.  I even went so far as to combine this with my Google Voice (…err Hangouts?) number, and got myself a poor-man’s version of iMessage, not too shabby in my opinion.  Now with my increased knowledge and experience with it, I can’t wait to see what else comes to mind the next time I’m exploring the features of Tasker!

Tunnels, of the SSH kind

These days it seems like we all live in the cloud.  Whether it’s your personal site, email, or photos to show your family at the next get-together, odds are you’re storing something, if not everything, up there.  It’s fantastic! But it’s not everything, and there are plenty of times when you still need to communicate with individual machines located anywhere from the same room to across the world.  For that, and so much more, there’s SSH.

Are you telling me to be quiet?

For anyone who says TL;DR to the Wikipedia article, SSH is one of the simplest and most secure (if you’re doing it right) ways to communicate between networked devices. Clients are usually available by default on Unix-based systems (Linux, Mac, and even the Chromebook I’m editing from) and can be easily obtained for the other popular OSes (Windows, Android, iOS). At the other end, SSH server software is a staple of most, if not all, servers out there, although it may not be enabled by default, so keep that in mind when you decide to tinker.

How I started using SSH

I stumbled upon SSH shortly after stumbling upon Linux.  Since then, it’s only become more valuable for both work and play as I learn additional features of SSH and how to leverage them.  Due to its secure nature, it can be used to encrypt traffic, transfer files, launch remote applications, and even help turn your tablet into a second screen!  And while there are plenty of apps and other ways to do these things, few have the same availability as SSH, especially when dealing with a university or campus network, and potentially even some ISPs.  But today I just want to focus on one aspect of SSH, and that’s tunneling.  There are three kinds: two that I’m familiar with, and one that has become incredibly useful as my development workflow has changed.

1. When you want it locally, but it’s remote

One thing I’ve found incredibly difficult about SSH tunnels is keeping the syntax and function of each kind straight in my head.  My main use for a Local tunnel is when the resources I need are not accessible to me locally, but I want them to be.  For example, you’re working on an application locally, complete with a local database instance.  You want to move to testing with a more populated database, maybe even your production one, but unfortunately it’s a pain to switch them in your code (at least until that refactor you have planned…).  This is where the “local” version can help, by creating a tunnel to the remote database.  Here’s the syntax:

ssh -L <local host>:<local port>:<remote host>:<remote port> <user>@<ssh server>

Taking a closer look at the command:

  • ssh is the command to execute SSH
  • -L signifies we want a “Local” tunnel
  • <local host> is the host of the computer you will be accessing the resources on, in my examples it’s always your own computer, so this will be localhost
  • <local port> is a port of your choosing to access the resources on, note that this can’t be just any port depending on your permissions, so it’s best to choose an ephemeral port for testing at first
  • <remote host> is the hostname/IP address of the remote resources you’re looking to get at
  • <remote port> is similarly the associated port of those remote resources
  • <user>@<ssh server> finally, this is the machine you have ssh access to, which must also have access to the remote resources you are targeting (note, they CAN be the same machine, and in my cases most often are)

There are quite a number of pieces here, so let’s tie it in with the example. Say your remote database is at 192.168.1.5:3306, you have ssh access to it with your username ‘bob‘, and your application is set up to use a database at localhost:9876.  In this case, your command would be:

ssh -L localhost:9876:192.168.1.5:3306 bob@192.168.1.5

After a successful connection, any data sent to localhost:9876 will be sent through the tunnel to 192.168.1.5:3306, and vice versa.  Note that the remote host and port are evaluated on the ssh server’s side, so since the ssh server and database server are one and the same, this is also a valid command:

ssh -L localhost:9876:localhost:3306 bob@192.168.1.5

which is where most of my confusion with this syntax comes from… Another great use case for this may help you to remember the nuance.  In this case the remote resource we want to get at is a website, say cnn.com, port 80.  Most likely you don’t have ssh access to cnn.com, so you’ll remember that you have to put cnn.com:80 for the remote information and the ssh server separately, as seen here:

ssh -L localhost:9987:cnn.com:80 bob@192.168.1.5

By navigating to http://localhost:9987, you should see the same information as http://cnn.com.  Note however, that with increased use of CDNs and distributed server environments, you may have trouble accessing certain sites if they see that there’s some discrepancy with the requests, YMMV.
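One often-workable trick for that discrepancy (sketched here assuming the cnn.com tunnel above is open) is to present the site’s own hostname in the Host header, since virtual hosts and CDNs commonly route on it:

```shell
# Request the page through the tunnel while presenting the original
# hostname; many servers choose which site to serve from the Host header.
curl -H "Host: cnn.com" http://localhost:9987/
```

This won’t defeat every CDN check, but it handles the common virtual-hosting case.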

2. I have it here, but I want it to be remote

Remote port forwarding is the killer accessory for my toolchain that I mentioned earlier.  True to its name, it does the opposite of local port forwarding.  In this case, you have a resource locally that you would like to make available remotely.  In my case, it’s usually a prototype of a webapp feature that I’m messing around with.  I want to show it off to some people and get some feedback, but the team isn’t co-located, so walking around with my laptop is a no-go.  Instead, I can set up a remote port forward to a development machine, send them a link, and they can view what’s running right on my machine.  This way neither of us needs to mess with firewall settings, and there’s no need for any special deployments or the time that comes with them.  For remote forwarding, the syntax is as follows:

ssh -R <remote host>:<remote port>:<local host>:<local port> <user>@<ssh server>

As you can see, most of the syntax remains the same, except now there’s a -R to indicate remote port forwarding, and the order is flipped: the remote fields come first and designate where others will access the resource, while the local fields point at the resource you’re sharing.

An important difference here is that by default, the ssh server config doesn’t allow a user to “bind” to any interface except the loopback address, so you can set everything up correctly and no one will be able to see it.  To get around this, you should specify GatewayPorts yes in the ssh server configuration and restart the ssh server program (my suggestion would be to set this up under a Match User block so that this functionality is only available to specific users).
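For reference, the change might look like this in the OpenSSH server config. A sketch, assuming a user named bob and a typical sshd_config location; the reload command varies by system:

```
# /etc/ssh/sshd_config
# Allow only bob's remote forwards to bind to non-loopback interfaces
Match User bob
    GatewayPorts yes
```

After saving, reload the ssh server (e.g. sudo systemctl reload sshd on many Linux distributions) for the change to take effect.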

So once you’ve made that change, say you had a demo site you wanted to show to a client to get some feedback about color shades.  You have the site running locally at localhost:9080 and also have a server at colordemo.com, and you want to make the demo available on port 9090, your command would resemble the following:

ssh -R 9090:localhost:9080 bob@colordemo.com

Now your client can access the demo on your machine by going to colordemo.com:9090, and when they ask you to tweak a couple hex values, you can make those changes, ask them to refresh, and they’ll see it.  They get instant feedback, you keep using your favorite editor, and there’s no need to wait for a build/deploy.
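One refinement worth knowing: you don’t need to keep an interactive shell open just to hold the tunnel. The standard -N and -f flags handle that; this sketch reuses the hostnames and ports from the example above:

```shell
# Same remote forward, but without an interactive session:
#   -N  don't run a remote command (forwarding only)
#   -f  go to the background once authentication succeeds
ssh -f -N -R 9090:localhost:9080 bob@colordemo.com
```

When the demo is over, find the backgrounded ssh process and kill it to tear the tunnel down.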

3. A SOCKS Proxy

The final kind of port forwarding is a dynamic forward.  Instead of a fixed destination, the client accepts connections on a single local port and forwards each request through the secure tunnel to the server, which resolves the destination and relays the traffic back.  In practice this gives you an impromptu SOCKS proxy, and admittedly I’m not sure what else.  I don’t have much experience with these, so rather than guess, I’ll leave it to you to seek out a more thorough write-up.

Final Thoughts

So that’s a not-so-short-and-sweet low-down on SSH port forwards, but hopefully the additional details help you understand and remember the syntax for each variant.  If this sort of thing interests you, I really recommend getting acquainted with SSH in general, including how to use the “escape character” to kill frozen sessions, add port forwards on the fly, and a couple of other useful tricks.  Then there’s fun stuff like modifying the message users see when they SSH into a machine (may I recommend some ascii art?), which helps keep things lively in the mostly text world of the terminal.  And finally, if videos are more your thing, there’s a great talk that covers port forwarding and a WHOLE lot more about ssh called “The Black Magic Of SSH / SSH Can Do That?” which can be found on Vimeo. Thanks!

The “About Me” Update

So it’s been quite a while since I’ve gotten around to posting anything here, which I blame partly on graduating and all the fun and change that comes with it.  Luckily, I’m able to keep this site and maintain it for the foreseeable future despite no longer attending Stony Brook as a student, not a bad deal =).

So What’s Up?

In December I graduated with my MS in Computer Science.  Neither my BS nor my MS was an easy chore (often I felt that getting my BS was “BS”), but both were definitely rewarding experiences for me.  It’s been about 6 months since then, and I can only hope that the lessons I’ve learned and the friends and acquaintances I’ve made stay with me as long as the memories will.

After graduation I accepted a full-time job with IBM, a company I couldn’t be more excited to work for, as a Product Engineer for the z Systems mainframe.  I had been interning in this area before, and was excited to take a more direct role in the day-to-day challenges and rewards of supporting such integral machines around the world.

Being in the “real world” has been both thrilling and terrifying, much like I thought it would be.  While I had been “responsible” for my finances for years, the game changed when the government was no longer helping me pay rent, and scholarships weren’t footing the dinner bill.  So while I’m slowly learning the ins and outs of organizing expenses, living within my means, and even doing a little bit of saving, I also enjoy the freedom of being able to make purchases and investments that weren’t possible until now.  Certainly not least among these is a new car (sadly a necessary purchase after my faithful 2000 Camry 5-speed took an early retirement) and fun little gadgets such as my Chromebook, new tablet (for work of course…), and some equipment I intend to tinker with to make a fun (and awesome) home entertainment setup.

So long story short, I’m onto the next phase, and loving *almost* every minute of it, but still enjoying fooling around with technology, trying new things, and hopefully continuing to learn each and every day.  Until next time…