November 15, 2009

Easing Operating System Installs

For many years I have been a fan of periodically wiping my computer and starting clean. This process originated with my first computer that could get on the internet. I began downloading all kinds of stuff, from games to utilities to OS enhancements, and with each newly installed program the system would crawl a bit more.

At that time, in the mid-'90s, companies were putting out products left and right promising to speed up your PC, make your applications work better, and return valuable disk space. I tried them all, and they all had one thing in common: they were useless. For every CPU cycle you got back, two were wasted by the program; for every bit returned, two were used; and so on.

Then there was the malware; notable pieces such as the Michelangelo virus and the Melissa worm came out during the '90s. With these notable pieces of malware came the media hype and exposés on hackers and their work. People became frantic about virus protection and internet security, and products like Norton and McAfee became household names and required installs, taking even more of that precious hard drive space and those CPU cycles.

I finally became fed up with all the smoke and mirrors and snake oil and started reinstalling the OS and applications. I tried my best to make the process as painless as possible by having all my applications backed up before doing a reinstall so I could start installing them immediately. This cut down the time after the install, but I still had to locate each application on my computer, or find the latest version on the internet, and back them up to CD.

Throughout this time people had advocated making recoverable images of the system using programs like Acronis True Image or Norton Ghost. I have always thought this was a silly, pointless endeavor. First, it takes nearly as long to reinstall the OS as it does to restore the image. Second, OSes are constantly getting updates, and those are the first thing I want installed; I do not want an application interfering with an OS update or opening a hole before I have time to patch it. Third, applications are constantly being updated and new ones keep coming out. The whole reason for the reinstall is to be clean and fresh; now I have the old and the new version of every app installed, and I have just wasted all the time I was supposedly saving by restoring from an image.

That process remained a yearly event around New Year's, continuing through Windows XP. When the first year of XP rolled around I did not even think about reinstalling; everything still worked pretty well. For the first time in almost ten years I found I was upgrading my PC more frequently than my OS. When I did a major upgrade I still reinstalled everything, just to be on the safe side. This still involved backing up some applications and tracking down others; as the internet became more accessible and searchable the hunt got easier, but it was never painless.

With Windows 7 coming out I thought I was in for the same adventure. Then I found a new tool called Ninite which helps ease the application install process. Ninite lets you select from a fairly large catalog of applications, then builds an installer that installs all of those applications for you at once. The installer checks whether you are running a 32-bit or 64-bit OS and picks the right version, and during the installation process it chooses all the defaults and declines to install any browser toolbars, adware, or spyware. This site saved me so much pain and time during my Windows 7 install; it did not eliminate the pain, but it sure helped out.

Thank you Ninite!

Cleaning and Organizing

You may have noticed recently that some old posts have changed or your feed reader has re-posted some articles as new; all of these centered around the goals postings. I subscribed to the feed myself and realized that the goals posts really obscure the meaty posts people probably came to read. I reorganized the site into two blogs, moved the goals posts over to the new blog, then cleaned up some of the articles that referred to them. Right here at Caffeinated Code is where I will still post the topical entries, and Caffeinated Goals will have the goals entries.

October 22, 2009

Backing Up Is Hard To Do

I'm sure everyone has a story of losing valuable data: not the stolen kind, the destroyed kind.

There are so many ways things can go wrong: drive motors fail, platters become unbalanced, chips go bad, electrical noise, communication failures, programs go awry, viruses... the list goes on... and on... and on. I can relate stories of each type; however, I have never found any of them to be as bad as letting a human near a computer.

I have done many stupid things to my data: deleted the wrong folder, formatted the wrong drive, thought I had backed things up, not saved often enough, and so on. The worst thing I have ever done was on a shared web host, where I ran "rm -rf /" instead of "rm -rf ./" and, as a lowly user, had the permissions to do it (for all the non-*nix'ers out there, that removes all files and folders recursively from the root of the file system, which can include all the disks and remotely mounted file systems, without prompting). Stories like this come from everywhere: friends, relatives, or even giant companies like T-Mobile and Microsoft, with the user data loss on the T-Mobile Sidekick caused by a server failure in the Microsoft/Danger cloud.

So this gets me to the meat of the story: I'm a bit of a bit freak (haha, yeah, I know, soul-crushing pun). The recent story about the Sidekick data loss got me thinking about more robust backup solutions, so I first thought about what questions need to be answered in order to decide which approach best suits the needs. Here are the five most important items I could come up with:
  • Is user error a concern? - e.g., an unintentionally deleted file or directory
  • Do old versions and/or copies need to be kept? - e.g., someone overwrote important calculations
  • Is timeliness a factor? - e.g., the contract only allows an hour of downtime a year
  • How many people and devices are there? - one computer, a small office with a few computers, or a large number
  • Is security a concern? - do certain people only need certain pieces of data, or is it OK for everyone to see everything?
User error is the easiest to solve, as all it requires is making frequent backups; in doing so the likelihood of complete data loss is reduced to near zero and data is recoverable to some relatively recent point in time. There is software built right into most operating systems, and then there is software like Acronis True Image® which will make bootable images of a system, making recovery as simple as throwing in a DVD, rebooting, and saying go.
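
Dedicated backup software aside, even a simple scheduled copy covers the user-error case surprisingly well. Here is a rough sketch, assuming a Unix-like machine with rsync installed and a backup drive mounted at a made-up path (adjust the names to taste):

    # copy the home directory to the backup drive; -a preserves permissions
    # and timestamps, -v lists what was copied
    rsync -av /home/yourname/ /mnt/backup/yourname/

    # to automate it, run crontab -e and add a line like this, which
    # repeats the copy every night at 2 AM
    0 2 * * * rsync -av /home/yourname/ /mnt/backup/yourname/

Anything deleted by mistake during the day is then still sitting in last night's copy.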

Versioning can be solved using a version control system. These are often used by programmers to manage source code so many developers can work on it at the same time; systems like Subversion, Git, and Mercurial exist for this purpose. This makes the process slightly more complicated, as it is no longer just a file system to deal with. The first new piece is a central repository, which is similar to a file server, just a bit more advanced. Users ask the repository for a local copy (aka check out), which they then use to view and make changes; once they are done making changes they send their changes back to the repository (aka check in) so others have them available. Once a system like this is in place it will maintain each check-in of a file as a separate version, so old versions can be looked at and used. These systems can grow to be very large because of all the copies of data, so keep that in mind if more than text files are planned to be placed within.
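
To make the check-out/check-in cycle concrete, here is a minimal sketch using Git; the repository address and file name are made up for illustration, and Subversion or Mercurial follow the same basic pattern with different commands:

    git clone ssh://myserver/srv/git/documents.git   # "check out" a local copy
    cd documents
    # ... edit budget.txt with any editor ...
    git add budget.txt
    git commit -m "Update the budget numbers"        # record a new version locally
    git push origin master                           # "check in" so others can get it
    git log -- budget.txt                            # every past version is listed
    git checkout HEAD~1 -- budget.txt                # bring back the previous version

As the paragraph above warns, the repository keeps every version of every file, so it grows quickly if large binary files are checked in.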

Timeliness can only be solved by having spares of everything on hand, or better yet a replicated hot-spare system on hand and ready to go. RAID sounds good at first but it is a pretty poor solution: if power goes out the write fails, if corrupt data is written it is still corrupt, plus RAID cards are expensive and you NEED to always have a spare on hand.

The number of users affects whether a server is warranted. If only one computer is used it is senseless to go any further than a drive or folder on that system. A small number of computers may indicate that a basic NAS is a good, inexpensive solution, but NAS boxes often lack the sophisticated controls of a full-fledged server. On a larger scale, where additional services (e.g. FTP, Windows file sharing, SSH, a web server) or access restrictions are needed, a server may be the best choice.

The last issue, security, follows from the answer about the number of users: a NAS may have some access controls to prevent users from getting at data they shouldn't, but a server running Windows, Linux, or Unix will have the necessary controls.

In my case I am implementing a solution for my home. My concerns are user error, versioning, and the number of devices (we have several computers, media servers, and other connected devices). My solution is to set up a server to deal with the number of devices, running Git to version my documents and code, and a NAS with a large amount of storage for automatic backups.

Be safe - protect your bits ;-P

October 13, 2009

Now to Caffeinated Code

Several times I have ventured into blogging only to fall vastly behind and effectively stop posting. Well, I wanted to bring my past articles forward into this new blog and start fresh and clean.

My plan for the blog is to start out posting a full article on a monthly basis; longer term this will transition to a more frequent schedule, hopefully to the point of multiple posts each week. I have never been an avid writer, so this is also an exercise in improving in that regard, and with that a lesson on what works for me; so far that is mostly free-form writing with a bit of internal mind mapping mixed in for good measure. One of the toughest things for me to get over is pressing the publish button: is the grammar OK, is it structurally sound, blah, blah. I really have to work on that one.

Update 11/5/09 - I removed much of this post as it had to do with my goals; to keep things clean, those now live in a separate blog at goals.caffeinated-code.com.

August 9, 2008

Eclipse on Solaris 10 x86

The project I work on started several years ago. In its initial stages our design goals were platform independence, robustness, and reasonable speed. To balance all of these, the Java platform was decided on as our best option. After the project got rolling, another package was envisioned to use the framework we had developed and allow us to continue development beyond the original goals. With this new project came the need for GUIs, which is where I come in: my primary role on this project is as a GUI developer. Developing for Java at the time gave us two prominent frameworks for our UI. The first option was Sun’s Java Swing, which had great cross-platform support, an impressive feature list, and lots of development activity. Swing had a few downfalls: at the time it was too unresponsive, lacked the native platform look and feel, and was a bit cumbersome. While these issues have greatly improved, at the time this meant Swing struck out (yes, the terrible pun was intended), so we decided on SWT, which is a native GUI framework. A native framework executes code built specifically for that platform, calling the OS functions to do much of the graphics work. Code that is compiled for an individual platform has several advantages, most importantly speed; it also has the same look and feel as other applications built for that system.

Working with SWT and the Eclipse RCP (Rich Client Platform) has generally been a pleasant experience. It is, however, nowhere near perfect: there are extra dependencies, missing features, and a larger learning curve, and when the platform has a problem it manages to make you feel like it is laughing in your face or poking you in a wound repeatedly. The way the platform was structured was basically least common denominator; the set of features is the subset of what all the platforms support. Well, this is how they describe it; sometimes a feature just does not work on a platform, or it works unexpectedly. One such example is the combo box: there are options for Simple and Drop Down, and in Windows these are two different things while in Linux they are the same. So you say, “It fails gracefully and uses what it can.” This is not the case with printing. On every platform printing is fairly straightforward: you set up your printer dialog, the user selects a printer, you give the printer what needs to be printed, and you are done. Printing on Solaris x86 isn’t quite the same; it just does not work.

My biggest issue with SWT is its lack of builds for Solaris. When Eclipse 3.2 was released there was an Early Access release for Solaris 10 x86; after that it disappeared. Eclipse has taken the stance that it is Sun’s job to build and release the binaries for their machines, and Sun is not doing it. This sounds like the Mac Java 6 issue (it has been beaten with a stick, but I will hit on it and update this link when I do), where one side makes it sound like it’s the other person’s problem. I do not see things this way: if it is your product and it has a problem in the production line, make it YOUR product’s problem and fix it as if it were your problem. Travelocity had this issue where hotels were losing reservations that had been passed on to them; it was clearly the hotels’ problem, yet Travelocity took it upon themselves to hire a company to call hotels and check that they had the reservations before people arrived. (Original NY Times Article, 37signals Post) If Sun is not willing to help make the product successful (and who can blame them; SWT and its parent IBM are competitors to them), make it the Eclipse Foundation’s problem: buy some Sun boxes and build your product on them. Do not leave it up to people like Blastwave.org; even if they are willing to pick up the slack, who knows what is missed here and there and what problems there are. While I commend them for getting Eclipse built for Solaris x86 and SPARC, it is not their job.

Java was built on the ideal of write once, run anywhere, and SWT is not living up to that ideal. Eclipse needs to step up to the plate: they need to complete their product and make it truly cross-platform. Kudos to Travelocity for going the extra mile; until the Eclipse Project learns to follow in their footsteps, they are doing a disservice to the Java community and programming in general.

July 1, 2008

Protecting Your Web Browsing

I am a bit of a privacy fanatic. I am that person; you know, the one who…
  • Checks the boxes in privacy statements to prevent companies from selling my information to their affiliates
  • Proactively establishes fraud alerts on my credit profile, requiring extra identification when applying for credit
  • Maintains separate “secret” e-mail addresses for my financial and other sensitive correspondence (on a server I own and operate)
  • Uses passwords so long it feels like it takes a week to type them in
  • Encrypts my private data, and only takes along the private data that is immediately necessary
  • Uses a separate browser just for sensitive internet usage
  • As well as so many other things
I do everything within my power to maintain my privacy except where I expressly want to give it up.
Have you ever heard the question, “If you have nothing to hide, why are you hiding everything?” My answer is choice. If I choose to give up my information, then it is voluntary. Additionally, the information I decide to give, when I give it, how I give it, whom I give it to, and why I give it are all my choice. If I do not take the time to decide these things, I am leaving someone else to decide how to use my information. No matter who it is, they do not have my best interests in mind when they utilize my information; they are only considering their own interests. I chose to take on this arduous task; I could instead accept the risks and allow others to handle MY private information however they see fit and divulge it in whatever manner they believe meets their standards of security.
Are you wondering where I am going with this yet? My girlfriend and I are enjoying a week-and-a-half-long relaxing vacation in our nation’s capital, Washington, D.C. We are staying in a beautiful hotel just outside of downtown D.C. In the lobby of the hotel, they provide free wireless internet (it costs $9.95 a day for internet in your room; I am cheap and the venture to the lobby is okay with me). Free wireless internet also typically comes with the *snicker* high security of open Wi-Fi. I am a proponent of open Wi-Fi in my home, which is a discussion I will follow up on in another post, but in the setting of a public, widely used access point I am not that comfortable. Open Wi-Fi offers no encryption of my traffic: instant messages, calendars, documents, e-mails, passwords, and the list continues. While someone eavesdropping on my internet activities worries me, I am more worried about the hotel I am staying at collecting that browsing information. Every bit of traffic sent through their wireless router is subject to their security and their procedures. Who knows if there is a proxy server in there capturing all my traffic, logging who I am, where I went, for how long, and countless other pieces of information, all before sending me on to my destination.
This brings me to my point: how do you protect yourself? My choice is SSH, which stands for Secure Shell, to create a heavily encrypted channel between my computer and my server. Once I am logged on to my server, I establish a “tunnel”: simply a port on my local machine that takes all the traffic generated on my machine and sends it over that encrypted channel to my server, which then sends it out to the Internet. This is the simplest technique to secure your communications while in an unknown or untrusted internet environment. It is easy to set up and requires little experience.
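For the impatient, the whole setup boils down to a single command; this is just a sketch, assuming an OpenSSH client and an account on a server of your own (substitute your real server and user name):

    # -D 5080 opens a SOCKS proxy on local port 5080; anything sent to that
    # port travels over the encrypted SSH connection and exits from the server
    # instead of the hotel network; -N means no remote shell, just forwarding
    ssh -N -D 5080 yourname@example.com

The browser then needs to be pointed at that local port, which is covered below.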
Setting up an SSH server is a bit outside the scope of this entry, but here are some useful links.
Setting up an SSH client in Windows (PuTTY)
  1. Download PuTTY (sorry if anyone likes some other program; PuTTY is easy, and used by myself and everyone I know)
  2. Open PuTTY, which brings you to the PuTTY Configuration dialog.
  3. On the left of the dialog there is an expander for SSH (under the Connection tree); expand it to show the Tunnels configuration.
  4. Enter the port you wish to use on your local machine into the Source port box (I know the ports in the low 5000s are free; web page traffic is typically port 80, so I usually use 5080), leave the Destination box empty, and change the selection below from Local to Dynamic.
  5. Click the Add button next to Source port; D5080 should appear in the Forwarded ports section (where 5080 is the port number you entered in the Source port box).
  6. Go back to the Session page (the first page that came up when PuTTY opened) and enter the address of the machine you wish to connect to in the Host Name box; in my case, I have SSH set up on the domain hosting my blog, t3hph33r.com. I am also going to save these settings so I can reuse them in the future by entering a name in the Saved Sessions box, then clicking Save. When I want to bring back my settings, I click the Load button and everything is filled in.
  7. Click Open to connect to the server (if this is the first time, you may be prompted with a message asking you to verify the SSH key; this should only happen the first time you connect, and if it happens again your server may have been compromised). This will give you a prompt asking for your login and password on the SSH server. Once you enter your login and password, you are connected and your private connection is established.
  8. The final step is to set up your browser (found below). If you would rather skip the dialogs entirely, there is a command-line sketch using plink just after these steps.
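For reference, PuTTY also ships with a command-line tool called plink that can set up the same dynamic forward without the dialogs. A quick sketch, using the same made-up port and a placeholder server:

    rem open a SOCKS proxy on local port 5080 over an SSH connection
    rem (run from a Command Prompt in the folder containing plink.exe)
    plink -ssh -D 5080 yourname@example.com

If you saved a session in step 6, plink -load followed by the session name should work as well.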
Setting up an SSH client w/forwarding in *nix
  1. If you are running any common distribution of *nix you most likely have SSH already installed, but if it is not, use your distribution's package manager to retrieve it (usually called OpenSSH, sometimes just SSH)
  2. Open a terminal
  3. Type the following command at the prompt: ssh -D 5080 user@host
    1. user is your user name on that machine
    2. host is the machine’s address (this can be an IP such as 123.45.67.89 or a domain name such as caffeinated-code.com)
  4. The final step is to set up your browser (found below); if you plan to do this regularly, see the optional shortcut sketched just after these steps.
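If this is something you will do on every trip, the same tunnel can be saved in the OpenSSH client configuration so one short command brings it up. A sketch, with a made-up alias, server, and user, of the lines to add to ~/.ssh/config:

    Host hoteltunnel
        HostName example.com
        User yourname
        DynamicForward 5080

After that, typing ssh hoteltunnel at the prompt does the same thing as step 3.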
Setting up your Browser
These instructions are for the latest versions of the major browsers as of this writing, so older versions may have different nomenclature. Older browsers are usually large security risks; you should consider upgrading to the latest version of your preferred browser.
  • Google Chrome:
    1. Go to the Customize and control Google Chrome menu then to Options
    2. Click Under the Hood
    3. Scroll down to Network
    4. Click Change proxy settings
    5. In the LAN Settings dialog select ‘Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections)’
    6. Click Advanced; in the Socks row enter localhost as the address and 5080 as the port (where 5080 is the port you selected in PuTTY)
  • Firefox 3:
    1. Go to the Tools menu then to Options
    2. Click Advanced
    3. Click the Network tab
    4. In the Connection area click Settings; this should bring up the connection settings dialog
    5. Select Manual proxy configuration.
    6. Leave all the boxes empty except the SOCKS ones: under Host type localhost, and in the Port box enter 5080 (or the port you chose in PuTTY)
    7. Click OK or Accept on all the dialogs
  • Internet Explorer 7
    1. Go to the Tools menu then Options
    2. Select the Connections tab
    3. Click the LAN Settings button near the bottom
    4. In the LAN Settings dialog select ‘Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections)’
    5. Click Advanced; in the Socks row enter localhost as the address and 5080 as the port (where 5080 is the port you selected in PuTTY)
  • Opera 9.5:
    1. Opera sadly does not support SOCKS proxies, but there is a workaround that is explained by this blog
This should serve you well on your next business trip or vacation, protecting your browsing, information, identity, and security.

June 16, 2008

Welcome to Caffeinated Code

Welcome to Caffeinated Code!
It has been my desire to start a blog documenting my experience, questions, and answers in relation to information technology. I think blogging is one of the best modes to accomplish this. Everyone knows the internet is full of information; sorting and locating it is difficult. There is a lot of cognitive friction and wasted time involved in searching for that information. A blog is a great way to glue all those pieces together in one place.
I am a GUI and application developer, working primarily with Java 6 and SWT. I have experience with Perl, Ruby, C/C++, assembly, embedded C, hardware development… Thus Caffeinated Code is intended to revolve around programming, design, and human-machine interaction. I am known for my wild tangents, so be prepared for thin connections. In the interest of full disclosure, my goals here are for my benefit: I want a place where I can document information I find useful and use it later. Providing this information in a public forum is a two-way street. I hope to extend benefit to those of you looking for similar information, where a question is laid out and people can add from their experiences. Comments do a great job of linking information together as well as providing insight into that information; as Jeff Atwood stated in A Blog Without Comments Is Not a Blog:
Personally, I’ve found that the comments can be the best, most informative part of a blog. Anyone who has visited Amazon and skipped directly to the user reviews will know exactly what I’m talking about.
Hopefully the tidbits of information, observations, questions, answers, comments, and posts begin to provide a place where the community can learn, find, and communicate.