Building a Status page for $5 per month

When we first built HR Partner, I wanted to have some sort of status page like most web apps do, to let our customers know about uptime availability and any scheduled maintenance that we had planned.

Our HR Partner status page at: status.hrpartner.io


Looking at most of the commercially available offerings, I found that while excellent, they were quite expensive compared to the project management, accounting and bug tracking tools that we already subscribed to.  Being a relatively small, bootstrapped startup, I didn't want to add too much to our already high monthly subscription burden at this stage.

Eventually, my search led me to Cachet, an open source status page app that seemed to have most of the features that the 'big boys' did.  At the end of the day, we managed to host Cachet on a virtual server for around $5 a month, and given that the cheapest commercial variant we found was $29 per month, I am happy that we got something working for a budget price that is hard to beat.

Given today's buyout of StatusPage.io, one of the main commercial vendors, by Atlassian, a lot of people have seen me post about my efforts and have emailed or PM'd me to ask how we went about this, so this post will hopefully walk you through the steps we took.

Hosting

Our main HR Partner web app is hosted by Amazon AWS, in their us-east-1 region.  Because we wanted some sort of redundancy in case of a major Amazon outage or regional catastrophe, we decided to host our status page on a Digital Ocean Droplet over on the West coast.  Different providers, different infrastructure, different areas.

So the first thing we did was to set up a VPS in Digital Ocean.  I picked the cheapest droplet they had, which was a $5 per month server running Ubuntu 14.04 (64 bit) with 512MB of RAM and 20GB of storage.  Cachet doesn't take much in the way of resources at all, so this was plenty for us.

The Stack

Once the Droplet was up and running, we just opened up a console to the server from within our DO control panel, and installed MySQL on it.  Digital Ocean have a great article on how to do this right here.  We simply followed the instructions step by step.

Next step was to follow the equally great instructions from the Cachet documentation right here to install Cachet on that VPS.

The only tricky thing we had to do was tweak the permissions within the Cachet folder - from memory, we had to run chown -R www-data:www-data over the Cachet folder and all subfolders so that the web server user owned everything.

Configuring Cachet

Once we had Cachet installed as per above, we adjusted the .env file to use our preinstalled MySQL instance for the database, and also to use our normal Amazon SES service for the sending of emails.  I believe we had to also change the default queue driver for sending emails.  Here is what our config file looked like:

APP_ENV=production
APP_DEBUG=false
APP_URL=http://status.hrpartner.io
APP_KEY=***secret key here***

DB_DRIVER=mysql
DB_HOST=localhost
DB_DATABASE=cachet
DB_USERNAME=***yourdbusername***
DB_PASSWORD=***yourdbpassword***
DB_PORT=null

CACHE_DRIVER=apc
SESSION_DRIVER=apc
QUEUE_DRIVER=sync
CACHET_EMOJI=false

MAIL_DRIVER=smtp
MAIL_HOST=email-smtp.us-east-1.amazonaws.com
MAIL_PORT=25
MAIL_USERNAME=***yourSESuserIAM***
MAIL_PASSWORD=***yourSESkey***
MAIL_ADDRESS=status@hrpartner.io
MAIL_NAME="HR Partner Status"
MAIL_ENCRYPTION=tls

That was really about it!  (Oh, don't forget to verify with Amazon SES the email address that Cachet will be sending from - in our case status@hrpartner.io - otherwise it won't pass the SES spam filtering.)

Last thing was to tweak our Amazon Route 53 service to point status.hrpartner.io to our Digital Ocean VPS IP address.  Done!

Now it was all a matter of setting up Cachet with the components we needed to report on, and we were away.  All in all, I think the install and configuration took less than an hour to do.

BONUS: Auto update

Because HR Partner is a fairly complex app, with multiple sub apps for the API, reporting engine etc., deployment can take a while to do, and can result in slow performance for up to 15 minutes at a time while the virtual instances are updated and synchronised.

We use Amazon's Elastic Beanstalk command line tools to deploy changes, and at first our procedures meant that before we ran a deployment, we manually logged into our Cachet server to flag the services that would be down, then deployed, waited, and went back to Cachet to flag them 'green' again.

This was quite tedious, and I wondered if there was an automated way.  It turns out there is.  Cachet has a great JSON API, so what we did in our projects was to create a couple of files under the .ebextensions folder in our project folder.  These files contain the scripts that we wanted Elastic Beanstalk to run before and after deployment.  First, we created a file called 01_file.yml for the before script:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/02_cachetupdatestart.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":2}' http://status.hrpartner.io/api/v1/components/2
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":2}' http://status.hrpartner.io/api/v1/components/4
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":2}' http://status.hrpartner.io/api/v1/components/5
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":2}' http://status.hrpartner.io/api/v1/components/6
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":2}' http://status.hrpartner.io/api/v1/components/8

Then we created a 02_file.yml for the after script:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/02_cachetupdatefinish.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":1}' http://status.hrpartner.io/api/v1/components/2
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":1}' http://status.hrpartner.io/api/v1/components/4
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":1}' http://status.hrpartner.io/api/v1/components/5
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":1}' http://status.hrpartner.io/api/v1/components/6
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X PUT -d '{"status":1}' http://status.hrpartner.io/api/v1/components/8
      curl -H "Content-Type: application/json;" -H "X-Cachet-Token: [secret token]" -X POST -d '{"value":1}' http://status.hrpartner.io/api/v1/metrics/1/points

(Replace the [secret token] above with your unique Cachet API token.)
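Incidentally, the same calls don't have to go through curl.  Here is a rough Ruby sketch of one of the lines above, using only the standard library - the token and component ID are placeholders for your own values, and the request is built separately so you can inspect it before sending:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Build the PUT request that flags a Cachet component as status 2
# ("performance issues").  Swap in your own API token and component ID.
uri = URI("http://status.hrpartner.io/api/v1/components/2")
request = Net::HTTP::Put.new(uri)
request['Content-Type'] = 'application/json'
request['X-Cachet-Token'] = '[secret token]'
request.body = { status: 2 }.to_json

puts request.body   # {"status":2}

# To actually send it:
# response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
```

Wrapping these in a small script makes it easy to loop over several component IDs instead of repeating one curl line per component.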

Now whenever we do an eb deploy command, the relevant status page components are marked 'yellow' for the duration of the deployment, then brought back up to 'green' again when completed.

Cheap As Chips

The only running cost for our status page is the $5 per month for the Digital Ocean hosting.  That is all.  We've been running this configuration for some months now with good results.  When revenue and usage get to the point where we need to upgrade, we may look at some of the commercial offerings, but for now, this setup works well for us.

I hope we have managed to inspire others to try the same.  As always, if you have feedback or suggestions on how we can do this better, I would love to hear from you.


Revisiting keyboards and synth

I have posted many articles on here of my recordings with acoustic and electric guitar, but this month I wanted to go back to my earliest musical experience, which was playing the piano.

Like most kids my age growing up in Malaysia, I was forced into taking piano lessons from a very early age.  I had many teachers over the years, and some were really nice, but there were a couple of awful ones, especially one rather evil lady who used to rap me over the knuckles whenever I played a wrong note.  That experience, more than anything else, made me shun formal music studies and move away from the piano and on to the electronic organ and then eventually guitar.

This month though, I had the urge to dig out my old MIDI keyboard and make an effort at recording a keyboard rich track.  I have always been a fan of David Bowie, but I had never really done anything significant to commemorate his recent passing.  I went through a catalogue of his songs in my mind, but all of a sudden I remembered a song that I really loved that was not written by him, but was the soundtrack of a movie he was in.  The track is "Merry Christmas Mr. Lawrence" from the movie of the same name, starring Mr. Bowie.  It was written by Ryuichi Sakamoto.

I scoured the net and found some piano scores.  These were... challenging... to say the least.  I had forgotten the depth of complexity to the piece.  Nevertheless, I gritted my teeth and dived in.  To disguise my poor playing, I decided to interpret the track as a 'techno' version of the original.

To warm up my fingers, I spent an hour or so recording this simple, yet charming piece by Erik Satie.

Then I spent the whole weekend putting together the main piece.  It was all recorded in Logic X on my iMac, using SampleTank for most of the sampled piano and instrument sounds.  I also used a bit of the Zebra synth from U-He.  Enjoy.

Recording acoustic guitar with a ribbon mic

My new recording setup


Ok, I am finally getting back into recording my guitar playing, and this weekend past, I made a recording of my acoustic guitar with a ribbon microphone.

It is a replica of a Blumlein stereo ribbon microphone, made by Nude Microphones.  I bought this particular mic late last year, but hadn't had the chance to use it until now.  One of the things holding me back was the fact that because this is a stereo ribbon mic, it takes up two channels on my audio interface.  I normally record with a mic and blend it with the signal from the internal guitar pickup, but that would mean I needed 3 inputs into my audio interface, and until I could upgrade my current 2 channel system to a 4 channel or greater, I kept putting it off.

Nevertheless, after attending a great music production workshop this week held by local artist Broadwing, where he espoused the benefits and technique of pure mic recording for an acoustic guitar, I decided to try the recording just with the ribbon mic.


The basic setup is as the picture above and to the right.  I placed the ribbon mic upright on a short floor stand, and positioned it at the point where the guitar neck meets the body.  I found that I had to position the mic closer to the guitar than my usual condenser mics - probably around 10 to 15 cm in order to get the best signal.

Because the stereo imaging was pointing approximately 45 degrees towards the sound hole and the 9th fret from that location, I noticed that the right channel was significantly louder than the left.  Makes sense of course, as the right channel was pointing towards the soundboard where all the actual tone is generated, and the left channel was merely pointing towards my left hand on the neck.  I actually wanted to keep it that way so that the left channel picked up the fret noises and string squeaks as I moved around, while the right channel would pick up my right hand picking noises.  I simply boosted the input signal on my audio interface for the left channel until they matched.

The audio interface I was using was a Yamaha/Steinberg UR-22 that I 'borrowed' from my son.  Not my usual Apogee Duet because I now have a new iMac without Firewire.  I am on the lookout for a 4+ channel Thunderbolt audio interface.

As per usual, I recorded the track in Logic X, which has become my DAW of choice.  I simply set up two tracks - one for each side of the mic, and hit record.

This was also the first time I used Logic's multi take feature.  Normally I will do a single take and then manually 'punch in' any corrections over any mistakes I may (and usually do) make.  However, this time I did 3 consecutive ordinary takes, and used the 'sweep' method to pick the best bits of each take and comp them together into one decent take.

The way this works is that you will see the three takes all under each other, and as you are playing back, you can simply use the mouse cursor to 'sweep' an area on track 1, 2 or 3 in order to make that the 'active' block that is merged into the final track.  I could not believe how quick and easy this process was.  My old method was so tedious and resulted in many pops and clicks where I meshed the takes together badly; this technique, however, does a smooth fade in/fade out between takes to eliminate all that.

Of course, you have to be absolutely spot on with the timing, and record everything to a metronome and stay on the beat for this to work.  At least it gave me a lot of practice in playing in perfect time!

Once I put together the three takes into one, I noted that the audio levels were still really low, though they had a nice character, so in post processing, I decided to use ONLY the Slate Digital plugins to tweak the EQ and add compression.  I used the Slate Virtual Mix Rack plugin on each track to EQ out some boomy bass and add some high end sparkle.  Then I used their Virtual Buss Compressor plugin to boost the volumes and even out the levels.  Finally, I used their Virtual Tape Machine plugin to add some good old tape warmth to the track.

The song here is "Growing Up" by Masaaki Kishibe.  I have really come to enjoy the pure melodic qualities of Kishibe's compositions, and intend to learn quite a few more of his songs in the future.

Here is the final result.  Hope you like it.

 

I must say I enjoyed recording on my new iMac - I had this one spec'd out with the 4GHz Core i7 processor and 3GB of RAM as well as an SSD drive.  It didn't miss a beat, unlike my poor 8 year old MacBook Pro.

For this recording, I used my beautiful Taylor BTO guitar, with a brand new set of Elixir Nanoweb strings on it.  The song is played with a capo on the second fret, and with the slightly shorter scale of this guitar, I think it gives it a nice bright sound.

Building a $20 "Prince" guitar

The past weekend was the ANZAC day long weekend, and seeing as I am a little burned out with programming work at the moment, I decided to take a little break from the keyboard and screen, and to tackle a project that I have been thinking about for years now - building a "cigar box" style guitar.

I've seen many people build these online, but never actually tried myself, so I looked around the house this weekend and decided that I had enough scrap material lying around to give it a go.

I don't actually have any cigar boxes lying around, but my wife did have an old art supply carry case that she no longer used, which was sitting in the back of the shed going mouldy, so she said I could have that.  Great.  I found a nice long piece of Merbau timber that was perfect for the neck.  80% there!  I collected some old tuners from a dismantled Squier Strat, cut up some threaded rod and bought an ornate bracket, and we pretty much had all the parts for the guitar.  No excuses.

I posted about this build 'nearly live' on my Instagram account.  When I started posting, I had no idea whether the project would come to fruition or not, so I was taking a risk, but also, I was putting in place some accountability, because I knew I had an audience following along with me.

I also had no plans - just a rough idea of how to go about this from a blog post I had seen many months ago.  Never mind - I actually built a real acoustic guitar 3 years ago, so this couldn't be any more difficult, could it?

As it turns out, the process was fairly straightforward, and I managed to accomplish the build using rudimentary tools, and some very journeyman carpentry skills.  As you can see from the progress photos, I decided to put frets on the neck of this guitar, although that was a moot point, as I was going to set it up as a very high action slide guitar.

Once I had assembled the guitar proper (with some able assistance from my older son), I handed the project off to my wife, and asked her to paint anything she liked on it.

Given the current loss to the music world, she decided to paint a portrait of Prince on the guitar, and I think she did a fabulous job of it.

That was a really fun build, and kept most of the family occupied and creative, and we ended up with a great tribute to a superb artist that left us all too soon.

 

Picking Non Random Colours for the UI

I think we are up to Part 9 of our often interrupted feature posts on the building of our new human resources SaaS app HR Partner.  I've lost count a little bit, but today I wanted to talk about one of the design issues we came across when creating the dashboard.

We love using the little pie charts from Chart.js to show the relative breakdowns of male/female employees, or distribution across departments and employment statuses.  The issue was, we didn't know how best to create a colour palette for the pie segments.  You see, our users can have anything from one to many dozens of pie slices, depending on their organisation and operating requirements.

For this reason, we didn't want to create a set number of colours in our palette, mainly in case our customers exceeded this limit.  We also didn't want to generate totally random segment colours each time the chart was generated because I believe that a part of a good UX is consistency, i.e. if a customer is used to seeing light blue for the department 'Finance', then seeing it as a dark red next time can throw them off.

Additionally, one of the big features of HR Partner is that HR consultants may work across completely different company entities on a day to day basis, and it would be nice if the Finance department in one company dashboard was the same colour as the Finance department in a totally separate company.

For that reason, we decided to set the segment colours based on the segment names.  So the name 'Finance' would generate the same colour on ANY company.

Our first efforts at this resulted in some quite garish colour choices, which were not pleasing at all, so in the end I decided that we would try to restrict the colours to lighter pastel hues that wouldn't clash too much, but would still be fairly easy to discern.

Secondly, I also realised that because our algorithm was only taking the first 6 characters of the name, there could be collisions with similar department or employment statuses (e.g. 'Part time' and 'Part time permanent' would result in the same colour).  I also wanted similar sounding names (like 'Finance' and 'Final') to generate colours that were not too similar to each other, so I decided to do a simple MD5 hash on the name to generate a semi unique hash upon which to generate the colour.  

Here is the Ruby helper method that we use to create the colour for the view.  It simply takes a text seed string, and generates a CSS hexadecimal code for the colour.

def get_pastel_colour(seed)
  # Generate a pleasing pastel colour from a fixed string seed
  # (Digest::MD5 comes from the Ruby standard library: require 'digest')
  colrstr = Digest::MD5.hexdigest(seed)[0..5]
  red   = ((colrstr[0..1].to_i(16).to_f / 255) * 127).to_i + 127
  green = ((colrstr[2..3].to_i(16).to_f / 255) * 127).to_i + 127
  blue  = ((colrstr[4..5].to_i(16).to_f / 255) * 127).to_i + 127
  "#" + "%02X" % red + "%02X" % green + "%02X" % blue
end
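To make the two properties we cared about concrete - determinism, and colours that always land in the lighter pastel range - here is the same helper as a standalone script (the department names are just examples):

```ruby
require 'digest'

# Same helper as above: seed string in, pastel CSS hex colour out
def get_pastel_colour(seed)
  colrstr = Digest::MD5.hexdigest(seed)[0..5]
  red   = ((colrstr[0..1].to_i(16).to_f / 255) * 127).to_i + 127
  green = ((colrstr[2..3].to_i(16).to_f / 255) * 127).to_i + 127
  blue  = ((colrstr[4..5].to_i(16).to_f / 255) * 127).to_i + 127
  "#" + "%02X" % red + "%02X" % green + "%02X" % blue
end

# The same name always maps to the same colour, in any company
puts get_pastel_colour("Finance") == get_pastel_colour("Finance")   # true

# Each channel lands between 0x7F (127) and 0xFE (254), so every
# colour stays in the lighter, pastel half of the RGB cube
colour = get_pastel_colour("Engineering")
puts colour.match?(/\A#[0-9A-F]{6}\z/)                              # true
```

Because the scaling maps 0..255 onto 127..254, no channel can ever be dark enough to clash badly with black chart labels.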

 

I think it ended up quite pleasing to the eye. We ended up using the same code to generate the colour within the calendars too, to get consistency with respect to leave categories.

 

I'd love to hear from other developers on how to improve on this so the colours can be a little brighter and stand out from each other a little more.

Disclaimer: Not saying we were the first to ever 'invent' this method, but there wasn't a lot that I could find on Google, so I thought I would post here in the hopes that it might help someone else who needed something similar.  The code above is based on something I found on StackOverflow, but I cannot find it again now to post proper attribution.

 

Back to recording again

Last month I had a major reorganisation in my home office/studio.  I moved my MacBook Pro to the downstairs office and swapped my Windows PC to my upstairs alcove studio.  I had always used my MacBook as my primary recording platform, but the upstairs studio was becoming too hot and noisy, and we had just installed a brand new air conditioner in the downstairs office that I wanted to take advantage of.

Over on the left for work, over on the right for play!

Over on the left for work, over on the right for play!

So this is the first recording in the new space, and I have to say that it was MUCH more enjoyable in the cool and (relative) quiet compared to the old space.  I still need to do some work on reducing reflections etc., but overall, I think it is positive.

I still need to bring my KRK studio monitors and set them up downstairs, so at the moment I am doing all mixing and mastering using my Sennheiser HD 25-SPII headphones, which is not ideal, but all I have to work with at the moment.

My fancy stereo ribbon mic still hasn't been used in anger yet - at least not until I get a 4 channel audio interface - so I used my trusty Rode NT1-A mic blended with the internal AP5 pickup in my venerable old Maton guitar.

This piece is called 'Dandelion' and is by Masaaki Kishibe.  I've actually been playing it for a couple of years now, and it turns out to be my wife's favourite of all the instrumental pieces I play.  It is a fairly simple song, but to capture that lilting feel is a bit tricky.  I don't think I have mastered it yet, but will keep working on it.  It doesn't help that I haven't played fingerstyle guitar for so long that my fingers are still not as nimble as I would like.

I mastered this track using the Slate Virtual Mix Rack plugins - nice, but a bit of a drain on the resources of my 7 year old MacBook.  I am not completely happy with it, as I think the final results are still too strident.  I need to reduce some of the high frequency and bring in more bass without making it too boomy or woofy.  It is all a learning process, and I think once I have my KRK monitors set up for mastering work, I can improve on it.

 

Who exactly is 'excited' by your latest release?

I've seen it on tons of blogs, tweets and posts... "XYZ is so excited to announce the release of our new feature on our app...".  Heck, I've done the same with my own apps too, so I am just as guilty as anyone else.

But let's face it.  It is usually only the authors, designers and developers that get excited.  And why not?  We spent hours/days/weeks building code from scratch, overcoming seemingly insurmountable technical problems, tweaking, perfecting and polishing.  Of course we will be as excited as new parents are, to release our baby into the wild and get some validation for all that effort.

But consider the user's perspective.  Is 'excited' really the right word for them?  We actually asked a select group of our users over time, and I think the more apt emotion would be 'interested', followed closely by 'apprehensive' or 'doubtful'.

You see - as the builder, you have already envisaged what your new features will be used for.  What they can achieve.  How they can be used for the betterment of humankind.  And that is great. You have it all road mapped in your mind.

But most of this takes place behind closed doors, with little or no buy in by the user.  Which is probably as it should be, if you want to focus on maximising your effort and minimising distractions.  After all, no one wants a horse designed by a committee.

What your users eventually see is something new that they have to learn.  Possibly it might be a thing that they could use, but then they wonder if they have the time to learn it and make it fit with their day to day operations.  Will they have to change the way they do things in order to make best use of it?

So perhaps we need to change the way we word new product or new feature announcements.  I certainly intend to do so moving forward.  What would be the best word choice?  Is 'proud' a better way to announce something new?  How do we get 'buy in' from the user, so that they feel like they are a part of the journey, rather than just some surprised passer-by who has to reel back when developers jump out of dark doorways and say "Boo, I'm so excited about this..."?

I look forward to your thoughts and ideas.

 

Errors don't have to be boring

This is part 7 in the chronicles of building HR Partner, our latest web app.

A short one today.  I was designing the 404 and 500 error screens for our Ruby web app, and decided to go outside the box a little.

Usually, the 404 error page is a fairly boring affair, telling the user that they have tried to load an invalid page.  I thought to make it more interesting, I would incorporate an ever changing background for the error pages.

I am using a dynamic CSS background for the error pages, which links to unsplash.it to load up a random grayscale image as the background.

This way, every time that a user hits an error page, they will still get the large '404' or '500' error number, but overlaid on a different background each time.  I have no control over what image gets shown, but I find myself just hitting invalid pages every now and then during my development routine - just to see what pretty landscapes show up.

The body style tag looks something like the following:

<body class='black-bg' style='height: 100%; background:url(https://unsplash.it/g/1000/800/?random) no-repeat fixed center center;'>
  ... rest of 404 error info
</body>

So as I said - error messages certainly do not have to be boring!

How I rolled my own explainer video, in a weekend, for under $100

Being totally bootstrapped and non funded, I have to market my web app HR Partner on the smell of an oily rag, and do all the marketing and other promotional tasks myself to keep the costs down.

I’ve been told many times that I basically have to have an ‘explainer video’ to introduce people to my app, because it is the quickest and most effective way to get people interested and signing up.

Well, I hunted around and spoke to several companies that specialise in making these explainer videos. I gave them my specifications, and received back quotes ranging from $2000 up to $6000 to make a 60 to 120 second video.

I debated going to 99designs.com or fiverr.com, but in the end, decided against it because every time I began a conversation on those platforms, I always felt that the price wasn’t as concrete as the other firms I spoke to. It was always along the lines of “Well, we have a starting price of $x, but if you need this, then it will be $y extra, and if you wanted that, it will be $z more…” etc.

So I thought I would throw caution to the wind and look at doing the video myself, over the past weekend. I started on Saturday morning.

The first thing I did was to go to Envato, where I have an account, and search on their VideoHive sub site for an explainer video template. I found one there for around $40 which I quite liked. Then, I went across to their AudioJungle sub site to find a background ambient music track to suit the video. Found one. Total time searching and evaluating on Envato was around 2 hours.

Next issue was that the explainer template required Adobe After Effects to modify, so I signed up for a one month subscription for JUST After Effects on the Adobe Creative Cloud — total cost, approx. $20.

I had never used After Effects before, so while the app was downloading, I viewed a couple of 30 minute introduction and tutorial videos on YouTube. It didn't seem too hard. I figured that I had managed to self learn other Adobe products before, and with my development background, I felt confident I could get to grips with it.

Once installed, I spent the better part of Sunday afternoon tweaking and customising the AE template, and wrote up a short script. Well, I thought it was short, but it ended up being around 3 minutes long.

Then came time to do the voiceover. I hate the sound of my own voice, but luckily my wife has a really nice speaking voice (she has actually been asked to be a voiceover artist on a few occasions). So she did the voiceover for me. One take, 5 minutes, and we were done.

I guess the other good part is that I am a musician as well, so I have some fairly good quality studio equipment which ensured that the recording sounded decent. I did some post processing in Logic using compressor and reverb plugins to tidy up the audio, and mix in the backing music I had grabbed from Audio Jungle.

I managed to complete the post processing on Sunday night, and uploaded to Vimeo on Monday morning, ready to embed the video on my website, which I will do later today after I have a break.

I think I spent a total of around 10 hours of my own time over the weekend doing the editing and audio post processing. After Effects turned out to be fairly simple to learn and use in the end.

So, my total costs (approx) were:

  • VideoHive explainer template — $40
  • AudioJungle backing track — $20
  • Adobe After Effects subscription — $20 (one month)
  • Voiceover artist — $0 (thanks to wifey)
  • Audio production — $0
  • Template customisation — $0

GOOD BITS: Having a voice over artist ready to hand. This is an important part of a video, and I can appreciate it is difficult to find a voice that fits. I consider myself lucky. Also, the explainer templates on Envato were REALLY good. Better than I expected.

TEDIOUS BITS: Learning After Effects from scratch. But the hardest bit was syncing up the explainer animations to the voiceover. I came close, but had to do a fair bit of chopping and changing on both the explainer template and the audio file to get things to line up.

No disrespect at all to the companies who charge the prices they do for the production of these videos. It is definitely a taxing process, and my efforts are going to be very amateurish compared to theirs. If I had the funding available, I would have definitely engaged one of them to do this for me, but in this case, I had to work within my means.

Final results on Vimeo here: https://vimeo.com/157981359


Dogfooding 101

dogfooding (computing, informal) - the practice of a company using its own product or service so as to test it before it is made available to customers.

We have now got to the stage where we have launched our latest web app, HR Partner.  Though we have reached the launch stage, there are still a few things about its development that I'd like to share with you.

One of the things that we deemed important to have from the outset was an API (Application Programming Interface) that our users could use to query their employee data in HR Partner and integrate it with other systems.  This was slated as a 'Phase 3' project, to be commenced well after initial launch, once we had a solid base of customers on board.

However, one of the other things that we wanted to have in HR Partner was an extremely flexible reporting system.  Basically, we wanted our users to be able to query their data and filter (and sort) the data by any database column, including custom fields that the user can create within HR Partner.

When designing the architecture of the reporting engine, we realised that it would have to be quite complex, with a ton of checks and meta programming to enable the user to specify just about any query they wanted across the main employee file, plus the related lookup files.

We realised that we would essentially be duplicating the 'engine' for the reporting side, along with the engine that would drive the API later.  So we decided, why not kill two birds with one stone here - and we temporarily shelved the reporting engine development to sit down and build version 1 of our API engine.

Version 1 was basically purely a 'read only' API engine that allowed us to query the database tables and return the results as a JSON data stream.

THEN, we went ahead and started building the front end for the reporting engine, which directly used our API engine to pull and sort the data we needed for the reports.  All transparent, and invisible to the end user.

As you can imagine, this was a major change to our development timelines, but it actually saved us time down the track.  The bonus is that we get to see our API being used under real world stress conditions.  We still haven't released the API specs publicly, but plan to do so in the coming months once we have completed stress testing and built the read/write components.

Building the API first also allowed us to start writing other apps for integrating various legacy payroll systems to HR Partner.  One of the payroll systems we support is Attache, which is a 30 year old Windows based system that uses the old Microsoft ODBC method to extract data.  We have designed a 'gateway' Windows app which uses a combination of ODBC and JSON API to pull data from Attache and upload to HR Partner in the background.

We used the Padrino framework to build HR Partner, which is based around Ruby/Sinatra.  Padrino allows us to mount separate apps within the same server easily, so we essentially have one app for the main HR Partner app, and another separately loaded app for the API, which allows us to still host on one AWS Elastic Beanstalk instance, yet be able to separately upgrade and take apps online/offline.
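For anyone unfamiliar with Padrino, the mounting in config/apps.rb looks roughly like the sketch below.  (The module names and file paths here are illustrative only, not our actual ones - the point is just that each sub-app gets its own mount point on the same server.)

```ruby
# config/apps.rb - a rough sketch of mounting two Padrino sub-apps.
# Each can be upgraded or taken online/offline independently of the other.
Padrino.mount("HrPartner::App", app_file: Padrino.root("app/app.rb")).to("/")
Padrino.mount("HrPartner::Api", app_file: Padrino.root("api/app.rb")).to("/api")
```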

I am glad we made the design decision to shift our build targets around and get the API built first.  I can appreciate now what a lot of other startups are trying to do, by building the API, then designing their app around the API.  It makes for a far more robust and solid system.