The Disconnectivity of Remote Working

Photo by trail on Unsplash

Throughout the 30+ years of running my own business, I have explored all aspects of teamwork: from having my own in-house team, to a totally remote team, to a combined mix of the two.

Which do I prefer? Now THAT is an interesting question.

I would consider myself an introvert, and I do prefer working by myself in my own home office a lot of the time.  However, some of my best working memories have been when I have been in an office situation and working alongside others.

There is something about the human connection of being in the same space as others.  A myriad of non-verbal cues and communication goes on, most of it at a subconscious level, which lends itself to a better sense of being part of a community that is pulling in the same direction.

Case in point - my current startup is a fully remote setup.  For the past two years, it was really only myself and my co-founder, who worked in a small town literally on the other side of the world.

Now, my co-founder and I had a great working relationship, and we produced a ton of stuff together.  Communication was mainly via Slack and email, and we used to talk on a daily basis PLUS have a weekly web video catch-up.

My co-founder left the startup about 2 months ago.  The first week was really challenging, as I sorely missed having someone to talk to while working away on new ideas.

But by the end of the first month, I started to get used to working by myself again.  After all, I had run the startup by myself for about a year before my co-founder joined me.  So it felt basically the same as it did before.

By the end of the second month, I was actually struggling to recall even working with my former co-founder.  This concerned me, as I have always considered myself a sensitive person who likes to reminisce about happy memories.  So why was it suddenly so difficult to recall any of the good times we had had?  My co-founder's departure was amicable, so this wasn't the result of any ill feelings.  Rather, those experiences and memories just seemed to float out of reach, and without anything to anchor them to, they wafted away whenever I tried to recall them.

Even when I went back through a Slack conversation to find an old screenshot or idea, I would re-read some of our exchanges - but I struggled to actually remember the emotions or personality behind those chats.  Re-reading them felt somehow cold and impersonal, and I couldn't tell if I was tired, angry, excited or happy while typing those paragraphs.

As a direct contrast to that, I can still clearly recall events that happened in my office over 20 years ago when I worked only feet away from the rest of my team.

Tiny things like a shared look, collapsing on the floor laughing at an 'in-house' joke, or the casual punch on the shoulder as someone congratulated you while walking past your desk - all those things added so much to my working experience that I, even as a self-confessed 'lone wolf', missed them terribly.

There is something about being around people who are experiencing the highs and lows of their lives (even outside of work) that is strangely enriching and bonding.

To extend this even further - I was looking through my Facebook feed just this week, and I realised that I have become close friends with the vast majority of people I have worked with face to face over the decades.  Remote workers, much less so.  For some reason, when a former remote staff member posts about their family or holiday or another life event, I find myself far less engaged with their thoughts and feelings.  There is still an element of them being an unknown 'stranger', so reading such intimate details of their lives instils a slight sense of guilt, and I tend to deliberately avoid seeming too familiar or presumptuous when reading their posts.

While my recently departed co-founder and I had discussed an actual company meetup where we (and potential future staff) could meet face to face, it never happened during our working time together.  And now that my co-founder has moved on, I have accepted that we will probably never, ever meet in real life.

I am in the process of building up a whole new remote team now though, and am looking at strategies to try and counter this feeling of disconnection with those that I will figuratively work alongside for the coming years.

Regular company face to face meetups are definitely on the cards.  But I am also thinking that we might need to put something else in place outside of those times.

But what could take the virtual place of those little moments, like tossing a paper plane across the office to see whose desk it would land on, or the understanding look that I would share with a colleague across from me after hanging up from a call with a difficult client, or the good-natured group ribbing that would happen when a co-worker brought a delicious-smelling lunch into the office?  I have yet to see a web or mobile app that can replicate this sort of interaction.

Perhaps I have to go and invent it?

Building a pricing screen which reflects local currency


On my morning walks, I have been enjoying listening to SaaS-related podcasts, and yesterday morning I was listening to Jane Portman's "UI Breakfast" podcast, in which she was talking to Rob Turlinckx about SaaS pricing pages.

Now, we have just done a revamp of our own HR Partner pricing page, which actually follows most of the suggestions they talked about in the podcast, i.e. offering multiple currencies, showing the user's local currency automatically upon loading, showing monthly vs annual pricing, etc.

HRP Pricing Page.png

Our page is quite complex (though thankfully I have a great UX designer on my team who made it as easy to use as possible), but I thought I would expand on a couple of things we did there to display pricing in various currencies - and most importantly, how we detect the user's location and show them the relevant pricing in their own local currency (defaulting back to USD if they are in a location outside our usual pricing).

What I have done is put together a simple pricing page on GitHub which you are welcome to explore and dissect.  My aim was to achieve localised pricing with (a) minimal JavaScript code, (b) everything on one page and (c) absolutely FREE - no calling upon expensive landing page or A/B testing services to generate different pricing pages at all!

This is what it looks like (yep, I'm a coder, not a designer!):

TEST Pricing Page.png

Take a look at it running live here:

Here is a Gist of the actual web page code:

<!DOCTYPE html>
<html lang="en">

  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <meta name="description" content="">
    <meta name="author" content="">

    <title>Demonstration of dynamic currency display</title>

    <!-- Bootstrap core CSS -->
    <link href="vendor/bootstrap/css/bootstrap.min.css" rel="stylesheet">

    <!-- Custom styles for this template -->
    <link href="css/heroic-features.css" rel="stylesheet">
  </head>

  <body>

    <!-- Navigation -->
    <nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
      <div class="container">
        <a class="navbar-brand" href="#">Widgets Inc.</a>
        <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarResponsive" aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
          <span class="navbar-toggler-icon"></span>
        </button>
        <div class="collapse navbar-collapse" id="navbarResponsive">
          <ul class="navbar-nav ml-auto">
            <li class="nav-item active">
              <a class="nav-link" href="#">Home
                <span class="sr-only">(current)</span>
              </a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="#">About</a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="#">Services</a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="#">Contact</a>
            </li>
          </ul>
        </div>
      </div>
    </nav>

    <!-- Page Content -->
    <div class="container">

      <!-- Jumbotron Header -->
      <header class="jumbotron my-4">
        <h1 class="display-3">Hello there!</h1>
        <p class="lead">The pricing shown below should correspond to your location (or default to the US) like the pricing page on our <a href="" target="_blank">HR Partner</a> site.</p>
        <a href="" class="btn btn-primary btn-lg">See it in action!</a>
      </header>

      <!-- Currency selector -->
      <div class="row text-center">
        <div class="col-lg-12 m-2">
          <p>Show me pricing in: </p>
          <div class="btn-group mb-3" role="group" aria-label="Select Currency">
            <button type="button" class="btn btn-secondary" onclick="displayPrice('USD');">USD</button>
            <button type="button" class="btn btn-secondary" onclick="displayPrice('GBP');">GBP</button>
            <button type="button" class="btn btn-secondary" onclick="displayPrice('EUR');">EUR</button>
            <button type="button" class="btn btn-secondary" onclick="displayPrice('AUD');">AUD</button>
          </div>
        </div>
      </div>

      <!-- Page Features -->
      <div class="row text-center">

        <div class="col-lg-3 col-md-6 mb-4">
          <div class="card">
            <div class="card-header">
              <h2 class="text-primary">FREE</h2>
            </div>
            <div class="card-body">
              <h4 class="card-title pricing USD_pricing">USD $0</h4>
              <h4 class="card-title pricing GBP_pricing collapse">GBP &pound;0</h4>
              <h4 class="card-title pricing EUR_pricing collapse">EUR &euro;0</h4>
              <h4 class="card-title pricing AUD_pricing collapse">AUD $0</h4>
              <p class="card-text text-info">Our free plan will suit the Scrooge McDucks among you.</p>
            </div>
            <div class="card-footer">
              <a href="#" class="btn btn-primary">Find Out More!</a>
            </div>
          </div>
        </div>

        <div class="col-lg-3 col-md-6 mb-4">
          <div class="card">
            <div class="card-header">
              <h2 class="text-primary">Basic</h2>
            </div>
            <div class="card-body">
              <h4 class="card-title pricing USD_pricing">USD $10</h4>
              <h4 class="card-title pricing GBP_pricing collapse">GBP &pound;8</h4>
              <h4 class="card-title pricing EUR_pricing collapse">EUR &euro;9</h4>
              <h4 class="card-title pricing AUD_pricing collapse">AUD $14</h4>
              <p class="card-text text-info">This basic plan should get you kick started.</p>
            </div>
            <div class="card-footer">
              <a href="#" class="btn btn-primary">Find Out More!</a>
            </div>
          </div>
        </div>

        <div class="col-lg-3 col-md-6 mb-4">
          <div class="card">
            <div class="card-header">
              <h2 class="text-primary">Medium</h2>
            </div>
            <div class="card-body">
              <h4 class="card-title pricing USD_pricing">USD $50</h4>
              <h4 class="card-title pricing GBP_pricing collapse">GBP &pound;38</h4>
              <h4 class="card-title pricing EUR_pricing collapse">EUR &euro;42</h4>
              <h4 class="card-title pricing AUD_pricing collapse">AUD $67</h4>
              <p class="card-text text-info">For businesses that really need all the bells and whistles.</p>
            </div>
            <div class="card-footer">
              <a href="#" class="btn btn-primary">Find Out More!</a>
            </div>
          </div>
        </div>

        <div class="col-lg-3 col-md-6 mb-4">
          <div class="card">
            <div class="card-header">
              <h2 class="text-primary">Enterprise</h2>
            </div>
            <div class="card-body">
              <h4 class="card-title pricing USD_pricing">USD $200</h4>
              <h4 class="card-title pricing GBP_pricing collapse">GBP &pound;152</h4>
              <h4 class="card-title pricing EUR_pricing collapse">EUR &euro;171</h4>
              <h4 class="card-title pricing AUD_pricing collapse">AUD $270</h4>
              <p class="card-text text-info">If you have more money than Elon Musk, then this is the plan for you.</p>
            </div>
            <div class="card-footer">
              <a href="#" class="btn btn-primary">Find Out More!</a>
            </div>
          </div>
        </div>

      </div>
      <!-- /.row -->

    </div>
    <!-- /.container -->

    <!-- Footer -->
    <footer class="py-5 bg-dark">
      <div class="container">
        <p class="m-0 text-center text-white">Copyright &copy; Widgets Inc. 2018</p>
      </div>
      <!-- /.container -->
    </footer>

    <!-- Bootstrap core JavaScript -->
    <script src="vendor/jquery/jquery.min.js"></script>
    <script src="vendor/bootstrap/js/bootstrap.bundle.min.js"></script>

    <script>
      $(document).ready(function () {
        // Query the (free) IPData service for the visitor's locale info.
        // NOTE: endpoint URL form reconstructed here - check the IPData docs for the exact format.
        $.get("https://api.ipdata.co/?api-key=xxxxxxxxx", function (response) {
          var detectedCurrency = response.currency.code;
          displayPrice(detectedCurrency);
        }, "jsonp");
      });

      displayPrice = function (currency) {
        // First, let's hide all the current pricing
        $(".pricing").hide();
        // Is the currency within the valid range of currencies that we wish to show?
        if (currency !== null && ["USD", "AUD", "GBP", "EUR"].indexOf(currency) > -1) {
          // If yes, then show the currency
          $("." + currency + "_pricing").show();
        } else {
          // If no, then just show USD pricing
          $(".USD_pricing").show();
        }
      };
    </script>

  </body>

</html>

If you want the full source code (with the Bootstrap and jQuery libraries etc., so you can test on your own server), then you can clone my code from my GitHub repository:

Let's break down the code here.

Firstly, I am using a simple Bootstrap 4 page layout, which has a header block, then 4 columns for the pricing.  If you look at each pricing column though, I have included the 4 currencies that I want to show:

        <div class="col-lg-3 col-md-6 mb-4">
          <div class="card">
            <div class="card-header">
              <h2 class="text-primary">FREE</h2>
            </div>
            <div class="card-body">
              <h4 class="card-title pricing USD_pricing">USD $0</h4>
              <h4 class="card-title pricing GBP_pricing collapse">GBP &pound;0</h4>
              <h4 class="card-title pricing EUR_pricing collapse">EUR &euro;0</h4>
              <h4 class="card-title pricing AUD_pricing collapse">AUD $0</h4>
              <p class="card-text text-info">Our free plan will suit the Scrooge McDucks among you.</p>
            </div>
            <div class="card-footer">
              <a href="#" class="btn btn-primary">Find Out More!</a>
            </div>
          </div>
        </div>

But have a look at the `collapse` class used on all but the USD pricing in the source code.  This 'collapses' the other currencies so that they are not visible upon page load, showing you the USD pricing as the default instead.

I have also given all the pricing `<h4>` tags the classes `pricing` and `XXX_pricing` (where XXX is the three-letter currency code for each locale).  You will see later how I use these to both hide and show the relevant pricing via a simple JavaScript function.

Now let's look at the JavaScript code at the bottom of the page.  There are two blocks we need to look at, namely:

      $(document).ready(function () {
        $.get("https://api.ipdata.co/?api-key=xxxxxxxxx", function (response) {
          var detectedCurrency = response.currency.code;
          displayPrice(detectedCurrency);
        }, "jsonp");
      });

This bit of code waits until the page is completely loaded, then queries the free IPData service for the user's locale information, including the currency code associated with their locale.

This information is contained in the JSON response from the IPData service, and you can get to it via `response.currency.code`.

Be warned that this call, even though it is done as a background asynchronous AJAX call, can take a few seconds to return a result - which is why we show the default USD pricing initially, rather than no pricing at all.  This can cause a disconcerting flicker upon page load, so you may want to show NO pricing on your own page as a default, which you can do by adding the `collapse` class to ALL pricing lines initially.  It is entirely up to you.

TIP: Just remember to replace the 'xxxxxxxxx' dummy API key above with the free one you get from IPData!

The next bit of javascript is the one that manipulates the page DOM to show or hide the relevant currencies:

      displayPrice = function (currency) {
        // First, let's hide all the current pricing
        $(".pricing").hide();
        // Is the currency within the valid range of currencies that we wish to show?
        if (currency !== null && ["USD", "AUD", "GBP", "EUR"].indexOf(currency) > -1) {
          // If yes, then show the currency
          $("." + currency + "_pricing").show();
        } else {
          // If no, then just show USD pricing
          $(".USD_pricing").show();
        }
      };

This function takes just one parameter, the three-letter currency code, and then it:

  1. Hides ALL the pricing lines by default, then
  2. Checks the currency code to see if it is one of the 4 allowed codes on our page, then
  3. If it is allowed, shows the pricing which has the class of `[Currency Code]_pricing`, or
  4. Shows the default USD pricing lines.
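
For clarity, here is that same fallback rule as a tiny, testable sketch in Ruby (the function name and constant are mine, purely illustrative - the real page does this in JavaScript):

```ruby
# Illustrative Ruby sketch of the displayPrice fallback rule:
# show the detected currency only if we actually price in it, else USD.
SUPPORTED_CURRENCIES = ["USD", "AUD", "GBP", "EUR"]

def currency_to_show(detected)
  SUPPORTED_CURRENCIES.include?(detected) ? detected : "USD"
end

currency_to_show("AUD")  # => "AUD"
currency_to_show("JPY")  # => "USD"  (not one of our four, so fall back)
currency_to_show(nil)    # => "USD"  (detection failed entirely)
```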


One last thing - I realise that users may sometimes want to see the pricing in currencies other than their local one, most often to compare the USD values against other services they use, so we should give them the ability to do so.

That is why I placed a button group between the header and the pricing boxes, asking which currency they want to see.  Clicking any of the buttons calls the `displayPrice()` function to show that locale's currency.

      <div class="row text-center">
        <div class="col-lg-12 m-2">
          <p>Show me pricing in: </p>
          <div class="btn-group mb-3" role="group" aria-label="Select Currency">
            <button type="button" class="btn btn-secondary" onclick="displayPrice('USD');">USD</button>
            <button type="button" class="btn btn-secondary" onclick="displayPrice('GBP');">GBP</button>
            <button type="button" class="btn btn-secondary" onclick="displayPrice('EUR');">EUR</button>
            <button type="button" class="btn btn-secondary" onclick="displayPrice('AUD');">AUD</button>
          </div>
        </div>
      </div>


That's it!  Pretty easy (and cheap), isn't it?  No need for a complex content management system or PHP/Ruby scripting.  This can all be done on a free website hosting platform like Amazon S3 (which we use) or GitHub Pages.

Have fun with it.  For my next post, I might showcase how to use a free foreign currency exchange API to dynamically calculate the other pricing based on that day's exchange rates!
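
As a teaser, the core of that calculation could be as simple as multiplying a USD base price by the day's rate and rounding to a tidy figure.  Here is a purely hypothetical Ruby sketch (the plan names and exchange rates are made-up examples, not our real pricing logic):

```ruby
# Hypothetical: derive local prices from USD base prices and exchange rates.
BASE_USD = { "Basic" => 10, "Medium" => 50, "Enterprise" => 200 }
RATES    = { "USD" => 1.0, "GBP" => 0.76, "EUR" => 0.85, "AUD" => 1.35 }  # example rates only

def localised_price(plan, currency)
  (BASE_USD[plan] * RATES[currency]).round  # round to a whole "marketing" price
end

localised_price("Medium", "GBP")  # => 38
localised_price("Basic", "AUD")   # => 14
```

In practice you would fetch the rates from an FX API once a day, rather than hard-coding them as above.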


Is verbosity helpful when designing app screens?

Apologies for the lengthy absence from posting on here.  Now that I have grown my HR Partner team a little, I have some spare time on my hands, plus some renewed motivation and energy to work on improving the system with them.

As a "programmer pretending to be a designer", I am always accused of making my application screens just too verbose.  I tend to pepper the screen real estate with hints, tips and (what I think are) helpful snippets of information that will make the user's life easier.

Of course, when we did some real world UX testing a few months ago, I was astounded to see that most users simply didn't read the information presented to them, but instead would look for distinct CTA (call to action) links or buttons and try those out instead.

This has made me rethink my whole verbose strategy, and I have removed a lot of excess wording from many of our HR app's screens (with the able assistance and guidance of both my talented former UX designer and my new one).  Conceptually, this has been a hard thing for me to do - removing what I thought were helpful prompts and replacing them with an image or a single-word link to our help pages.

However, there are some screens where detailed explanations ARE still necessary - mainly the screen for importing a CSV file into HR Partner.  Seeing as a lot of our new users use this screen, and we have absolutely no control over the layout and format of the CSV file the customer supplies, I thought that some extra explanation at the bottom of the import screen might guide them to a pain-free import process.

Here was the old explanation text at the bottom of the CSV import screen:

Screen Shot 2018-07-10 at 9.16.28 am.png

As you can see - very wordy.  But what niggled at my UX designer the most was that the explanations for Gender, Departments, Locations etc. were still fairly vague, and worse still, they prompted the user to leave the import screen and go elsewhere to look up the valid import options.

What she suggested was that we present the valid options all on the one screen, so users can check and modify their import file without having to leave it - and without the chance to get distracted or lose interest.

So, the new screen looks like:

Screen Shot 2018-07-10 at 9.09.45 am.png

Because categories such as Department or Employment Status only have about 5 or 6 items in them, it was no problem to actually list them out on this screen directly.  As a bonus, we also modified the import code to use some default values if the information supplied in the import file was missing or invalid.
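
The defaulting logic itself is nothing fancy - something along these lines (a purely hypothetical sketch, not HR Partner's actual import code; the category values are examples):

```ruby
# Hypothetical sketch: match a free-form CSV value against the known list
# (case-insensitively), or fall back to a sensible default.
VALID_STATUSES = ["Full Time", "Part Time", "Casual", "Contract"]

def normalise_status(value)
  match = VALID_STATUSES.find { |s| s.casecmp?(value.to_s.strip) }
  match || "Full Time"  # default used when the field is missing or invalid
end

normalise_status("part time")  # => "Part Time"
normalise_status(nil)          # => "Full Time"
```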

We actually added more words to the mix, but I am hoping that in this instance, the extra information will help the user to create a better import file and have a better user experience at the end of the day.

Can you think of any other way we can improve on this? I'd love to hear your thoughts in the comments.


Racing Along - Building a Telemetry system using Crystal & RethinkDB


Like many young lads, I often dreamed of being a Formula 1 race car driver, and I have fond memories of watching the likes of Ayrton Senna, Alain Prost and Nigel Mansell race around Adelaide in the late '80s.  The smell, action and romance of F1 always appealed to me.

Alas, my driving skills are barely passable on the public roads, so a race track is a far safer place without me hurling a one ton machine around it.  I have kept in touch with the technological advances within the competition though, and am amazed at how far it has come these days.  I distinctly remember Jackie Stewart pausing the race commentary back in the '80s so we could hear one of the first radio transmissions between driver and engineer.  I think it was Alain Prost, and the quality of the transmission was so bad that no one could work out what Prost was saying.

Nowadays, a wealth of data is sent between the race car and the engineers on the pit wall - and even to the team HQ on the other side of the world - who often know the health of the car far better than the driver piloting it at 300km/h.

Back to me.  I've been vicariously working out my lost race driver frustrations on Codemasters' F1 games for the past few years, which are quite realistic, with better graphics and simulation each year.  I only recently found out that Codemasters actually supplies a real-time telemetry feed from the game via UDP.  I was excited to see so many third-party vendors creating apps and race accessories that use this feed (e.g. steering wheels with speed, engine rev and gear displays on them).

Last weekend I thought to myself - "Why don't I try and create a racing telemetry dashboard? The kind that the race engineers, or the team engineers back at HQ, would use?".  Could I, in fact, create a real-time dashboard that ran in a web browser and let someone on the other side of the world watch my car statistics live as I blasted around a track?

Well, let's start with the F1 2017 game itself.  It can send a UDP stream to a specific address and port, or just broadcast the stream on a subnet on a specific port.  The trick is to latch on to that stream, and either store it or, preferably, send it on to another display in real time.

The question was, what technology could I use to grab this UDP feed?  Well, I have recently been dabbling with a new language called Crystal.  It is very similar to Ruby, which I have been using on all my web apps in the past few years, however instead of being an interpreted language, it is compiled, which gives it blazing speed.

Speed is the key here (and not only on the track).  The UDP data is transmitted at anywhere from 20 to 60Hz, so a typical 90 second race lap could see roughly 1,800 to 5,400 packets of data sent across.

I decided that I would need to do two things - capture that stream of data into a database for later historical reporting, AND parse and send the data along to any web browsers that were listening, which meant using a persistent connection system like Websockets.  The other bonus is that Crystal's Websocket support is top class too!

So what I did was write a small (about 150 line) Crystal app to do this.  I ended up using the Kemal framework for Crystal, because I needed to build out some fancy display screens, and Kemal brings all the MVC goodies to the Crystal language.

Straight away, I came across the first problem with trying to consume a constant stream of telemetry data.  Codemasters sends the data as a packet of around 70 Float numbers.  Luckily, they document what the numbers mean on their forums, but I had to first consume the packet, then parse it to extract the bits of data I needed (i.e. the current gear selected, the engine revs, the brake temperatures for each of the 4 tyres, etc.), then store that information in RethinkDB (one of my favourite NoSQL systems out there today), and THEN send the parsed packet data to any listening web browser with an active websocket connection.  Whew.

But really, the actual core code for that took only about 20 lines (excluding the parsing of the 70-odd parameters).  How could I do this effectively?  Well, Crystal has a concept of lightweight concurrency - multiple fibers, to use their terminology.  I would simply consume the incoming UDP packets on one fiber, then spawn another fiber to do the parsing, saving and handing off of the data to the websocket!  It worked beautifully.

Here is a shortened version of the core code that does this bit:

require "kemal"
require "rethinkdb"   # shards: kemal + the Crystal RethinkDB driver
include RethinkDB::Shorthand

SOCKETS = [] of HTTP::WebSocket
raw_data = Bytes.new(1289)   # buffer for one telemetry packet (size assumed from Codemasters' spec)
telemetry_data = Hash(String, Float64).new

# fire up the UDP listener
puts "UDP Server listening..."
server = UDPSocket.new
server.bind "0.0.0.0", 27003
udp_active = false

# now connect to rethinkdb
puts "Connecting to RethinkDB..."
conn = r.connect(host: "localhost")

# re-interpret the 4 little endian bytes at (offset * 4) as a Float32
def convert_data(raw_data, offset)
  pos = offset * 4
  slice = {raw_data[pos].to_u8, raw_data[pos + 1].to_u8, raw_data[pos + 2].to_u8, raw_data[pos + 3].to_u8}
  return pointerof(slice).as(Float32*).value.to_f64
end

ws "/telemetry" do |socket|
  # Add this socket to the array
  SOCKETS << socket
  # clear out any old data collected in the UDP stream
  puts "Socket server opening..."
  udp_active = true

  socket.on_close do
    puts "Socket closing..."
    SOCKETS.delete socket
    # Stop receiving the UDP stream when the last socket closes
    udp_active = false if SOCKETS.empty?
  end

  spawn do
    while udp_active
      bytes_read, client_addr = server.receive(raw_data)
      telemetry_data["m_time"] = convert_data(raw_data, 0)
      telemetry_data["m_lapTime"] = convert_data(raw_data, 1)
      telemetry_data["m_lapDistance"] = convert_data(raw_data, 2)
      telemetry_data["m_totalDistance"] = convert_data(raw_data, 3)
      telemetry_data["m_last_lap_time"] = convert_data(raw_data, 62)
      telemetry_data["m_max_rpm"] = convert_data(raw_data, 63)
      telemetry_data["m_idle_rpm"] = convert_data(raw_data, 64)
      telemetry_data["m_max_gears"] = convert_data(raw_data, 65)
      telemetry_data["m_sessionType"] = convert_data(raw_data, 66)
      telemetry_data["m_drsAllowed"] = convert_data(raw_data, 67)
      telemetry_data["m_track_number"] = convert_data(raw_data, 68)
      telemetry_data["m_vehicleFIAFlags"] = convert_data(raw_data, 69)
      # store the packet for later historical reporting (table name illustrative)
      r.table("telemetry").insert(telemetry_data).run(conn)
      # then push the parsed packet out to every connected browser
      xmit = telemetry_data.to_json
      begin
        SOCKETS.each { |thesocket| thesocket.send xmit }
      rescue
        puts "Socket send error!"
      end
    end
  end
end

Kemal.run


NOTE: Port 27003 for the UDP listening port.  27 was the late, great Ayrton Senna's racing number, and he won 003 World Driver's Championships in his time!

That is really the core of the system.  The first few lines set up a UDP listener, plus the connection to RethinkDB.  Then there is a short routine I define which converts the incoming little-endian Float32 values into the native Float64 values Crystal works with.  Then there is the Websocket listener, which grabs the incoming packets and spawns a fiber to process each one as it comes in.
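
If the pointer trick looks opaque, here is the equivalent conversion in Ruby - shown purely to illustrate the byte layout (the real app is the Crystal code above): each telemetry field is a 32-bit little-endian float sitting at byte offset `offset * 4`.

```ruby
# Each telemetry field is a 32-bit little-endian float, 4 bytes apart.
def convert_data(raw_data, offset)
  raw_data.byteslice(offset * 4, 4).unpack1("e").to_f  # "e" = little-endian Float32
end

# Fake two-field packet to demonstrate (values chosen to be exact in Float32):
packet = [13.5, 92.25].pack("e*")
convert_data(packet, 0)  # => 13.5
convert_data(packet, 1)  # => 92.25
```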

The rest of the system is a pretty basic Bootstrap-based web site with 3 pages.  Oh yeah - Crystal serves up these web pages as well, with sections customised via ECR templates.  Not bad for a single executable that is only around 2MB when compiled!

There is a Live page which uses a Websocket listener to stream the live data to various realtime moving FLOT graphs, as well as the car position on a track map:


Then there is a historical data page which allows the engineer to plot race data lap by lap for an already run race:

F1 Historic Telemetry.png

Then a Timing page which shows lap times extracted from the data stream:

F1 Lap Times.png

No space or time to go into those parts in detail here, so I might save those for another blog post.

My main intent with this project was to learn Crystal, and to see if I could build a robust and fast Websocket server.  Mission accomplished.

I must say I had great fun with this system - I had my son play the game on our PS4 while I watched him from a web browser on my iMac, in my office on a different floor of the house altogether.  I could even tell when he struggled on certain parts of the track (the game sends car position data in real time too), and I could see when he was over-revving his engine or cooking his brakes trying to pass another car.  This was a 10/10 as far as fun projects go, no matter how impractical.


Building a face recognition app in under an hour

Over the weekend, I was flicking through my Amazon AWS console, and I noticed a new service on there called 'Rekognition'.  I guess it was the mangled spelling that caught my attention, and I wondered what this service was.  Amazon has a habit of adding new services to their platform with alarming regularity, and this one had slipped past my radar somehow.

So I dived in and checked it out, and it turns out that in late 2016, Amazon released their own image recognition engine on their platform.  It not only does facial recognition, but general photo object identification too.  It is still fairly new, so the details are sketchy, but I was immediately excited to try it out.  Long story short: within an hour, I had knocked up a quick sample web page that could grab photos from my PC camera and perform basic facial recognition on them.  Want to know how to do the same? Read on...

I had dabbled in facial recognition technology before, using third party libraries, along with the Microsoft Face API, but the effort of putting together even a rudimentary prototype was fraught with complexity and a steep learning curve.  But while browsing the Rekognition docs (thin as they are), I realised that the AWS API was actually quite simple to use, while seemingly quite powerful.  I couldn't wait, and decided to jump in feet first to knock up a quick prototype.

The Objective

I wanted a 'quick and dirty' single web page that would allow me to grab a photo using my iMac camera, and perform some basic recognition on the photo - basically, I wanted to identify the user sitting in front of the camera.

The Amazon Rekognition service allows you to create one or more collections.  A collection is simply a, well, collection of facial vectors for sample photos that you tell it to save.  NOTE: The service doesn't store the actual photos, but a JSON representation of measurements obtained from a reference photo.

Once you have a collection on Amazon, you can then take a subject photo and have it compare the features of the subject to its reference collection, and return the closest match.  Sounds simple, doesn't it?  And it is.  To be honest, coding the front end of this web page to get the camera data actually took longer than the back end to perform the recognition - by a factor of 3 to 1!

So, in short, the web page lets you (1) create or delete a collection of facial data on Amazon, (2) upload face data via a captured photo to your collection, and (3) compare new photos to the existing collection to find a match.

Oh, and as a tricky extra (4), I also added in the Amazon Polly service to this demo so that after recognising a photo, the page will broadcast a verbal, customised greeting to the person named in the photo!
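In case it helps to see the shape of those calls: here is a sketch of the request parameters each operation passes to the Ruby aws-sdk (Aws::Rekognition::Client).  The helper names and the 70% match threshold are my own illustrative choices; 'faceapp_test' is the default collection name used later in this post.

```ruby
COLLECTION_ID = 'faceapp_test'.freeze  # default collection name in this project

# Params for Aws::Rekognition::Client#create_collection - step (1).
def create_collection_params
  { collection_id: COLLECTION_ID }
end

# Params for #index_faces - step (2).  'image_bytes' is the raw JPEG data
# POSTed from the browser; 'name' becomes the external_image_id returned
# when a later search finds a match.
def index_faces_params(image_bytes, name)
  { collection_id: COLLECTION_ID,
    image: { bytes: image_bytes },
    external_image_id: name }
end

# Params for #search_faces_by_image - step (3).  The threshold is an
# illustrative value, not something the project mandates.
def search_params(image_bytes)
  { collection_id: COLLECTION_ID,
    image: { bytes: image_bytes },
    max_faces: 1,
    face_match_threshold: 70 }
end

# In the Sinatra endpoints these hashes go straight to the client, e.g.:
#   rekognition = Aws::Rekognition::Client.new(region: 'us-east-1')
#   resp  = rekognition.search_faces_by_image(search_params(photo_bytes))
#   match = resp.face_matches.first
#   match.face.external_image_id if match  # the name we stored earlier
```

Note how the image goes in as raw bytes rather than an S3 object reference - more on that below.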

The Front End

My first question was what library to use to capture the image using my iMac camera.  After a quick Google search, I found the amazing JPEG Camera library on GitHub by amw, which allows you to use a standard HTML5 canvas to perform the capture, or fallback to a Flash widget for older browsers.  I quickly grabbed the library, and modified the example javascript file for my needs.

The Back End

For the back end, I knocked up a quick Sinatra project - a lightweight Ruby-based framework that could do all the heavy lifting with AWS.  I actually use Sinatra (well, Padrino, actually) extensively to build all my web apps, and highly recommend the platform.

Note: The Amazon Rekognition examples actually promote uploading the source photos used in their API to an Amazon S3 bucket first, then processing them.  I wanted to avoid this double step and send the image data directly to their API instead, which I managed to do.

I also managed to do a similar thing with their Polly greeting.  Instead of saving the audio to an MP3 file and playing that, I managed to encode the MP3 data directly into an <audio> tag on the page and play it from there!
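The trick for the audio side is a base64 data URI.  A sketch, assuming the raw MP3 stream from Polly is already in hand (the voice_id below is an illustrative choice):

```ruby
require 'base64'

# Turn the raw MP3 stream returned by Polly into a data URI that can be
# dropped straight into the src of the page's <audio> tag - no temp file.
def audio_data_uri(mp3_bytes)
  "data:audio/mpeg;base64,#{Base64.strict_encode64(mp3_bytes)}"
end

# On the Polly side (sketch - needs the aws-sdk gem and credentials):
#   polly = Aws::Polly::Client.new(region: 'us-east-1')
#   resp  = polly.synthesize_speech(text: greeting,
#                                   output_format: 'mp3',
#                                   voice_id: 'Joanna')
#   uri   = audio_data_uri(resp.audio_stream.read)
# The front end then sets the <audio> element's src to uri and calls .play().
```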

The Code

I have placed all the code for this project on my GitHub page.  Feel free to grab it, fork it and improve it as you like.  I will endeavour to explain the code in more detail here.

The Steps

First things first, you will need an Amazon AWS account.  I won't go into the details of setting that up here, because there are many articles you can find on Google for doing so.

Creating an AWS IAM User

But once you are set up on AWS, the first thing we need to do is create an Amazon IAM (Identity & Access Management) user which has permission to use the Rekognition service.  We will also set up permissions for Amazon's Polly service, because once I got started on these new services, I could not stop.

In the Amazon console, click on 'Services' in the top left corner, then choose 'IAM' from the vast list of Amazon services.  Then, on the left hand side menu, click on 'Users'.  This should show you a list of existing IAM users that you have created on the console, if you have done so in the past.

Click on the 'Add User' blue button on the top of this list to add a new IAM user.

Give the user a recognisable name (more for your own reference), and make sure you tick 'Programmatic Access', as you will be using this IAM user in an API call.

Next is the permissions settings.  Make sure you click the THIRD box on the screen, that says 'Attach existing policies directly'.  Then, on the 'Filter: Policy Type' search box below that, type in 'rekognition' (note the Amazonian spelling) to filter only the Rekognition policies. Choose 'AmazonRekognitionFullAccess' from the list by placing a check mark next to it.

Next, change the search filter to 'polly', and place a check mark next to 'AmazonPollyFullAccess'.

Nearly there.  We now have full permission for this IAM for Amazon Rekognition and Amazon Polly.  Click on 'Next: Review' on the bottom right.

On the review page, you should see 2 Managed Policies giving you full access to Rekognition and Polly.  If you don't, go back and re-select the policies again as per the previous step.  If you do, then click 'Create User' on the bottom right.

Now this page is IMPORTANT.  Make a note of the AWS Key and Secret that you are given on this page, as we will need to incorporate them into our application below.

This is the ONLY time that you will be shown the key/secret for this user, so please copy and paste the info somewhere safe, and download the CSV file from this page with the information in it and keep it safe as well.

Download the Code

The next step is to download the sample code from my GitHub page so you can modify it as necessary.  Go to this link and either download the code as a ZIP file, or perform a 'git clone' to clone it to your working folder.

First thing you need to do is to create a file called '.env' in your working folder, and enter these two lines, substituting your Amazon IAM Key and Secret in there (Note: These are NOT real key details below):

export AWS_KEY=A1B2C3D4E5J6K7L10
export AWS_SECRET=T/9rt344Ur+ln89we3552H5uKp901

You can also just run these two lines in your command shell (Linux and OSX) to set them as environment variables that the app can use.  Windows users can run them too - just replace the 'export' prefix with 'set'.
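If you'd rather not rely on your shell, the .env file can also be parsed in a few lines of Ruby - a sketch, assuming the simple "export KEY=VALUE" format shown above (the real project may well use the dotenv gem instead):

```ruby
# Minimal .env loader: reads lines like "export AWS_KEY=..." and merges
# them into a hash (pass ENV as the second argument in the real app).
def load_env(text, env = {})
  text.each_line do |line|
    line = line.sub(/\Aexport\s+/, '').strip
    next if line.empty? || line.start_with?('#')
    key, value = line.split('=', 2)
    env[key] = value if key && value
  end
  env
end

vars = load_env("export AWS_KEY=A1B2C3\nexport AWS_SECRET=T/9rt344\n")
# vars => { "AWS_KEY" => "A1B2C3", "AWS_SECRET" => "T/9rt344" }
```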

Now, if you have Ruby installed on your system (Note: no need for the full Ruby on Rails stack - the basic Ruby language is all you need), you can run

bundle install

to install all the pre-requisites (Sinatra etc.), then you can type

ruby faceapp.rb

to actually run the app.  This should start up a web server on port 4567, so you can fire up your browser and go to http://localhost:4567 to see the web page and begin testing.

Using the App

The web page itself is fairly simple.  You should see a live streaming image on the top center, which is the feed from your on board camera.

The first thing you will need to do is to create a collection by clicking the link at the very bottom left of the page.  This will create an empty collection on Amazon's servers to hold your image data.  Note that the default name for this collection is 'faceapp_test', but you can change that on the faceapp.rb ruby code (line 17).

Then, to begin adding faces to your collection, ask several people to sit down in front of your PC or tablet/phone, and make sure theirs is the ONLY face in the photo frame (multiple faces will make the scan fail).  Once ready, enter their name in the text input box and click the 'Add to collection' button.  You should see a message that their facial data has been added to the database.

Once you have built up several faces in your database, then you can get random people to sit down in front of the camera and click on 'Compare image'.  Hopefully for people who have been already added to the collection, you should get back their name on screen, as well as a verbal greeting personalised to their name.

Please note that the usual way for Amazon Rekognition to work is to upload the JPEG/PNG photo to an Amazon S3 Bucket, then run the processing from there, but I wanted to bypass that double step and actually send the photo data directly to Rekognition as a Base64 encoded byte stream.  Fortunately, the aws-sdk for Ruby allows you to do both methods.

Let's walk through the code now.

First of all, let's take a look at the web page's raw HTML itself.

This is a really simple page that should be self explanatory to anyone familiar with HTML creation.  Just a series of named divs, as well as buttons and links.  Note that we are using jQuery, and also Moment.js for the custom greeting.  Of note is the faceapp.js code, which does all the tricky stuff, and the links to the JPEG Camera library.

You may also notice the <audio> tags at the bottom of the file, and you may ask what this is all about - well, this is going to be the placeholder for the audio greeting we send to the user (see below).

Let's break down the main app js file.

This sets up the JPEG Camera library to show the camera feed on screen, and process the upload of the images.

The add_to_collection() function is straightforward, in that it takes the captured image from the camera, then does a post to the /upload endpoint along with the user's name as the parameter.  The function will check that you have actually entered a name or it will not continue, as you need a short name as a unique identifier for this facial data.

The upload function simply checks that the call to /upload finished cleanly, and either displays a success message or the error if it doesn't.

The compare_image() function is what gets called when you click the, well, 'Compare image' button.  It simply grabs a frame from the camera, and POSTs the photo data to the /compare endpoint.  This endpoint will return either an error, or else a JSON structure containing the id (name) of the found face, as well as the percentage confidence.

If there is a successful face match, the function will then go ahead and send the name of the found face to the /speech endpoint.  This endpoint calls the Amazon Polly service to convert the custom greeting to an MP3 file that can be played back to the user.

The Amazon Polly service returns the greeting as a binary MP3 stream, so we take this IO stream, Base64-encode it, and place it as an encoded source link in the <audio> placeholder tags on our web page.  We can then call .play() on the element to play the MP3 through the user's speakers via the browser's HTML5 audio support.

This is also the first time I have placed encoded data in the audio src attribute, rather than a link to a physical MP3 file, and I am glad to report that it worked a treat!

Lastly on the app js file is the greetingTime() function.  All this does is work out whether to say 'good morning/afternoon/evening' depending on the user's time of day.  A lot of code for something so simple, but I wanted the custom greeting they hear to be tailored to their time of day.
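For illustration, the same time-of-day logic boils down to this in Ruby (the real greetingTime() lives in faceapp.js and uses Moment.js; the split points at noon and 5pm are the usual convention, not something the project mandates):

```ruby
# Pick a greeting from the hour of day (0-23).
def greeting_for(hour)
  case hour
  when 0..11  then 'Good morning'
  when 12..16 then 'Good afternoon'
  else             'Good evening'
  end
end

# The custom phrase handed to Polly for speech synthesis.
def spoken_greeting(name, hour = Time.now.hour)
  "#{greeting_for(hour)}, #{name}!"
end
```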

Finally, let's look at the Ruby code for the Sinatra app.

Pretty straightforward Sinatra stuff here.  The top is just the requires that we need for the various AWS SDK and other libraries.

Then there is a block setting up the AWS authentication configuration, and the default collection name that we will be using (which you can feel free to change).

Then, the rest of the code is simply the endpoints that Sinatra will listen out for.  It listens for a GET on '/' in order to display the actual web page to the end user, and it also listens out for POST calls to /upload, /compare and /speech which the javascript file above posts data to.  Only about 3 or 4 lines of code for each of these endpoints to actually carry out the facial recognition and speech tasks, all documented in the AWS SDK documentation.

That's about all that I can think of to share at this point.  Please have fun with the project, and let me know what you end up building with it.  Personally, I am using this project as a starting block for some amazing new features that I would love to have in our main web app HR Partner.

Good Luck, and enjoy your facial recognition/speech synthesis journey.





TopHN - A fun side project built with Vue.js and RethinkDB

TopHN running in a side window so I can see news bubbling up and down in real time while I work away... (Yes, what you see is some actual code from the project - don't laugh!).

Over the past couple of years, I have tried to push my ageing brain constantly, and one of the best ways I've found to do that is to try and learn a new programming language, framework or methodology every month or so, just to keep the skills sharp.

I've always had a love/hate relationship with NoSQL databases, having cut my teeth for many decades on pure SQL systems, so I wanted to get my hands dirty with that.  I've also struggled a little bit to get to grips with Javascript front end frameworks, and wanted to improve my skill sets in that area.

So this past weekend, I decided to get 'down and dirty' with Vue.js as well as RethinkDB.  There is a lot of good natured banter amongst programmers about React vs Vue vs Angular etc. and I wanted to see for myself which one would suit my programming style better.  I had already done a lot of work in Angular v1 with my mobile app development (using Cordova and Ionic), and wanted to see if Angular v2 and the other frameworks I mentioned would be an easy transition.

Long story short, I had a bit of trouble getting my head around Angular v2, as well as React.  At the end of the day, Vue.js just seemed more natural, and possibly closer to Angular v1 to me, and I found myself being able to understand concepts and start knocking together a basic app within short order.

RethinkDB has also been in the news lately, with their parent company shutting down, although the database itself looks like it will live on as open source.  I've always liked the look of the RethinkDB management console, as well as the ease of installation on various platforms, so I decided to install it on my development Mac and give it a go.

The Project

The big question is - what to build?  I wanted to build something actually useful, instead of just another throwaway project.  Then, one day last week while I was browsing around Hacker News, it hit me.

Now, I love browsing Hacker News, and catching up with the latest tech articles, but one of the things that I found myself repeatedly doing was (a) refreshing the main 'Top News' screen every few minutes to see what people were talking about, and what had made its way to the Top 30, and (b) checking the messages that I had personally posted recently, to see if there were any replies to them, and (c) constantly checking my Karma balance on the top of the screen to see if there had been a mass of up or downvotes to anything I had posted.

These three things seemed to be my primary activities on the site (apart from reading articles), so I decided to see if I could build a little side project to make it easier.  So TopHN was born!

What is TopHN in a nutshell? Well, it is basically a real time display of top news activity on your web screen.  To be fair, there are already a LOT of other Hacker News real time feeds available out there, many of which are far better than mine - but I wanted my solution to be very specific.  Most of the others display comments and other details, but I wanted my solution to be just a 'dashboard' style view of the top, important stuff that was relevant to me (and hopefully most other users too).

First things first, I decided to take a look at the HackerNews API.  I was excited to see that this was based on Google's Firebase.  I had used Firebase in a couple of mobile programming jobs 2 years ago, and really loved the asynchronous 'push' system they used to publish changes.  I debated whether to use the Firebase feed directly, but decided against it - since I was going to be doing some other manipulation and polling of the data, I didn't want to clutter up the Firebase feed with extra poll requests, and instead would try to replicate the HN data set in RethinkDB.

So I went ahead and set up a dedicated RethinkDB server in the cloud.  This was a piece of cake following their instructions.  On the same server, I built a small Node.js app (only about 30 lines of code), whose sole purpose was to listen to the HN API feed from Firebase, grab the current data and save a snapshot of it in my RethinkDB database.

Hacker News actually publishes some really cool feeds - every 30 seconds or so, a list of the top 500 articles is pushed out to the world as a JSON string.  Also, they have a dedicated feed which pushes out a list of changes made every 20 to 30 seconds.  This includes a list of article and comment ids that have been changed or entered in their system, as well as the user ids of any users who had changed their status (i.e. made profile changes, or had their karma increased/decreased by someone, or posted a comment etc.).

I decided to use these two feeds as the basis for building my replicated data set.  Every time the 'Top 500' feed was pushed out, I would grab the ids of the articles, have a quick look in RethinkDB to see if they already existed, and if they didn't, I would go and ask for the missing articles individually and plop those in RethinkDB.  After a few days of doing this, I ended up with tens of thousands of articles in my database.
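That replication step is essentially a set difference - sketched here in Ruby for illustration (the actual worker is the ~30 lines of Node.js mentioned above):

```ruby
require 'set'

# Given the Top 500 ids pushed by the HN feed and the ids already stored
# in RethinkDB, work out which articles still need to be fetched.
def missing_ids(top_ids, stored_ids)
  stored = stored_ids.to_set          # O(1) membership checks
  top_ids.reject { |id| stored.include?(id) }
end

# Each missing id would then be fetched individually from the HN API's
# /v0/item/<id>.json endpoint and inserted into RethinkDB.
```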

I would also sniff out the 'changes' feed, and scan the articles in there to see if I already had them, and copy them if not.  Same with the users.  Every time a user was mentioned in the 'changes' feed, I would grab their updated profile and save in RethinkDB.

The screenshot above shows the RethinkDB management console, a really cool tool for checking server performance, as well as testing queries and managing data tables and shards.

So far so good.  The replicated database was filling up with data every few seconds.  Now, the question was - What to do with it?

I was excited to see that RethinkDB also had a 'changes()' feature, which would publish data changes as they happened.  But unlike the Firebase tools, these weren't client side only tools, and needed some sort of server platform to engage the features.  So what I decided on, was to use another Node.js app as the server back end, and use Vue.js as the front end for the interface elements.

I would also need to build a connection between the two using sockets.  I was a bit disappointed that there didn't seem to be any native way to push/pull the changes from server to client without them, but hey - we are all about learning new things, and building a socket driven app was certainly something I hadn't done before (at least not from scratch).

So, at the end of the day, this second Node.js app would sit on a different server and wait for a user to visit the site.  Now, users can do a couple of things.  They can simply visit the top level URL of the site, and just see the Top 30 feed in real time.  And I mean nearly real time.  As new articles are published, or they move up and down the Top 30, the page view will bubble them up and down and show the latest scores and comment counters.

If the user elects to enter their HN username, the page will additionally display the user's Karma balance in real time, along with a notation for how much it has changed in the last couple of minutes.  Nothing like vanity metrics to keep people excited!
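That change notation is just a tiny diff between successive karma readings - sketched here in Ruby for illustration (the real app computes this in Node.js when it pushes an update):

```ruby
# Compare the karma from the latest user update against the previous
# reading and format the change for display ('' means no movement).
def karma_delta(previous, current)
  diff = current - previous
  return '' if diff.zero?
  diff.positive? ? "+#{diff}" : diff.to_s
end
```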

Also, if their username is entered, the page will show their last 10 or so comments and stories they published, so they can keep an eye on any responses to comments etc.

The second Node.js server is essentially a push/pull server.  It will silently push Top 30 list changes to all web browsers connected to it.  AND it will also set up a custom push event handler for any browsers where the user has specified their username.  As you can expect, this takes a bit of management, and server resources, so I hope I never get to experience the HackerNews 'hug of death' where a bunch of people log on at the same time, because I am not really sure how far this will scale before it comes to a screaming halt.

The Vue.js components purely sit there and listen for JSON data packets from the server pushes, and then format them accordingly and display them on the web page without having to refresh.

I haven't gone into the nutty details of how I built this on here, but if there is any interest and I get lots of requests, then I am open to publishing some code snippets and going into deeper detail of how I built the various components.

All in all, I am pretty happy with what amounted to around 4 or 5 days of part time coding.  I think this is a useful tool, and as you can see from the header image, I tend to have a narrow Chrome window open off to the side so I can keep an eye on news happenings and watch them bubble up and down.  The web page is also totally responsive, and should work on most mobile browsers for portability.

Are you a Hacker News member? Why not check it out and let me know what you think?


Building an IoT system using the Onion Omega and Amazon AWS

As well as being a programmer, I am a mad keen guitarist, and over the years, I have built up a sizeable collection of guitars of all types and models.  One thing about guitars though (acoustic guitars in particular), is that they are quite sensitive to environmental conditions such as temperature and humidity.

Similar to people, guitars like to be kept at a relatively cool temperature and somewhere not too dry or damp.  Seeing as I live in the tropics, this can be a challenge at times, which is why I try and keep my guitars in my home office, which is secure, as well as air conditioned most of the time.

However, air conditioning is not perfect, and sometimes things like a power failure or someone leaving a window ajar can affect the overall climate of the room.  Because I often travel for work and am away from the home office for days at a time, I'd like to keep an eye on any anomalies, so I can advise another family member at home to check or rectify the situation.

What better way than to try and use my programming skills to (a) learn some new skills, and (b) do some experimenting with this whole IoT (internet of things) buzz.  Please note that my normal programming work involves business and enterprise type databases and reporting tools, so programming hardware devices is a new thing for me.

The end result is that I wanted a web page that I could access from ANYWHERE in the world, which would give me real time stats as to the temperature and humidity variations in the guitar room throughout a 24 hour period.

Please bear in mind, I am going to try and document ALL the steps I took to build this system, so this blog post is VERY long, but hopefully will serve as a guide for someone else who wants to build something similar.

The steps I will be going through here are:

1. Setting up the Onion Omega to work with my PC
2. Hooking up the DHT22 temperature and humidity sensor to my Onion
3. Installing all the requisite software on the Onion to be able to do what I want
4. Setting up Amazon IoT so that the Onion can be a 'thing' on the Amazon IoT cloud
5. Setting up a DynamoDB database on Amazon AWS to store the temperature/humidity readings from the Onion
6. Setting up a web page to read the data from DynamoDB and present it as a chart.

Here is what the final chart will look like:

Hat tip: I used this blog post as inspiration for designing the dashboard and pulling data from DynamoDB.


The Hardware

Well, over a year ago I participated in the Onion Omega Kickstarter project.  I got one of these tiny little thumb sized Linux computers, but didn't quite know what to do with it, so it sat in its box for a long while until I decided to dust it off this week.

Connecting the Onion up to its programming board, I hooked it up to a USB cable from my iMac.  In order to get communications happening, I had to download and install a USB to UART driver from here:

Full instructions on connecting the Onion Omega to your Mac are on their Wiki page:

Once I had connected the two devices, I was able to issue the command

screen /dev/tty.SLAB_USBtoUART 115200 

from a Terminal screen to connect to the device.  Yay!

First thing I had to do was to set up the WiFi so that I could access the device over my local home office WiFi network.  That was a simple case of issuing the command


It is a simple step by step program that asks you for your WiFi access point name and security key.  Once again, the Wiki link above explains it in more detail.

Once the WiFi is set up on the Onion, you can then access it via its IP address in a web browser - for me, it was just a matter of entering my device's address in Chrome.  Once logged in (the default username is 'root' and password 'onioneer'), you get to see this:

First things first, because my device was so old, I had to go to 'Settings' and run a Firmware Update.

I also dug out an old DHT22 sensor unit which I played around with when I dabbled in Arduino projects a while back.  I wondered if I could pair the DHT22 with the Onion device, and lo and behold, a quick search on the Onion forums showed that this had been done before, quite easily.  Here is a blog post detailing how to hook up the DHT22 to the Onion:

The article shows you how to wire the two devices together using only 3 wires. In short, the wiring is as follows on my unit:

Pin 1 from the DHT22 goes to the 5V plug on the Onion Omega
Pin 2 from the DHT22 goes to GPIO port 6 on my Onion
Pin 3 is unused on the DHT22
Pin 4 from the DHT22 goes to the GND (Ground) plug on the Onion

The Software

Now we come to all the software that we will need to be able to collect the data, and send it along to Amazon.  In short, we will be writing all our code in Node.js.  But we will also be calling some command line utilities to (a) read the data from the HDT22 and (b) send it to the Amazon IoT cloud.

To collect the data, we will be using an app called 'checkHumidity', which is detailed on the page above about setting up the DHT22.  To talk to the Amazon IoT cloud, we need to use the MQTT protocol.  To do this, we will be using an app called 'mosquitto', which is a nice, neat MQTT wrapper.  We could use HTTPS, but MQTT just seemed more efficient and I wanted to experiment with it.

So let's go through these steps for installation.  All the packages are fairly small, so they won't take up much room in the 16MB storage on the Onion.  I think my Onion still has about 2MB left after all the installs.  Here goes (from the Onion command line):

(1) Install the checkHumidity app and set the permissions for running it.  checkHumidity is so much cleaner than trying to read the pins on the Onion in Node.js.  Running it returns the temperature (in degrees Celsius) and the humidity (as a percentage) in a text response.

opkg update
opkg install wget
cd /root
tar -zxvf 1450434316215-checkhumidity.tar.gz
chmod -R 755 /root/checkHumidity/bin/checkHumidity

If your DHT22 is connected to pin 6 like my board, try it out:

/root/checkHumidity/bin/checkHumidity 6 DHT22

Showing me 29.6 degrees C with 49.301% humidity!
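Our Node.js app will later shell out to this binary and pull the two numbers out of its text response.  A sketch of that parsing step, in Ruby for illustration - I'm assuming nothing about the exact output layout beyond temperature appearing before humidity, so the regex just grabs the first two decimal numbers it finds:

```ruby
# Extract { temperature:, humidity: } from checkHumidity's text output.
def parse_readings(output)
  nums = output.scan(/-?\d+(?:\.\d+)?/).first(2).map(&:to_f)
  { temperature: nums[0], humidity: nums[1] }
end

parse_readings("29.6\n49.301")
# => { temperature: 29.6, humidity: 49.301 }
```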

(2) Install Node.js on the Onion.  From here on in, we will be using the opkg manager to install:

opkg install nodejs

(3) I also installed nano because it is my favourite editor on Linux.  You can bypass this if you are happy with any other editor (Note: There is also an editor on the web interface, but I had some issues with saving on it):

opkg install nano

(4) Install the mosquitto app for MQTT conversations:

opkg install mosquitto
opkg install mosquitto-client

This installs the mosquitto broker and client.  We won't really be using the broker, mainly the client, but it is handy to have if you want to set up your Onion as an MQTT bridge later.

Amazon IoT

Ok, now we have almost everything prepped on the device itself, we need to set up a 'thing' on Amazon's IoT cloud to mimic the Onion.  The 'thing' you set up on Amazon acts as a cloud repository for information you want to store on your IoT device.  Amazon uses a concept of a 'shadow' for the 'thing' that can store the data.  That way, even if your physical 'thing' is powered off or offline, you can still send MQTT packets of data to the 'thing', and the data will be stored on the 'shadow' copy of the 'thing' in the cloud until the device comes back online, at which point Amazon can copy the 'shadow' data back to the physical device.

You see, our Node.js app will be pushing temperature and humidity data to the shadow copy of the 'thing' in the cloud.  From there, we can set up a rule on Amazon IoT to further push that data into a DynamoDB database.

Setting up the 'thing' on the cloud can be a little tricky.  Mainly due to the security.  Because the physical device will be working unattended and pretty much anonymously, authentication is carried out using security certificates.  Let's step through the creation of a 'thing'. (Note: This tutorial assumes you already have an AWS account set up).

From the Amazon Console, click on 'Services' on the top toolbar, then choose 'AWS IoT' under 'Internet Of Things'.

On the left hand menu, click on 'Registry', then 'Things'.

Your screen will probably be blank if you have never created a thing before.  Click on 'Create' way over on the top right hand side of your screen.

You will need to give your thing a name.  Call it anything you like.  I just used the unique name for my Onion Omega, which looks like Omega-XXXX.

Great!  Next, you will be taken to a screen showing all the information for your 'thing'.  Click on the 'Security' option on the left hand side.

Click on the 'Create Certificate' button.

You can now download all four certificates from this screen and store them in a safe place.

NOTE: DON'T FORGET to click on the link for 'A root CA for AWS IoT Download'.  This is the Root CA certificate that we will need later.  Store all 4 certificates in a safe place for now on your local hard drive.  Don't lose them or you will have to recreate the certificates again and re-attach policies etc.  Messy stuff.

Lastly, click on 'Activate' to activate your certificates and your thing.

Next, we have to attach a policy to this certificate.  There is a button marked 'Create Policy' on this security screen.  Click it, and you will see the next screen asking you to create a new policy.

We are going to create a simple policy that lets us perform any IoT action against any device.  This is rather all encompassing, and in a production environment, you may want to restrict the policy down a little, but for the sake of this exercise, we will enable all actions to all devices under this policy:

In the 'Action' field, enter 'iot:*' for all IoT actions, and in the 'Resource ARN' field, enter '*' for all devices and topics etc.  Don't forget to tick the 'Allow' box below, then click 'Create'.

You now have a thing, a set of security certificates for the thing, and a policy to control the certificates against the thing.  Hopefully the policy should be attached to the certificates that you just created.  If not, you will have to manually attach the policy to the certificates.  To do this, click on 'Security' on the left hand menu, then click on 'Certificates', then click on the certificate that you just created.

Click on the 'Policies' on the left hand side of the certificate screen.

If you see 'There are no policies attached to this certificate', then you need to attach it by clicking on the 'Actions' drop down on the top right, then choosing 'Attach Policy' from the drop down menu.

Simply tick the policy you want to attach to this certificate, then click 'Attach'.

You may want to now click on 'Things' on the left hand menu to ensure that the thing you created is attached to the certificate as well.

To ensure all your ducks are in a row:-

The 'thing' -> needs to have -> Security Certificate(s) -> needs to be attached to -> A Policy

Actually, there is one more factor that we want to note on here which is important for later.  Go ahead and click on the 'Registry' then 'Things' on the IoT dashboard.  Choose the thing you just created, and then click on the 'Interact' option on the left hand menu that pops up.

Notice under HTTPS, there is a REST API endpoint shown.  Copy this information down and keep it aside for now, because we will need it in our Node.js code later to specify which host we want to talk to.  This host address is unique for each Amazon IoT account, so keep it safe and under wraps.

Also note on this screen that there are some special Amazon IoT reserved topics that can be used to update or read the shadow copy of your IoT thing.  We won't really be using these in this project, but it is handy to know for more complex projects where you might have several devices talking to each other, and also devices that may go on and offline a lot.  The 'shadow' feature allows you to still 'talk' to those devices even though they are offline or unavailable, and lets them sync up later.  Very powerful stuff.

Next, we will take a break from the IoT section, and set up a DynamoDB table to collect the data from the Onion.


Amazon DynamoDB

Click on 'Services' then 'Dynamo DB' under 'Databases'.

Click on 'Create Table'.

Give the table a meaningful name.  Important: Give the partition key the name of 'id' and set it to a 'String' type.  Tick the box that says 'Add sort key' and give the key a name of 'timestamp' and set it to a 'Number' type.  This is very important, and you cannot change it later, so please ensure your setup looks like above.

Tip: Once you have created your DynamoDB table, copy down the "Amazon Resource Name (ARN)" on the bottom of the table information screen (circled in red above).  You will need this bit of information later when creating a security policy for reading data from this table to show on the web site chart.

Ok, now that you have a table being created, you can go back to the Amazon IoT Dashboard again for the next step ('Services' then 'AWS IoT' in your console top menu).  What we will do now is create a 'Rule' in IoT which will handball any data coming in on a certain topic across to DynamoDB to be stored in the table.

Tip: When you transmit data to an IoT thing using MQTT, you generally post the data to a 'topic'.  The topic can be anything you like.  Amazon IoT has some reserved topic names that do certain things, but you can post MQTT packets to any topic name you make up on the spot.  Your devices can also listen on a particular topic for data coming back from Amazon etc.  MQTT is really quite a nice, powerful and simple way to interact with IoT devices and servers.

In the IoT dashboard, click on 'Rules' on the left hand side, then click the 'Create' button.

The 'Name' can be something distinctive that you make up.  Add a 'Description' to help you remember what this rule does.  For the 'SQL Version', just choose '2016-03-23' which is the latest one at time of writing.

Below that, on 'Attribute', type in '*' because we will be selecting ALL fields sent to us.  In the 'Topic Filter', type in 'temp-humidity/+'.  This is the topic name that we will be listening out for.  You can call it anything you like.  We include a '/+' at the end of the topic name because we can add extra data after this, and we want the query to treat this extra data as a 'wildcard' and still select it. (Note: We will be adding the device name to the end of the topic as an identifier (e.g. temp-humidity/Omega-XXXX).  This way, if we later have multiple temperature/humidity sensors, we can identify each one via a different topic suffix, but still get all the data from all sensors sent to DynamoDB).

ERRATA: The screenshot above shows 'temp-humidity' in the 'Topic Filter' field, but it should actually be 'temp-humidity/+'.
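To make the '+' wildcard concrete, here is a small sketch (not part of the project code, and ignoring MQTT's multi-level '#' wildcard) of the single-level matching rule it applies:

```javascript
// '+' matches exactly one topic level; every other level must match literally
function topicMatches(filter, topic) {
  var f = filter.split('/');
  var t = topic.split('/');
  if (f.length !== t.length) return false;
  for (var i = 0; i < f.length; i++) {
    if (f[i] !== '+' && f[i] !== t[i]) return false;
  }
  return true;
}

console.log(topicMatches('temp-humidity/+', 'temp-humidity/Omega-XXXX')); // true
console.log(topicMatches('temp-humidity/+', 'temp-humidity'));            // false - no suffix level
```

This is why the rule with 'temp-humidity/+' will pick up any device that publishes to 'temp-humidity/<its-own-name>'.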

Leave the 'Condition' blank.

Now below this, you will see an 'Add Action' button.  Click this, and choose 'Insert a message into a DynamoDB table'.

As you can see, there is a myriad of other things you can do, including forwarding the data to another IoT device.  But for now, we will just focus on writing the data and finishing there.  Click on the 'Configure Action' button at the bottom of the screen.

Choose the DynamoDB table we just created from the drop down 'Table Name'.  The 'Hash Key' should be 'id', of type 'STRING', and in the 'Hash Key Value', enter '${topic()}'.  It means we will be storing the topic name as the main key.

The 'Range Key' should be 'timestamp' with a type of 'NUMBER'.  The 'Range Key Value' should be '${timestamp()}'.  This will place the contents of the packet timestamp in this field.

Lastly, in the 'Write Message Data To This Column' field, I enter 'payload'.  This is the name of the data column that contains the object with the JSON data packet sent from the device.  You can call this column anything you like, but I like to call it 'payload' or 'iotdata' or similar so that I know all the packet information is stored under here.

One more thing to do: for security purposes, we have to set up an IAM role which will allow us to add data to the DynamoDB table.  This is actually quite easy to do from here.  Click the 'Create A New Role' button.

Give the role a meaningful name, then click 'Create A New Role'.  A new button will show up with the text next to it saying 'Give AWS IoT permission to send a message to the selected resource'.  Click on the 'Update Role' button.

Important: You must click the 'Update Role' button to set the privileges properly.  Once completed, click the 'Update' button.

That's it!  We are pretty much done as far as the Amazon IoT and DynamoDB setup goes.  It was quite a rigmarole, wasn't it?  Lots of steps that have to be done in a certain order.  But the good news is that once this is done, the rest of the project is quite easy, AND FUN!

Installing Certificates

Oh, wait - one more slightly tedious step.  Remember those 4 certificates we downloaded much earlier?  Now is the time to put them to good use (well, 3 out of the 4 at least).  We need to copy these certificates to the Onion.  I found it easiest to copy and paste the text contents of each certificate into the '/home/certs' folder on the Onion.  I simply used the web interface editor to create the files in the '/home/certs' folder and paste in the contents of the certificates I downloaded.  The three certificates I needed (and which I copied and renamed) are:

  • VeriSign-Class3-Public-Primary-Certification-Authority-G5.pem -> /home/certs/rootCA.pem
  • x1234abcd56ef-certificate.pem.crt -> /home/certs/certificate.pem
  • x1234abcd56ef-private.pem.key -> /home/certs/private.key

As you can see, I shortened the file names for ease of handling, and put them all into one folder for easy access from my Node.js app too.  That's it.  Once done, you don't have to muck about with certificates any more.

Exactly where you store the certificates or what you call them is not important; you just need to know the details later when writing the Node.js script.


Writing Code

Ok, back to the Omega Onion now, where we will write the code to grab information from the DHT22 and transmit it to Amazon IoT.  This is where the rubber hits the road.  Using nano, or the web editor on the Onion, create a file called '/home/app.js' and enter the following:

var util = require('util');
var spawn = require('child_process').spawn;
var execFile = require('child_process').execFile;

var mosqparam = [
  '--cafile', '/home/certs/rootCA.pem',
  '--cert', '/home/certs/certificate.pem',
  '--key', '/home/certs/private.key',
  '-h', '',      // your own Amazon IoT REST API endpoint host goes here
  '-p', '8883'
];

setInterval(function() {
  execFile('/root/checkHumidity/bin/checkHumidity', ['6', 'DHT22'], function(error, stdout, stderr) {
    var dataArray = stdout.split("\n");
    var logDate = new Date();
    var postData = {
      datetime: logDate.toISOString(),
      temperature: parseFloat(dataArray[1]),
      humidity: parseFloat(dataArray[0])
    };
    // publish to main data queue (for DynamoDB)
    execFile('mosquitto_pub', mosqparam.concat('-t', 'temp-humidity/Omega-XXXX', '-m', JSON.stringify(postData)), function(error, stdout, stderr) {
      // published
      // publish to device shadow
      var shadowPayload = {
        state: {
          desired: {
            datetime: logDate.toISOString(),
            temperature: parseFloat(dataArray[1]),
            humidity: parseFloat(dataArray[0])
          }
        }
      };
      execFile('mosquitto_pub', mosqparam.concat('-t', '$aws/things/Omega-XXXX/shadow/update', '-m', JSON.stringify(shadowPayload)), function(error, stdout, stderr) {
        // shadow update done
      });
    });
  });
}, 1000 * 60 * 5);


NOTE: I have obfuscated the name of the Omega device here, as well as the Amazon IoT host name for my own security.  You will need to ensure that the host name and device name correspond to your own setups above.

Let's go through this code section by section.  At the top are the 'require' statements for the Node.js modules we need.  Luckily no NPM installs are needed here, as the modules we want are part of the core Node.js install.

Then we define an array called 'mosqparam'.  These are the parameters that we need to pass to the mosquitto command line each time - mainly so it knows the MQTT host (-h) and port (-p) it will be talking to, and where to find the 3 certificates that we downloaded from Amazon IoT and copied across earlier.

Tip: If your application fails to run, it is almost certain that the certificate files either cannot be found, or else they have been corrupted during download or copying across to the Onion.  The mosquitto error messages are cryptic at best, and a certificate error doesn't always present obviously.  Take care with this bit.

After this is the meat of the code.  We are basically running a function within a javascript setInterval() function which fires once every five minutes.

What this function does is run an execFile() to execute the checkHumidity app that we downloaded and installed earlier.  It then takes the two lines that the app returns and splits them by the carriage return (\n) to form an array with two elements.  We then create a postData object which contains the temperature, the humidity, and the log time as an ISO8601 string.
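In isolation, the parsing step looks like this (the sample stdout string is made up, but follows the humidity-then-temperature line order described above):

```javascript
// checkHumidity prints humidity on the first line and temperature on the second
var stdout = "55.20\n24.50\n";   // made-up sample output from the utility
var dataArray = stdout.split("\n");
var humidity = parseFloat(dataArray[0]);
var temperature = parseFloat(dataArray[1]);
console.log(temperature, humidity);  // prints 24.5 55.2
```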

Then we transmit that postData object to Amazon IoT by calling execFile() on the 'mosquitto_pub' command that we also installed earlier as part of the mosquitto package.  mosquitto_pub basically stands for 'MQTT Publish', and it will send the message (-m) consisting of the postData object translated to JSON, to the topic (-t) 'temp-humidity/Omega-XXXX'.

That is really all we need to do, however, in the code above, I've done something else.  Straight after publishing the data packet to the 'temp-humidity/Omega-XXXX' topic, I did a second publish to the '$aws/things/Omega-XXXX/shadow/update' topic as well, with essentially the same data, but with some extra object wrappers around it in shadowPayload.

Why did I do this?  Well, the '$aws/things/Omega-XXXX/shadow/update' topic is actually a special Amazon IoT topic which stores the data packet within the 'shadow' copy of the Omega-XXXX thing in the cloud.  That means that later on, I can use another software system from anywhere in the world to interrogate the Omega-XXXX shadow in the cloud to see what the latest data readings are.

If for any reason the Onion goes offline or the home internet goes down, I can interrogate the shadow copy to see what and when the last reading was.  I don't need to set this up, but for future plans I have, I thought it would be a good idea.

Enough talk - save the above file, and let's run the code:

cd /home
node app.js

You won't see anything on the screen, but in the background, every 5 minutes, the Omega Onion will read the sensor data and transmit it to Amazon IoT.  Hopefully it is working.

If it doesn't work, things to check are the location and validity of the certificate files.  Also check that your home or work firewall isn't blocking port 8883, which is the port MQTT uses to communicate with Amazon IoT.

Now ideally we want our Node.js app to run as a service on the Omega Onion.  That way, if the device reboots or loses power and comes back online, the app will auto start and keep logging data regardless.  Fortunately, this is easy as well.

Using nano, create a script file called /etc/init.d/iotapp and save the following in it:

#!/bin/sh /etc/rc.common
# Auto start iot app script

START=99   # rc.common needs a START order for 'enable' to work

start() {
    echo start
    service_start /usr/bin/node /home/app.js &
}

stop() {
    echo stop
    service_stop /usr/bin/node /home/app.js
}

restart() {
    stop
    start
}
Save the file, then make it executable:

chmod +x /etc/init.d/iotapp

Now register it to auto-run:

/etc/init.d/iotapp enable

Done.  The service should start at bootup, and you can start/stop it anytime from the command line via:

/etc/init.d/iotapp stop


/etc/init.d/iotapp start


If you go back to your DynamoDB dashboard, click on the table you created, you should be able to see the packet data being sent and updated every 5 or so minutes.

Also, if you go to the Amazon IoT dashboard and click on 'Registry' then 'Things' and then choose your IoT thing, then click on 'Activity', you should see a history of activity from the physical board to the online thing.  You can click on each activity line to show the data being sent.

Hopefully everything is working out for you here.  Feel free to adjust the setInterval() timing to one minute or so, just so you don't have to wait so long to see if data is being streamed.  In fact, tweak the interval setting to whatever you like to suit your own needs.  5 minutes may be too short a span for some, or it may be too long for others.  The value is in the very last line of the Node.js code:

    1000 (milliseconds) x 60 (seconds in a minute) x 5 (minutes)
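Written out as code, the arithmetic is:

```javascript
var fiveMinutes = 1000 * 60 * 5;  // ms per second x seconds per minute x 5 minutes
var oneMinute = 1000 * 60;        // a shorter interval, handy while testing
console.log(fiveMinutes, oneMinute);  // prints 300000 60000
```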


Set up the Website

Final stretch now.  Funny to think that all that hard work we did above is essentially invisible.  But this bit here is what we, as the end user, will see and interact with.

What we will do here is to set up a simple web site which will read the last 24 hours of data from our DynamoDB table we created above, and display it in a nice Chart.js line chart showing us the temperature and humidity plot over that time.  The web site itself is a simple Bootstrap/jQuery based one, with a single HTML file and a single .js file with our script to create the charts.

Since I am using Amazon for nearly everything else, I decided to use Amazon S3 to host my website.  You don't have to do this, but it is an incredibly cheap and effective way to quickly throw up a static site.

A bigger problem would be how to read DynamoDB data within a javascript code block on a web page.  Doing everything client side means that my Amazon credentials will have to be exposed on a publicly accessible platform - meaning anyone can grab it and use it in their own code.

Most knowledgebase articles I scanned suggested using Amazon's Cognito service 'Identity Pools' to set up authentication, but setting up identity pools is another long and painful process.  I was fatigued after doing all the above set up by now, so opted for the quick solution of setting up a 'throwaway' Amazon IAM user with just read only privileges on my DynamoDB data table.  This is not 'best practice', but I figured for a non critical app like this (I don't really care who can see the temperature setting in my guitar room - it's not like a private video or security feed) that it would do for what I needed.

Additionally, I have CloudWatch alarms set up on my DynamoDB tables so if I see excessively high read rates from nefarious users, I can easily revoke the IAM credentials or shut down the table access.


Amazon IAM

To set up a throwaway IAM, go to the 'Services' menu in your AWS console and choose 'IAM' under 'Security, Identity and Compliance'.

Click on the 'Users' option on the menu down the left, then click 'Create' to create a new IAM user:

Give the user any name you like, but ensure you tick the box saying 'Programmatic Access'.  Then click the 'Next: Permissions' button.

On the next screen, click on the third image at the top which says 'Attach existing policies directly'.  Then click on the button that says 'Create Policy'.

Note: This will open the Create Policy screen on a new browser tab.

On the Create Policy screen, click the 'Select' button on the LAST option, i.e. 'Create Your Own Policy'.

Enter in the policy details as below.  Ensure that the 'Resource' line contains the ARN of your DynamoDB table like we found out above.

Here is the policy that you can cut and paste into the editor yourself (after substituting your DynamoDB ARN in it).  The only action the dashboard script needs is 'dynamodb:Query', so that is all we allow:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyIoTDataTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:Query"
      ],
      "Resource": "<insert your DynamoDB ARN here>"
    }
  ]
}
Once done, click on 'Validate Policy' to ensure everything is OK, then click 'Create Policy'.

Now go back to the previous browser tab where you were creating the user, and click the 'Refresh' button.  You should now see the policy you just created in the list. (Hint: You can do a search on the policy name).  Tick it.

Click 'Next' to go to the review screen, then click 'Create User'.

Copy down the key and click on 'Show' to show the secret.  Copy both of these and keep them safely aside.  We will need them in our web site script below.

Ok, now let's set up the Amazon S3 bucket to host our website.


Amazon S3

Click on 'Service' on your AWS Console, then choose 'S3' under 'Storage'.  You should see a list of buckets if you have used S3 before.  Click on 'Create Bucket' on the top left to create a new bucket to host your website.

Give your bucket a meaningful name.

Tip: The bucket name will be part of the website address that you will need to type into your browser, so it helps to make it easy to remember and have it give a hint as to what it does.

Once the bucket is created, select it from the list of buckets by clicking on the name.  Your bucket is obviously empty for now.

Click on the 'Properties' button on the top right, then expand the 'Permissions' section.  You will see your own username as a full access user.

Click on the 'Add more permissions' button here, and choose 'Everyone' from the drop down, and tick the 'List' checkbox.  This will give all public users the ability to see the contents of this bucket (i.e. your web page).  Click on 'Save' to save these permissions.

Next, expand the section below that says 'Static Website Hosting'.

Click on the radio button which says 'Enable website hosting', and enter in 'index.html' in the 'Index Document' field.

Click 'Save'.

That is about it - this is the minimum required to set up a website on S3.  You can come back later to include an error page filename and set up logging etc., but this is all we need for now.

NOTE: Copy down the 'Endpoint' link on this page (circled in red).  This will be the website address you need to type into your browser bar later to get access to the web page we will be setting up.

Tip: You can use Amazon Route53 to set up a more user friendly name for your website, but we won't go into that in this already lengthy tutorial.  There are plenty of resources on Google which go into that in detail.

The Code

Now for the web site code itself.  Use your favourite editor to create this index.html file:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">

<title>Home Monitoring App</title>

<!-- Bootstrap core CSS -->
<link rel="stylesheet" href="">

<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src=""></script>
<script src=""></script>
<![endif]-->
</head>

<body>

<div class="container">
<br />
<div class="jumbotron text-center">
<h1>Temperature &amp; Humidity Dashboard</h1>
<p class="lead">Guitar Storage Room</p>
</div>

<div class="row">

<div class="col-md-6">

<canvas id="temperaturegraph" class="inner cover" width="500" height="320"></canvas>

<br />
<div class="panel panel-default">
<div class="panel-body">
<div class="row">
<div class="col-sm-3 text-right">
<span class="label label-danger">High</span>&nbsp;
</div>
<div class="col-sm-9">
<span id="t-high" class="text-muted">(n/a)</span>
</div>
</div>
<div class="row">
<div class="col-sm-3 text-right">
<span class="label label-success">Low</span>&nbsp;
</div>
<div class="col-sm-9">
<span id="t-low" class="text-muted">(n/a)</span>
</div>
</div>
</div>
</div>
</div>

<div class="col-md-6">

<canvas id="humiditygraph" class="inner cover" width="500" height="320"></canvas>

<br />
<div class="panel panel-default">
<div class="panel-body">
<div class="row">
<div class="col-sm-3 text-right">
<span class="label label-danger">High</span>&nbsp;
</div>
<div class="col-sm-9">
<span id="h-high" class="text-muted">(n/a)</span>
</div>
</div>
<div class="row">
<div class="col-sm-3 text-right">
<span class="label label-success">Low</span>&nbsp;
</div>
<div class="col-sm-9">
<span id="h-low" class="text-muted">(n/a)</span>
</div>
</div>
</div>
</div>
</div>

</div>

<div class="row">
<div class="col-md-12">
<p class="text-center">5 minute feed from home sensors for the past 24 hours.</p>
</div>
</div>

<footer class="footer">
<p class="text-center">Copyright &copy; Devan Sabaratnam - Blaze Business Software Pty Ltd</p>
</footer>

</div> <!-- /container -->

<script src=""></script>
<script src=""></script>
<script src=""></script>
<script src="refresh.js"></script>

</body>
</html>

Nothing magical here - just a simple HTML page using bootstrap constructs to place the chart canvas elements on the page in two columns.  We are loading all script and css goodies using external CDN links for Bootstrap, jQuery, Amazon SDK and Chart.js etc. so we don't have to clutter up our web server with extra .js and .css files.

Next we code up the script, in a file called refresh.js:

AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.Credentials('AKIZBYNOTREALPQCRTVQ', 'FYu9Jksl/aThIsNoT/ArEaL+K3yTR8fjpLkKg');

var dynamodb = new AWS.DynamoDB();
var datumVal = new Date() - 86400000;
var params = {
  TableName: 'iot-temperature-humidity',
  KeyConditionExpression: '#id = :iottopic and #ts >= :datum',
  ExpressionAttributeNames: {
    "#id": "id",
    "#ts": "timestamp"
  },
  ExpressionAttributeValues: {
    ":iottopic": { "S" : "temp-humidity/Omega-XXXX"},
    ":datum": { "N" : datumVal.toString()}
  }
};

/* Create the context for applying the chart to the HTML canvas */
var tctx = $("#temperaturegraph").get(0).getContext("2d");
var hctx = $("#humiditygraph").get(0).getContext("2d");

/* Set the options for our charts */
var options = {
  responsive: true,
  showLines: true,
  scales: {
    xAxes: [{
      display: false
    }],
    yAxes: [{
      ticks: {
        beginAtZero: true
      }
    }]
  }
};

/* Set the initial data */
var tinit = {
  labels: [],
  datasets: [{
    label: "Temperature °C",
    backgroundColor: 'rgba(204,229,255,0.5)',
    borderColor: 'rgba(153,204,255,0.75)',
    data: []
  }]
};

var hinit = {
  labels: [],
  datasets: [{
    label: "Humidity %",
    backgroundColor: 'rgba(229,204,255,0.5)',
    borderColor: 'rgba(204,153,255,0.75)',
    data: []
  }]
};

var temperaturegraph = new Chart.Line(tctx, {data: tinit, options: options});
var humiditygraph = new Chart.Line(hctx, {data: hinit, options: options});

$(function() {
  $.ajaxSetup({ cache: false });
  getData();                      // initial load so the charts aren't empty
  setInterval(getData, 300000);   // then refresh every 5 minutes
});

/* Queries the DynamoDB table and updates the charts */
function getData() {
  dynamodb.query(params, function(err, data) {
    if (err) {
      console.log(err);
      return null;
    } else {

      // placeholders for the data arrays
      var temperatureValues = [];
      var humidityValues = [];
      var labelValues = [];

      // placeholders for the data read
      var temperatureRead = 0.0;
      var humidityRead = 0.0;
      var timeRead = "";

      // placeholders for the high/low markers
      var temperatureHigh = -999.0;
      var humidityHigh = -999.0;
      var temperatureLow = 999.0;
      var humidityLow = 999.0;
      var temperatureHighTime = "";
      var temperatureLowTime = "";
      var humidityHighTime = "";
      var humidityLowTime = "";

      for (var i in data['Items']) {
        // read the values from the dynamodb JSON packet
        temperatureRead = parseFloat(data['Items'][i]['payload']['M']['temperature']['N']);
        humidityRead = parseFloat(data['Items'][i]['payload']['M']['humidity']['N']);
        timeRead = new Date(data['Items'][i]['payload']['M']['datetime']['S']);

        // check the read values for high/low watermarks
        if (temperatureRead < temperatureLow) {
          temperatureLow = temperatureRead;
          temperatureLowTime = timeRead;
        }
        if (temperatureRead > temperatureHigh) {
          temperatureHigh = temperatureRead;
          temperatureHighTime = timeRead;
        }
        if (humidityRead < humidityLow) {
          humidityLow = humidityRead;
          humidityLowTime = timeRead;
        }
        if (humidityRead > humidityHigh) {
          humidityHigh = humidityRead;
          humidityHighTime = timeRead;
        }

        // append the read data to the data arrays
        labelValues.push(timeRead);
        temperatureValues.push(temperatureRead);
        humidityValues.push(humidityRead);
      }

      // set the chart object data and label arrays = labelValues;[0].data = temperatureValues; = labelValues;[0].data = humidityValues;

      // redraw the graph canvas
      temperaturegraph.update();
      humiditygraph.update();

      // update the high/low watermark sections
      $('#t-high').text(Number(temperatureHigh).toFixed(2).toString() + '°C at ' + temperatureHighTime);
      $('#t-low').text(Number(temperatureLow).toFixed(2).toString() + '°C at ' + temperatureLowTime);
      $('#h-high').text(Number(humidityHigh).toFixed(2).toString() + '% at ' + humidityHighTime);
      $('#h-low').text(Number(humidityLow).toFixed(2).toString() + '% at ' + humidityLowTime);
    }
  });
}


Let's go through this script in detail.

The first two lines set up the Amazon AWS SDK.  We need to specify the AWS region, then we need to specify the credentials we will be using for interrogating the DynamoDB table.  Copy and paste in the Key and Secret that you created in the previous section here.

The next bit is initialising the AWS DynamoDB object in 'dynamodb'.  The 'datumVal' variable contains a timestamp that is 24 hours before the current date/time.  This will be used in the DynamoDB query to only select data rows in the prior 24 hour period.
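That 86400000 is simply 24 hours expressed in milliseconds:

```javascript
var DAY_MS = 24 * 60 * 60 * 1000;   // hours x minutes x seconds x milliseconds
var datumVal = new Date() - DAY_MS; // epoch milliseconds, 24 hours ago
console.log(DAY_MS);                // prints 86400000
```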

The 'params' object contains the parameters that will be sent to the dynamodb object to select the table and run a query upon it.  I am not a fan of NoSQL, mainly because querying data is a huge pain, and this proves it.  The next several lines are purely setting up an expression to look at the ID and the Timestamp columns in the DynamoDB table, and pull out all IDs which contain 'temp-humidity/Omega-XXXX' (remember, the ID is actually the topic, including the thing identifier), and a timestamp that is greater than or equal to the 'datum' that we set before.

Next, we set up the context placeholders for the two charts.  Simple Chart.js stuff here.

After that, we set up some default placeholders for the charts, including the colours of the lines and shading etc.  I am also using some xAxes and yAxes properties to turn off the X-axis labels and to ensure the Y-axis starts at a zero base.  You can omit these if you want the graph to look more dynamic (or cluttered! :)).

Then we initialise the Chart.js objects with the above options and contexts.

Next comes a generic function that calls the getData() function every five minutes.  You can change the setInterval() parameter from 300000 (1000 milliseconds per second x 60 seconds per minute x 5 minutes) to whatever you like.  But seeing as we are only pushing temperature and humidity data from our Onion to Amazon IoT every 5 minutes as well, anything less than a 5 minute check is just overkill.  Feel free to tailor these numbers to suit your own purposes though.

The rest of the script is the getData() function itself.  All this does is run a query against the 'dynamodb' object using the 'params' we supplied for the query parameters etc.

The results are returned in the data['Items'] array.

The first few lines of getData() just set up the placeholder arrays for the values and labels to be used on the charts.

The high/low placeholder variables are there purely for tracking the highest and lowest temperature and humidity readings.  You can elect not to do this, but I wanted to show on the main page the highs/lows for the preceding 24 hour period.  I am simply initialising some empty variables here to use in the following loop.

Then comes a simple loop that runs through the returned data['Items'] array and parses the values into the variables and arrays I defined above.  I am also comparing the read values against the highs and lows.  For every array element I read, I check to see if the value is higher than the last highest value, or lower than the last lowest value(s), and update the highs/lows accordingly.

Then, after the loop, we update the Chart.js chart data and labels with what we have read.

Finally, we force the charts to redraw themselves, and use jQuery to update the High and Low sections on the main web page with the latest readings and times.

That is it!

Save these two files, then upload them to your bucket by going back to your Amazon S3 Bucket screen and clicking on the 'Actions' button and choosing 'Upload Files'.

Drag and drop the two files onto the upload screen, but don't start it yet!  Click on the 'Set Details >' button at the bottom, then immediately click on 'Set Permissions >'.

Make sure you tick the box that says 'Make everything public', otherwise nobody can see your index.html file!

Now click 'Start Upload' to begin uploading the two files.

You are DONE!  Can you believe it??  We are done.  Finished.  Completed.

If you type in the website address we noted down earlier into your browser, you should be able to see a beautiful dashboard showing the collected data from your Onion Omega device.


If you made it this far, then congratulations on achieving this marathon.  It took me several days to nut the above settings out, and many false starts and frustrations along with it.  I am hoping that by documenting what eventually worked for me, I can reduce your stress and wasted time and set you on the path to IoT development a lot quicker and easier.

Next steps for me are to set up a battery power source for my Omega Onion, so it doesn't have to be connected to my computer, and can sit on a shelf somewhere in my guitar storage room and still report to me.

Let me know if you find this tutorial useful, and please also let me know what you guys have built with IoT - it is a fascinating field!




Building a 'Nosedive' rating app in a couple of hours

This month, the family and I have been watching the Netflix series "Black Mirror", catching up on older seasons and devouring Season 3.  One of our favourite episodes was 'Nosedive', and so as not to give out any spoilers here, I won't go into the plot line.  Nevertheless, we were all fascinated by the 'Rating' app that everyone used on the show.

So much so, that my wife, the kids and I all started 'air gesturing' each other the 'swipe and flick' routine as if we were using the app to rate each other throughout the day.

This made me think - what if we actually had a dummy app that we could use?  I noticed that Netflix had created a demo site on the internet to promote the show, so I (ahem) "borrowed" some of the assets like the background, star graphics and the rating sounds, and mocked up a small dummy 'Nosedive' app in a couple of spare hours.

Now I can really annoy the kids.  "Didn't do your homework?, ONE star for you!" (dew dew dew dew dew).  Wife brings me a nice hot cup of tea? "Five stars, my dear..." (dinga ding ding ding DING!).

I never intended to make money from this little side project - I just installed it on our phones using my developer account.  I am releasing the source code on GitHub in case any others want to take things further.

Please note that this is nothing like the actual app - there is no facial recognition (although I have been playing around with the Microsoft Face API to see if I can do something there).  There is no aggregate rating for people, and there is no central database that things are stored in (though I have thought about using Firebase to store rating data in the cloud).  It is purely a gimmick - although there is no reason that anyone can't take this starting code and build all that on.  Have at it! :)

Building the App

The app itself is built using the Ionic framework, which I have been using for over a year now, and really love.  It facilitates creating a hybrid app quickly and easily that can be used on iOS and Android devices.  No need for Swift or Objective-C, it is all done in javascript and HTML/CSS.

Nothing too tricky about this app - it is a simple one page application, which is the rating page.  As I mentioned, all the assets, including the swirling pink background video, the rating star graphics and the notification sounds, were downloaded from the Netflix promotional site I mentioned above.  That is 90% of the work right there.

The rest was just implementing the swipe gestures to set the star level, and then the flick gesture to 'send' the rating and play the sounds.

Setting the ratings was one area that stumped me for a while.  Initially, I was playing around with the $ionicGesture event handler, trying to trap left and right swipes, including the distance swiped and the swipe velocity, to calculate the star rating to give.  That all turned out to be extremely tricky and difficult, so in the end I used a typical programmer's shortcut - I cheated! :)

I ended up placing an HTML range slider control on the screen, just under the stars.  I then made this slider element invisible, and used CSS to offset the slider so that it lay just on top of the stars themselves.

This way, if anyone put their finger on the stars and moved left or right, it effectively moved the hidden slider left and right.  The upside is very accurate tracking: the rating value corresponds exactly to the star where the user lifted their finger.
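The trick can be sketched roughly like this (the class name, 0-100 slider range and function names are my own assumptions for illustration, not the actual app's code):

```javascript
// Hypothetical sketch of the hidden-slider trick.
// The slider sits invisibly on top of the star row, with CSS along the lines of:
//   .rating-slider { position: absolute; opacity: 0; width: 100%; }
// so dragging a finger across the stars is really dragging the slider.

// Map a 0-100 range slider value onto a 1-5 star rating.
function sliderToRating(value) {
  return Math.min(5, Math.floor(value / 20) + 1);
}

// Wiring the hidden slider to the visible stars might then look like:
// slider.addEventListener('input', function () {
//   showStars(sliderToRating(Number(slider.value)));
// });
```

The nice part of this approach is that the browser does all the touch tracking for you - the slider's value is always in sync with the finger position, with no gesture maths required.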

The downside is that on some devices, the slider will not move unless the user starts their finger on the current star (e.g. if you wanted to go from 2 stars to 5 stars, you would have to place your finger on the second star, then slide to the fifth star.  If you just tapped the fifth star or started on the third star to slide up, the slider would not move).  Most users I tested this on (well, my wife and kids) seemed to naturally start at the current star anyway, so I figured I could get away with this.  At least it worked with minimal (read: NO) coding required.

The last thing to do was to implement the Cordova Native Audio plugin to generate the sounds.  This was pretty trivial, and only a few lines of code.  I had to capture the swipe up gesture to trigger the 'send' sound, then wait one second, then play the 'rating' sound depending on the rating (one to five) that the user had chosen.  Check the code for details.
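The call sequence might look roughly like this (the sound ids, file paths and the injected `audio` parameter are my own naming for illustration; the Cordova Native Audio plugin's `preloadSimple`/`play` methods are real, but the actual app's wiring may differ):

```javascript
// Hypothetical sketch of the send-then-rate sound sequence.
// `audio` is expected to expose play(id), like the Cordova Native Audio
// plugin's window.plugins.NativeAudio object, with the sounds preloaded
// elsewhere, e.g. audio.preloadSimple('rate3', 'sounds/rate3.mp3').
function playSendThenRating(audio, rating, delayMs) {
  audio.play('send');               // whoosh as the rating is 'sent'
  setTimeout(function () {
    audio.play('rate' + rating);    // ding(s) for the chosen 1-5 rating
  }, delayMs);
}
```

Injecting the audio object like this also makes the sequence easy to exercise outside a device, by passing in a stub with a `play` method.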

Yes, yes, yes - the pedants among you might say that the sending phone should only play the 'send' sound, with the rating sound played on the receiver's phone.  But our app isn't really 'sending' the rating anywhere - it is just a tool to tease or annoy others, so the rating sound playing on our own phone after a delay is enough to let the other person know exactly what we think of them (as long as they are in hearing range in a relatively quiet environment).

Anyhow, I will let others feel free to build upon the code base and see what they come up with.  I won't be releasing this app on the App Stores or anything, as I don't want to push things too far and be hit with a copyright violation from Netflix!  Have fun.


Getting heard on the internet

 Picture courtesy of National Geographic

Someone once told me that the ideal size for a human community is something in the order of 500 people.  Apparently that was the average size of a village or community back in the day, and it meant that every person pretty much knew everyone else.  Neighbours would know each other and look out for one another when they were sick or in need.  Anyone who tried to misbehave or act out was generally known, and quickly brought back into line by the collective, because everyone had a stake in the wellbeing and survival of the community.

Yesterday I was introduced to a new 'game' online (tip: visit it on your mobile browser).  It is a beautifully designed, simple site which lets you make paper planes, stamp them with your location and 'launch' them out into the internet.  You can also 'catch' planes that others have launched, look at where they have been by the stamps on them, then stamp them with your own location and relaunch them back into the virtual skies again.

It is fascinating to see where some planes have been in their travels, and also exciting to see where your planes will end up.

A deceptively simple game, but it was all the more engrossing to me, as it took me back to my childhood loves of building, discovering and connecting with others.

When I first signed on to the game yesterday, there were around 100,000 planes flying around this virtual world.  I launched a few, and caught many.  Most of the ones I caught were filled with stamps, showing the number of people who had caught it in the past.

But today when I went back online, there were around 400,000 planes flying around.  Quadruple what it was yesterday.  I caught a few planes, but noted that nearly all of them had only one stamp - from the originator who built and launched the plane in the first place.

Somewhere along the line, the balance tipped.  When I started, I felt an instant connectedness to the others playing the game, because the planes I launched had a good chance of being caught, and the planes that I caught had been stamped by so many others.

But now, any planes I launched into the ether would likely just buzz endlessly around the world, lonely and ignored in the huge stream of lost and lonely paper planes.  That connectedness that I once experienced is now severely diluted in the increasing noise.

I can only imagine that the players who started this game when there were only a few hundred planes flying around would make the opposite argument - that they were catching the same planes over and over again, with little chance of seeing a plane from the other side of the world.

I feel exactly the same when it comes to social media platforms like Twitter, Instagram or Medium.

In the early days of a platform, what you say is easily visible to other early adopters, and the feedback and conversations you have are meaningful and rich.  The growing crowds are exciting at first, as you perceive your audience and reach expanding, but there comes a time when your uniqueness and individuality (and sense of self-importance) within that ecosystem is simply diluted away to something generic.

That is why, in my latest startup SaaS app, I am not going for large numbers of users, but rather a quality community.  We recently removed our free plans to further this goal.  I am proud, when asked, to say that my users number in the hundreds rather than the six or seven figure mark.  At this stage I still know virtually all my users by name, and support tickets can stay personalised and friendly.  My users are not statistics on a spreadsheet.  They are part of my village.

As for the paper planes game, I have changed my thinking there too.  I no longer make and launch planes into the already crowded skies.  Nowadays I am happy to simply catch other people's planes, stamp them and send them on.  I now relish catching planes with only a single stamp on them, because I feel that when I stamp them and send them on, in effect I am saying "This lonely plane matters, and I hope it has a great journey".  Somewhere in the world, someone will check the stats on their launched planes, and I hope it gives them a brief spark of connection with a guy in remote Australia.

20 years of Blaze...

The 1st of September marks a major milestone in my life.  It will mean that I have been running my company, Blaze Business Software Pty Ltd for 20 years now.  Two decades.  It seems almost unbelievable to me at times.

Back in September 1996, I had only been married for a month, I was about to turn 30, and I decided to start a software consultancy business out of my bedroom.  Thus began the rollercoaster, including getting an office in the Cullen Bay area of Darwin, growing the team to around 16 people at one stage, and now coming full circle to just my wife and I working from a home office again in a 'lifestyle' business.

So much has changed in the IT industry since then.  When I started Blaze, the internet was just hitting the mainstream here in Australia, and everything was still dial up.  We were one of the first offices to get an ISDN line, and I clearly remember setting up a small Windows 98 server in the back which was running some sort of DOS mail daemon so that we could have individual email addresses for every employee.  Something that was so rare back then.

We were also one of the first companies locally to upgrade to Microsoft Exchange and implement ActiveSync.  I clearly remember proudly showing off how I could read and reply to emails on my Palm Pilot in real time to all my clients.  Nowadays that is just an expected thing, but back then I was pleased that we were pushing the envelope and being cutting edge.

Lots of nice memories, such as being a finalist in the Telstra Small Business Awards up here in 1998, I think.  Lots of other small awards and achievements.  But there were also some really tough times, and many days where I didn't know whether I wanted to close the doors forever and go raise sheep in the Italian mountains.

But through all that, I still wake up every day and look forward to doing the work I do.  I am always grateful to have met so many wonderful people through my business.  From clients (many of whom I still work with 20+ years later), to employees who have become close friends, to colleagues and competitors and everyone who has walked through the doors or called in the past 2 decades.  Thank You.

Proving that it is never too late to be a 'startup', this year I have embarked on a whole new reboot of the business, as we become a SaaS company providing subscription based business software.  Given that I will be turning 50 this year, I don't know if I will have the energy to keep on with the consulting and support role for many more years, and I am looking forward to setting up a passive income source from a modern, web based subscription platform.

Just another step in our long and interesting journey.  Hope to see you all along the way...