
Zukuno | A personal assistant for the IoT

7 Aug 2016 · CPOL · 19 min read
Learn how to set up a Raspberry Pi as a wifi router, track public transport with GTFS, install & use Django, and combine CMUsphinx, WebView, and Microsoft's speech API in an Android app. We'll also control Sonos speakers, build a security camera, and consume some APIs (including the CodeProject API).

Check out a brief demo of voice-enabled sound control:
Image 1

Image 2

[Fig 1.] GIF screencast of my GUI while I was prototyping ^.^


Hello and welcome to my latest CodeProject article!

The last time I wrote an article here on CodeProject (it was October last year), I was still in high school. Since then, I've started studying Mathematics and IT at university, and have generally been too busy to write articles :-( However, when I finished my first semester of study a few weeks ago, I knew I had a few weeks' break before my second semester began, so I considered writing another CodeProject article. Then I checked CodeProject and saw that there was an IoT competition on! Feeling inspired by the release of Google Home and Amazon Alexa, I thought I'd give voice-enabled home automation a try - and thus this project was born :cool:

In this article, I'm going to show you how you too can build your own personal assistant. We'll start by configuring a Raspberry Pi as a wifi router. After that, we will install Apache & Django, prototype our UI, and build our own wrappers around several APIs. We will then build an Android app that consumes the APIs we built, and add voice recognition and synthesis to our app by combining CMUsphinx and Microsoft's Bing Voice API. I will then show you how to use another Raspberry Pi and a USB webcam to create a real-time security feed. We'll conclude the article with a demo of how to control your SONOS sound system with Python's SoCo library, and take a look at future expansion possibilities for this project.

Why home voice automation? I think voice automation is a cool area of research because it can be applied in a wide variety of ways. For example, the personal assistant that I have created (Zukuno) could easily be adapted to give people with disabilities (quadriplegia, blindness, etc) greater control of their surroundings through their voice. It could also be extended in industrial environments to automate equipment and monitor IoT services. In fact, the innate flexibility of voice technologies made it somewhat challenging to choose a category for this article.

Anyway, enough introduction. Let's get coding!

What we'll need

To build our personal assistant, we will need:

  • Two Raspberry Pis (one for the master wifi router, one for controlling the security camera)
  • Two 8GB or greater micro SD cards
  • A USB speaker
  • A USB camera
  • Power supplies
  • An available ethernet port (for the master wifi router)
  • An Android-based smartphone or tablet (an old one will do. We use this to display our GUI and handle microphone input)
  • An HDMI cable and an HDMI-enabled screen (not essential, but very useful when troubleshooting your Pis)
  • If you're not using the latest model Raspberry Pi (model 3), you will need WiFi USB adaptors to complete this project.

Useful to have:

  • An intermediate amount of programming experience
  • Lots of patience
  • A few weekends of spare time

Bird's Eye View

Image 3

[Fig 2.] A bird's eye view of our project.

The core component of our entire system is a single Raspberry Pi 3. Connected to my main home network via its ethernet port, the Raspberry Pi 3 is both a wifi router and an Apache/Django server, and provides a central API for Zukuno-optimized IoT devices to access. By utilizing Android tablets placed in strategic locations throughout the target space of a Zukuno installation, Zukuno is able to listen for commands and display visual output. A USB speaker attached to the Raspberry Pi provides Zukuno with a centralized audio output, although the tablets/devices could be configured to provide localized output if needed.

The advantage of using wall-mounted tablets is that they provide adequate microphone quality and superb audio-visual output, as well as a local environment on which to perform CPU-intensive tasks such as voice recognition. If voice recognition were performed solely on the Pi, the CPU load on the Pi would increase dramatically with every room added to the system, eventually rendering the entire system unworkable.

Getting Started with the Raspberry Pi

Setting it up

First released in 2012, the Raspberry Pi is now in its third official iteration. Until May this year, I had never bought a Raspberry Pi, mainly because they seemed too limited. However, when the Raspberry Pi 3 arrived with inbuilt WiFi, I was finally motivated to purchase one. I'm glad I eventually did, because otherwise I would never have been able to write this article :-)

Image 4

[Fig 3.] My Raspberry Pi 3, inside a case, with a ruler for scale

I highly recommend you get a case for your Raspberry Pi, since the bare board is small and easily damaged.

Downloading & burning a Raspbian image

This is fairly straightforward to do. In fact, the simplicity of this process is one of the major selling points of the Raspberry Pi (and, for that matter, other IoT devices): if you ever mess up your installation, it is very easy to remove the SD card, re-image it, and start your Pi from scratch with a completely clean OS.

Head over to the official Raspbian download page to download Raspbian.

Once you've downloaded your Raspbian image, insert an 8GB or greater microSD card into your machine (use an adapter if necessary). Then, unzip the image file, and use Win32DiskImager to burn the image to your SD card.

Booting for the first time

To boot your Pi for the first time, you should make sure you have a micro-USB power supply capable of providing at least 2.5A at 5V (the official recommendation for the Pi 3). I have personally found that the Pi generally runs fine on slightly less power, for example from a phone charger, although it will sometimes brown out while running CPU-intensive tasks on a weak supply.

Begin by inserting your imaged SD card into the card slot underneath the Raspberry Pi. Optionally, connect your Pi to the internet by plugging an active ethernet cable into the ethernet port. If you have an HDMI-enabled monitor, grab an HDMI cable and plug your Pi into your monitor. Once all of that is done, insert the micro-USB charger into the power connector on the Pi. Your Raspberry Pi will automatically boot, and a stream of text output will flood your screen.

Image 5

[Fig 4.] My Raspberry Pi 3 booting.

Changing your Password

The default login for your Raspberry Pi is "pi" (username, no quotes) and "raspberry" (password, no quotes).

I highly recommend changing your password to something memorable and secure before continuing any further. A lot of people skip this step, thinking "I'll be right", but I know of several people who have learnt the hard way why this step is important. Just do it.

To change your password, enter this command:

$ sudo passwd

You will be prompted to enter your current password (raspberry), after which you will be able to choose a new password.

Once you're done, try entering

$ startx

to play with the GUI that comes with Raspbian. You'll need to plug a USB mouse into the Pi to use the GUI. Take a little while to familiarize yourself with the operating system.

Image 6

[Fig 5.] The Raspbian GUI

Creating a WiFi access point

One of the coolest things you can do with your Pi without writing any code is configuring it to act as a WiFi router. It's a great way to whet your IoT appetite, and is a crucial step towards building our personal assistant.

Installing hostapd and dnsmasq

There are a number of tutorials online on how to set up a Raspberry Pi as a wifi router. Because they were written at different times, many of them conflict with each other, so I thought I'd share the method that worked for me on the Raspberry Pi 3 running Raspbian Jessie (using the inbuilt WiFi hardware).

Start by cracking open a terminal and installing these two packages: hostapd and dnsmasq

$ sudo apt-get update
$ sudo apt-get install hostapd dnsmasq


Open up /etc/network/interfaces for editing:

$ sudo nano /etc/network/interfaces

Edit it so that it looks something like this:

Image 7

[Fig 6.] Editing /etc/network/interfaces
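In case the screenshot is hard to read, here is a typical configuration for this setup. Treat it as a sketch: the address range is an example I've chosen, so adjust it to whatever subnet you want your hotspot to use.

```
auto lo
iface lo inet loopback

# eth0 keeps getting its address from your home router
iface eth0 inet dhcp

# wlan0 gets a static address - it will be the gateway for the hotspot
allow-hotplug wlan0
iface wlan0 inet static
    address
    netmask
    network
```

On Raspbian Jessie you may also need to add denyinterfaces wlan0 to /etc/dhcpcd.conf so that dhcpcd doesn't fight over the interface.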

After that, create a new configuration file at /etc/hostapd/hostapd.conf:

$ sudo nano /etc/hostapd/hostapd.conf

Enter this content:

interface=wlan0
driver=nl80211

# name of your new wifi hotspot
ssid=Pi3-AP

hw_mode=g
channel=6
macaddr_acl=0
auth_algs=1

# WPA2 with a pre-shared key
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase={YOUR DESIRED WIFI PASSWORD GOES HERE}
rsn_pairwise=CCMP

Make sure you choose a wifi password and replace {YOUR DESIRED WIFI PASSWORD GOES HERE} with the correct value. I know it's terrible security practice to store passwords in plain text, but unfortunately this is the way hostapd works. Hostapd is an open-source project, so if you're concerned about this, you can always fork their code and implement proper password hashing.

Now run these commands in order:

$ sudo service dhcpcd restart
$ sudo ifdown wlan0; sudo ifup wlan0
$ sudo /usr/sbin/hostapd /etc/hostapd/hostapd.conf

At this point you should be able to use your smartphone/laptop to see your new WiFi network. However, you won't be able to connect or access the internet, because we haven't configured dnsmasq yet. Open its configuration file:

$ sudo nano /etc/dnsmasq.conf

Then change the contents of that file to this:

interface=wlan0
# Hand out DHCP leases to hotspot clients. This range is an example, and
# should sit in the same subnet as the static address you gave wlan0.
dhcp-range=,,12h
# Use Google's DNS servers.

Now, to enable IPv4 forwarding, open up /etc/sysctl.conf:

$ sudo nano /etc/sysctl.conf

Uncomment the line that says net.ipv4.ip_forward=1 by removing the # from the start of the line. Ctrl-X, y, Enter to save.

Now enter the following commands:

$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  
$ sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT  
$ sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
$ sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"
$ sudo nano /etc/rc.local

Just above the line exit 0, add the following line of text:

iptables-restore < /etc/iptables.ipv4.nat

If you have any trouble with the above steps, try googling - there are several good tutorials on setting up a Raspberry Pi 3 as a WiFi access point, and configuration details change between Raspbian releases.


Check your connection

Reboot your Pi with:

$ sudo reboot

If all went well, once your Pi has finished loading, you should be able to connect other devices to your new WiFi network!

Image 8

[Fig 7.] iPhone connected to my Pi3-AP network

Connecting via PuTTY

Now that you have set up your network, connect your PC to it. If you don't already have PuTTY, download it from an official site, then run it.

Image 9

[Fig 8.] PuTTY

Enter the details for your Raspberry Pi (Host Name: your Pi's IP address on its new network, Port: 22 - that's it!), then click "Open".

Image 10

[Fig 9.] Using PuTTY to SSH into our new Raspberry Pi WiFi router

Now that we can access a terminal remotely from our desktop, we don't really need the Raspbian GUI anymore. If you wish, you can disconnect your Pi from your monitor and tuck it safely away in a nearby corner where it can access power and ethernet. Beginners should note that you do not use your WiFi password to access the Pi through PuTTY; instead, use the Raspbian account password you set with passwd earlier in this article.

Setting up our Workflow


Git

Git is an immensely popular version control system, often associated with the hosting service GitHub, although many other Git-based services exist, and many people run their own Git servers.

For the rest of this article, we will be using Git to push and deploy our code to our Raspberry Pi.

Visual Studio Code

Visual Studio Code is a brilliant modern text editor that I highly recommend for this article. There are alternative text editors, which you can use if you wish, but I am a firm believer in learning to adapt to all sorts of text editors so that you can pick the best tool for each task with an open mind. For example, although I recommend VS Code for this article, I wouldn't use it to edit a 2GB .csv file. Something like Vim, Nano, or perhaps Notepad++ on Windows would be better suited to that task.

Creating & using repositories

To create a new repository in Git on your Raspberry Pi, SSH into your Pi with PuTTY, and create a new directory called "GitTest".

$ mkdir GitTest

Then enter that directory.

$ cd GitTest

Now, initialize a new git repo:

$ git init
$ git add .
$ git commit -m "First Commit"

To push to a remote repository, such as one on GitHub, do the following:

$ git remote add origin URL-of-your-remote-repository
$ git remote -v
$ git push -u origin master

After you've made local changes, you can simply push them to a repo through the following:

$ git push

Likewise, remote changes can be pulled down to a local repo:

$ git pull

Building our Backend

Zukuno is split into two parts: a UI served dynamically through Apache to participating tablets, and a Django-powered API, consumed by the UI, that makes everything tick. In this section, we will focus on building Zukuno's backend API.

Installing & Configuring Apache

Start by installing the Apache package:

$ sudo apt-get update
$ sudo apt-get install apache2

After installation completes, start the Apache service:

$ sudo service apache2 start

You can now test your server by opening a browser on your computer and navigating to your Raspberry Pi's IP address. You should be greeted by the Apache welcome page. Our WiFi router is now a web server too :cool:

Files served through the Apache server are located at /var/www/ on your Raspberry Pi. Navigate to this location, initialize a git repository, and push it to GitHub; then clone it to your desktop. From then on, whenever you make changes to your desktop repo, you can push them to GitHub with git push and pull them onto your Raspberry Pi with git pull.

Installing & Configuring Django

We are slowly starting to get to the more advanced parts of this build. Installing Django alongside Apache is somewhat tricky, so I highly recommend following the official documentation, perhaps working inside a virtual environment :-)

You can test your Django installation using the following command:

$ python -m django --version

Assuming you've got Django covered, let's create a new folder for our project:

$ cd ../
$ mkdir django
$ cd django
$ mkdir zukuno
$ cd zukuno

Now let's create a new Django project:

$ django-admin startproject zukuno

Now that we've done that, let's ensure that we own all the files that we just created:

$ sudo chown -R $USER:$USER .

We will now have a directory with the following structure:

zukuno/
        settings.py
Let's start our server. Navigate into the folder containing, then run the following command:

$ sudo python runserver

The on the end tells Django to accept external connections, using port 8000.

Image 11

[Fig 10.] Running Django on our Raspberry Pi

Now, if you load http://<your Pi's IP address>:8000 on a device connected to the Raspberry Pi's network, you will see something like Fig 11 (taken from a very early build of our Android app, which we'll discuss later).

Image 12

[Fig 11.] The default Django welcome page, running inside an early build of the Android app we'll cover later

To close the server gracefully, press Ctrl-C while the terminal is active. Unfortunately, I have found that Django sometimes fails to exit completely, leaving port 8000 blocked so that the server cannot be restarted. If this happens to you, use this command to force-clear the port so you can restart the server:

$ sudo fuser -k 8000/tcp

Consuming a GTFS feed

Introduction to GTFS

According to Google's official documentation, GTFS Realtime is a "feed specification that allows public transportation agencies to provide realtime updates about their fleet to application developers". It is an extension of GTFS (the General Transit Feed Specification), which defines the static schedule data.

GTFS also includes several static files (stops, routes, timetables), which serve both as reference data for the realtime feed and as a fallback whenever real-time data is unavailable.

When building this project, I referenced the official GTFS documentation and the feeds published by my local public transport authority.


Installing Google's Python wrapper

Image 13

[Fig 12.] Installing gtfs-realtime-bindings on Windows.

Fortunately for us, we don't need to spend ages building a feed parser for this protocol. Instead, we can use Google's pre-built Python package.

Installing it takes literally one line:

$ pip install --upgrade gtfs-realtime-bindings

Trying it out

Create a new file (let's call it in your home directory on your Raspberry Pi:

$ nano

Image 14

[Fig 13.] Using the GTFS library with the code below

Let's try something simple. Add the following code to the file:

from google.transit import gtfs_realtime_pb2
import urllib

feed = gtfs_realtime_pb2.FeedMessage()
response = urllib.urlopen('<your GTFS-realtime vehicle positions URL>')
vehicles = []
for entity in feed.entity:
    if entity.HasField('vehicle'):
        if not in vehicles:

print "There are " + str(len(vehicles)) + " buses on the road"

Save and exit. Then run the file using

$ python

The last line of the generated output will be something along the lines of this:

There are 974 buses on the road

We have just collected the unique vehicle identifiers and used their total to calculate how many buses are active right now :cool:

Our code

To get started, stop the Django process if it's running, then navigate to the folder containing in your Django project. Once you're there, run the following:

$ python startapp gtfs

This will create a new "app", gtfs, which is like a submodule of a Django project. Open gtfs/ in nano and change it to the following:

from django.shortcuts import render
from google.transit import gtfs_realtime_pb2
import urllib

# Create your views here.

from django.http import HttpResponse, JsonResponse

def index(request):
    feed = gtfs_realtime_pb2.FeedMessage()
    response = urllib.urlopen('<your GTFS-realtime trip updates URL>')
    trip_updates = []
    for entity in feed.entity:
        if entity.HasField('trip_update'):
            for stop_update in entity.trip_update.stop_time_update:
                if stop_update.stop_id == '<the stop you wish to use>':
                    delay = stop_update.arrival.delay
                    append = " ahead"
                    if delay < 0:
                        delay = delay * -1
                        append = " late"
                    if delay < 60:
                        delay = "30 seconds" + append
                        delay = str(delay / 60) + " minutes" + append
                    trip_update = {
                        'route_id': entity.trip_update.trip.route_id,
                        'time': stop_update.arrival.time,
                        'delay': delay,
    #json_output = jsonpickle.encode(trip_updates, unpicklable=False)
    return JsonResponse(trip_updates, safe=False)
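One thing the snippet above doesn't show is the URL wiring: the index view still needs to be hooked into the project's URLconf before anything can call it. Here is a minimal configuration sketch - the /gtfs/ route and the Django 1.x-era syntax are my assumptions, not the article's exact code:

```python
# zukuno/ - hypothetical routing for the view above
from django.conf.urls import url
from gtfs import views as gtfs_views

urlpatterns = [
    url(r'^gtfs/$', gtfs_views.index),
]
```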

To find the stop outside your house, refer to the static files available as part of the GTFS standard. You should be able to look up your street name and find a matching stop.
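If you'd rather not eyeball the CSV by hand, stops.txt is easy to search programmatically. Here is a hedged sketch: the column names are the ones required by the GTFS spec, but find_stops is my own helper, not part of any library.

```python
import csv

def find_stops(stops_txt_path, search_term):
    """Return (stop_id, stop_name) pairs whose name contains search_term."""
    matches = []
    with open(stops_txt_path) as f:
        for row in csv.DictReader(f):
            if search_term.lower() in row["stop_name"].lower():
                matches.append((row["stop_id"], row["stop_name"]))
    return matches
```

Point it at the stops.txt from your agency's static feed and search for your street name; whatever stop_id it returns is the value to paste into the view above.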

The code above is basically a JSON simplifier. It takes the huge JSON response from the real-time feed, extracts the tiny bit of information that we want, and returns it in a different JSON structure.

Future Improvements

The code samples I have written above are only a very simple demonstration of what is possible here.  For example, I could track this data for a few months, then apply machine-learning algorithms in an attempt to understand when transport delays happen, why they happen, and how one can avoid them. However, I only have so much time to work on this project, so I have to leave this as-is for now.

Using Microsoft's TTS engine

Why this engine?

I chose to use Microsoft's Speech API because:

  • It has a reasonable free quota per month
  • It is adequately accurate
  • It was easier to use than other alternatives that I tried

Getting an API access key

To get an API keyset to use when compiling my sample code, or when building your own apps, head over to Microsoft's Cognitive Services site and subscribe to the Bing Speech API.

Trying it out

First, we define some key parameters:

import httplib
import urllib
import json

clientId = "<insert>"
clientSecret = "<insert>"
ttsHost = ""
params = urllib.urlencode({'grant_type': 'client_credentials', 'client_id': clientId, 'client_secret': clientSecret, 'scope': ttsHost})
headers = {"Content-type": "application/x-www-form-urlencoded"}
AccessTokenHost = ""
path = "/token/issueToken"

After that, we contact Microsoft's servers to request an access token:

conn = httplib.HTTPSConnection(AccessTokenHost)
conn.request("POST", path, params, headers)
response = conn.getresponse()
data =
token_data = json.loads(data.decode("UTF-8"))
access_token = token_data['access_token']

Now that we have our access token, we can query their servers with the text to be dictated:

body = "<speak version='1.0' xml:lang='en-us'><voice xml:lang='en-us' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'>" + text_to_dictate + "</voice></speak>"
headers = {"Content-type": "application/ssml+xml", 
            "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm", 
            "Authorization": "Bearer " + access_token, 
            "X-Search-AppId": "99aab5c6fd784c0dac57d1fe01059c3c", #"07D3234E49CE426DAA29772419F436CA", 
            "X-Search-ClientID": "099a0afb96044b298daeb6ce9cc131ce", #"1ECFAE91408841A480F00935DC390960", 
            "User-Agent": "TTSForPython"}
conn = httplib.HTTPSConnection("")
conn.request("POST", "/synthesize", body, headers)
response = conn.getresponse()
data =

Finally, we save the recording to our local disk:

file = open("/home/pi/response.wav", "wb")

Our code

The code that I used in my application is almost identical to the code above, except I wrapped it in a function and created a class around it, so that my Django view app could call it whenever necessary:

import subprocess

from voice import tts

response = "I've paused the music."
tts.getDictation(response) # Use Microsoft Speech API to dictate the input
player = subprocess.Popen(["omxplayer", "-o", "local", "/home/pi/response.wav"], stdin=subprocess.PIPE)

Building our Frontend

My Approach to Frontend

I decided to build the frontend of Zukuno as a webapp. This had several positive and negative effects. One positive effect of this approach was that it was very easy to push updates to the GUI - all I had to do was update the files on the Raspberry Pi, and the changes would automatically flow to every device accessing those files. Unfortunately, one negative effect of this approach was that I was severely limited in terms of design and responsiveness, mainly due to the extremely poor performance of Android's WebView component.

Using Android Studio

If you want to build modern Android apps today, I recommend using Android Studio. Since the first stable release came out in December 2014, it has steadily been improving - and today it is by far the best Android IDE out there for developing native Android apps.
Image 15

[Fig 14.] Android Studio

Using WebView

WebView is broken

As I mentioned earlier, our app uses Android's WebView component to display the Zukuno UI. Unfortunately, WebView is largely broken and poorly maintained, and as a result can suffer performance issues and behave unexpectedly. In the next few sections, I'm going to explain a few workarounds I came up with.

Enabling touch events

One bizarre type of behaviour that I encountered while using WebView was that it was almost impossible to receive onClick events, or even onTouch events, in the Javascript code within the webapp. I still have not found a clean workaround for this, although I did find the following hackish solution:

main_view.setOnTouchListener(new View.OnTouchListener() {
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getPointerCount() > 1) {
            return true;

        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN: {
                m_downX = event.getX();
                // send the click event
                main_view.loadUrl("javascript:$(document.elementFromPoint(" + event.getX() + ", " + event.getY() + ")).click()");

            case MotionEvent.ACTION_MOVE:
            case MotionEvent.ACTION_CANCEL:
            case MotionEvent.ACTION_UP: {
                // prevent horizontal scrolling
                event.setLocation(m_downX, event.getY());

        return false;

Fixing performance

Out-of-the-box, CSS animations and transforms are horrible in WebView. I found that forcing a hardware layer helped a little. Animations are still far from perfect, but at least they are now watchable.

main_view.setLayerType(View.LAYER_TYPE_HARDWARE, null);

Communicating through Javascript

One redeeming quality of the WebView component is how easily it supports Javascript communication. On other platforms I've used (CefSharp, the C# WebBrowser control, etc.), Javascript interop has been a pain to get working, but in Android's WebView it just works.

Java -> Javascript

Calling from Java into the page's Javascript is done with loadUrl, exactly as in the touch-listener workaround above:

main_view.loadUrl("javascript:alert('Hello from Java!')");
Javascript -> Java


public class HelloWorld {
    Context ctx;

    HelloWorld(Context c) {
        ctx = c;

    @JavascriptInterface // required on API 17+ for the method to be exposed
    public void fooBar() {
        //do something

//In onCreate()
main_view.addJavascriptInterface(new HelloWorld(this), "HelloWorldPortal");

Then, in Javascript:

HelloWorldPortal.fooBar();

Building our own GUI within WebView


The structure of our HTML is as follows:

    <link href=",300,700" rel="stylesheet" type="text/css">
    <link rel="stylesheet" href="style.css" />
    <div id="mic">
        <img src="img/mic.png" />
        <div class="spinner">
            <div class="bounce1"></div>
            <div class="bounce2"></div>
            <div class="bounce3"></div>
        </div>
    </div>
    <div id="rec-text">
        <div class="circle-waiter"></div>
    </div>
    <div id="response"></div>
    <div id="shade"></div>
    <!--<div id="header"></div>-->
    <div id="space">
        <div id="left-space">
            <img src="">
        </div>
        <div id="right-space"></div>
    </div>
    <div id="times">
        <h1>Upcoming Buses</h1>
        <div id="sched">
            <div class="entry">
                <span class="service">P205</span>
                <span class="time"><span>ETA</span>6:00am</span>
                <span class="data">
                        <span class="late">3 minutes late</span>
                        <span class="details">to City, George St.</span>
                </span>
            </div>
            ... lots of entries ...
        </div>
    </div>
    <script src="jquery.min.js"></script>
    <script src="main.js"></script>

Consuming the APIs we built

Playing activation/deactivation sound effects is very easy:

$.get(""); //Activated!
$.get(""); //De-activated!

Sending other information is done like so:

$.get("" + rec, function(result) {
    setTimeout(function() {
    }, 2000);
});

The following code updates the live bus feed:

function getTripData() {
    is_collecting = true;
    console.log("Collecting data");
    $.getJSON("", function(result){
        is_collecting = false;
        console.log("Parsing data");

        data = result;
        $.each(result, function(i, field){
            $('#sched').append(generateTripSchema(field.route_id, formatAMPM(field.time), field.delay));
        });
    });
}

function generateTripSchema(route_id, eta, delay) {
    var template = `
            <div class="entry">
                <span class="service">` + route_id + `</span>
                <span class="time"><span>ETA</span>` + eta + `</span>
                <span class="data">
                        <span class="late">` + delay + `</span>
                        <span class="details">to City, George St.</span>
                </span>
            </div>`;
    return template;
}

[Expanding our system] Using the SoCo Python Library

Using SoCo to control my home's Sonos system was amazingly simple.


Installing it takes one line:

$ pip install soco


import soco

# Pause everything
zone_list = list(soco.discover())
for zone in zone_list:

# Play Mozart (assuming a Mozart playlist is already queued on each zone)
zone_list = list(soco.discover())
for zone in zone_list:

That's it! All I needed to do was call these functions in the voice Django app whenever a corresponding intent was detected.
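For completeness, here is a hedged sketch of what that "corresponding intent" dispatch can look like on the Django side: a table mapping recognized phrases to handler functions. The phrases and handler bodies are my own illustration, not Zukuno's actual code.

```python
def pause_music():
    # would call zone.pause() on every discovered Sonos zone
    return "I've paused the music."

def play_mozart():
    # would call zone.play() on every zone with Mozart queued up
    return "Playing some Mozart."

# Trigger phrase -> handler
INTENTS = {
    "pause the music": pause_music,
    "play some mozart": play_mozart,
}

def handle_command(text):
    """Run the first intent whose trigger phrase appears in the input."""
    for phrase, handler in INTENTS.items():
        if phrase in text.lower():
            return handler()
    return "Sorry, I didn't understand that."

print(handle_command("Zukuno, pause the music"))
# I've paused the music.
```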

[Expanding our system] Setting up a security camera

One last extension I made to this project was adding a "security camera".

To do this, I used a second Raspberry Pi, a USB camera, a powered USB hub, and a package called Motion.

Installing motion is easy:

$ sudo apt-get install motion

Once it has finished installing, open up /etc/motion.conf:

$ sudo nano /etc/motion.conf

Then hunt down the section referring to localhost connections and set everything to off, so that you can access the stream from outside the host Pi. After that, reboot the Pi, and you should be good to go.
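For reference, the relevant settings look something like this. Option names vary between Motion versions (older releases call these webcam_localhost and control_localhost), so treat this as a sketch and check the comments in your own motion.conf:

```
# Allow machines other than the host Pi to view the stream
stream_localhost off

# Allow remote access to the web control interface too
webcontrol_localhost off
```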

Image 16

[Fig 16.] Accessing our security camera through the browser


Image 17

[Fig 17.] Close to the current version of my app

Thanks for reading all the way to the end!

Zukuno is still very much a work in progress, and what I have presented here is only a skeleton of what it is capable of. Over the next few months, I'd like to gradually add more and more APIs into Zukuno's codebase, so that it can spread its wings and start fully living up to its goal of being the personal assistant for the Internet of Things. I learnt a lot by building what I've done so far, and I hope that this article inspires people to try building their own apps for the IoT. I personally believe that voice-enabled apps like Zukuno have enormous potential in helping disabled people (suffering from quadriplegia, blindness, etc) take control of their surroundings and achieve greater independence.

In conclusion, I hope you enjoyed this article :-) Please leave any comments you may have in the forum below. I'm keen to hear your feedback.


History

  • 7/8/16 - First published


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Written By
Australia
I'm a university student with a passion for stunning UI design and flawless code.

I fell in love with computers when I was 10 years old, and today I am fluent in C#, Python, Java, Javascript, HTML5, and CSS3. I know a little MATLAB, PHP, and SQL.

In my spare time, I build all sorts of cool things, like this Rubik's Cube Robot.

Away from the keyboard, I enjoy quality time with friends and family, as well as reading, painting, playing the piano, teaching chess & karate, and volunteering within my community.

I have learnt that success comes through hard work and dedication, and that giving back to the same people and communities that have helped me is both important and very rewarding.

Follow me on GitHub
