Airport AODB goes NoSQL (Part 2)

airport

Earlier I embarked on the journey of creating an AODB based on a NoSQL data model, moving away from the relational model, and discussed its benefits. As a quick refresher for new readers, here is the elevator-pitch version of what an AODB is:

AODB – Airport Operational Database
An AODB is one of the core IT systems supporting airport ground operations. It integrates with various systems in the heterogeneous airport IT landscape, processing data from airline seasonal flight schedules, flight plan and slot management, ground movement from radar, air movement from ATC, and other sources. It serves as the CDM (Collaborative Decision Making) platform for the various parties and stakeholders forming the airport community: airport operators, airlines, ground-handling agents, authorities, ATC (Air Traffic Control) and others.
It handles seasonal and operational flights by providing planning, real-time and historical data, supports resource management for facilities, equipment and human resources, and feeds information to the public via FIDS and other external links. The diagram below shows an exemplary orchestration of systems with the AODB embedded at the core.

airport_systems_20181030

Now let's have a look at the typical data layout and the relations of flight data entities and attributes. These are the common business entities, and a relational model is the traditional way to design them. We need to apply a rather high level of normalization to avoid redundant data, but the relations (typically 1:N) across the model have an impact on the performance of the DB. This can be counterbalanced by tuning, indexing and more powerful hardware underneath. Building SQL statements with joins across several tables becomes challenging (hard to create) and might cause inefficient reads (full table scans). A NoSQL design, in comparison, takes a document approach: one document (like an index card in the analog world) contains all relevant data (ignoring the redundancy problem for now).
At the end of this exercise we have to ask: is NoSQL the right tool for an AODB? (We will revisit this question later on.)

objects_20181030b

I would like to elaborate on the redundancy problem with one particular case:
A flight is operated with a specific aircraft (registration, tail number) on a certain date. The related information (AC type, seats, owner, lease, etc.) is retrieved from the relational table containing all aircraft in the system; quite the standard scenario. The problem starts when we keep operational data long term (years) for auditing, research or statistical purposes. It is quite common for registrations to be transferred due to the sale or scrapping of an aircraft (find a sample here). Using the relational model with an aircraft registration table that only carries current registrations, we would end up looking at the wrong information for a historical flight that was operated by the previous aircraft with the same registration. A solution would be to introduce the concept of validity for certain entities, which again adds to the complexity.
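
A minimal sketch of such a validity-aware aircraft record (the field names are illustrative assumptions, not taken from any product): the record carries a validity period, so a historical flight can be resolved against the registration data that was valid on its operating date.

{
  "registration": "D-ABCD",
  "actype": "A350",
  "seats": 293,
  "owner": "Sample Leasing Ltd.",
  "validfrom": "2010-05-01",
  "validto": "2016-03-31"
}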

The main problem is not solved: we should not replicate or mimic a relational model with NoSQL. Keeping the data redundant increases the data volume, but we get one document with all relevant information. One use case where the document approach is appealing is creating a final snapshot of the flight in an archive-like repository. The design question we have to answer: what data or details of the operational lifecycle (schedule, planning, operation, post-operation) do we want to keep in the flight "document"?

As an academic exercise, let's get started and create the most basic (primitive) version of a flight document in JSON format, then look at all its weaknesses as a starting point for evolving improved versions of it.

{
  "flight": "AA123D",
  "org": "AKA",
  "des": "FRA",
  "service": "J",
  "actype": "A350",
  "position": "Z19",
  "gate": "A5",
  "baggagebelt": "09",
  "scheddep": "2017-11-23T19:35:00.000Z",
  "schedarr": "2016-11-24T13:15:00.000Z",
  "estimatearr": "2016-11-24T13:55:00.000Z",
  "estimatedep": "2016-11-23T19:39:00.000Z",
  "onblock": "2016-11-23T13:35:00.000Z",
  "offblock": "2016-11-23T19:31:00.000Z",
  "landed": "2016-11-23T13:27:00.000Z",
  "airborne": "2016-11-23T19:39:00.000Z",
  "pax": "128",
  "via": [
    "ABR",
    "ACL"
  ],
  "codeshare": [
    "LH123",
    "TG123",
    "AF123"
  ]
}

What is good about this entry-level model? Not much, other than highlighting the benefit of having all the info in one document.

Let's look at the problems, or at least the highlights. Quite a number of attributes are missing (e.g. the registration), but here are the main flaws (a first improved sketch follows the list):

  • There is no clear concept of the flight as an entity. Is it a segment or a complete journey?
  • No naming convention; more or less random abbreviations, e.g. for the timings.
  • No proper key identifier. No separation of airline code, flight number and suffix, and the scheduled departure date (as part of the key) is missing.
  • Resources should be an array of objects, since multiple resources with different timings might be in use.
    The same applies to any pax or cargo/load data.
  • Representing VIA and codeshare information like this might be good enough for a FIDS, but for a mature model we need to break the whole entity down into segments.
  • No links providing dependencies to other segments, codeshares, arrivals or departures.
  • Milestones (timings) should be an array too.
  • No audit information. (Might not be in the scope of our model, though.)
  • No unique (technical) identifier beyond the flight keys.
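
To make this concrete, here is a first improved sketch addressing some of the flaws above (the field names and structure are my illustrative assumptions, to be refined in the upcoming posts): a technical ID, a decomposed flight key, milestones and resources as arrays of objects, and links to related segments and codeshares.

{
  "id": "c0ffee00-0000-4000-8000-000000000001",
  "flightkey": {
    "airline": "AA",
    "number": "123",
    "suffix": "D",
    "scheddate": "2016-11-23"
  },
  "org": "AKA",
  "des": "FRA",
  "milestones": [
    { "type": "SCHED_DEP", "time": "2016-11-23T19:35:00.000Z" },
    { "type": "OFFBLOCK", "time": "2016-11-23T19:31:00.000Z" },
    { "type": "AIRBORNE", "time": "2016-11-23T19:39:00.000Z" }
  ],
  "resources": [
    { "type": "POSITION", "name": "Z19", "from": "2016-11-23T18:30:00.000Z", "to": "2016-11-23T19:45:00.000Z" },
    { "type": "GATE", "name": "A5", "from": "2016-11-23T18:50:00.000Z", "to": "2016-11-23T19:30:00.000Z" }
  ],
  "links": [
    { "rel": "NEXT_SEGMENT", "id": "c0ffee00-0000-4000-8000-000000000002" },
    { "rel": "CODESHARE", "flight": "LH123" }
  ]
}

Note this is still a sketch: audit data, load data and the full segment breakdown are deliberately left out for now.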

We will elaborate and fine-tune in upcoming posts. Stay tuned.

Disclaimer: This discussion, data model and application are for study purposes only. They do not reflect or replicate any existing commercial product.

Image: Creative Commons, DeGolyer Library, Southern Methodist University on The Commons, “DC-3 Aircraft at Houston Municipal Airport, Eastern Airlines”

Airports – Ready for the Cloud?

Unlike airlines, which are used to distributed operations and to having systems like a reservation system hosted centrally at their hub (originating in the times of mainframe servers, when this crucial part of their operations was only accessible via a remote connection), airports still tend to follow a much more traditional approach. Airport operations are local and not geographically distributed like airlines'. Over decades they established local on-premise data centres and created a mindset that full control is only available with the servers and IT services right in their basement. Along come big IT departments with teams of server, network and DB admins and support.

St. Albert at Dublin Airport, circa 1950 (CC by National Library of Ireland)

This paradigm is slowly changing, as airports need to cut costs and operate more efficiently. In parallel we can observe an attitude change at management level: becoming more open to solutions outside their physical control, they buy into the concept of SaaS, consuming a service on a subscription basis with a well-defined SLA and availability. This shift started with less crucial back-office systems, like email servers and document repositories, and is now moving towards more operation-critical systems. Slow adopters, or companies restricted by policies or governance issues, start moving towards a private cloud, eventually cutting down on operating costs. Airports have started to understand that internet availability in the year 2015 has reached the commodity level of water and electricity, and they are beginning to adopt even public-cloud-hosted services.
Zero-tolerance systems like ATC, or something less life-critical like a FIDS, will certainly remain local solutions, but AODBs are moving into the cloud. All the vendors have jumped on the bandwagon and offer some kind of cloud solution, be it a private cloud offering (with the vendor) or even a deployment to a public cloud. The potential in this approach is the opportunity to offer an AODB solution at a fraction of the price of traditional AODB projects. Deployed to a public cloud, with no local requirements other than an internet connection and a browser, a small airport can start using an AODB without any upfront investment, maybe at a price as low as a 3,000 Euro monthly subscription. This assumes a smaller airport (less than 1 million PAX/year, or something like 25 to 50 commercial flights a day plus GA) operating with simple requirements (flight plan import and management, operational flight tracking, billing, Type B and AFTN message interface).

To answer the question: yes, they are ready.
But it depends on the IT strategy of medium to big airports, or on the restricted budget and needs of smaller airports.

Let's see who is going to serve the long tail of the airport market!

Running EC2 spot instances

or 'Ultimately save more money with AWS'

I use EC2 instances for test, development, demo and also for deployment to production. Amazon offers different types of instances, ranging from a micro instance (613 MB RAM and 2 CPU units) to a full-fledged Cluster Compute Quadruple Extra Large instance (60 GB RAM and 33 CPU units). Each comes at a different price, paid per hour of usage, and is available anytime.

Prices for on-demand Linux instances (Singapore):

  • Micro instance: U$ 0.02 per hour
  • Medium instance: U$ 0.34 per hour
  • High Mem/CPU instance: U$ 2.024 per hour

On top of this, there are 3 different categories of instances (in contractual terms).

Some price comparisons for the m1.large instance we use for testing (7.5 GB RAM and 4 CPU units):

  • On Demand (any time, without any contractual obligations; we are currently using these)
    U$ 0.340 per hour > 1 month U$ 244.80 (full-time, 24h)
  • Reserved Instance (1-year term, one-time payment of U$ 276.00)
    U$ 0.196 per hour > 1 month U$ 141.12 (3 months: U$ 699.36 vs. on-demand U$ 734.40; 12 months: U$ 1969.44 vs. on-demand U$ 2937.60 = ~30% savings)
  • Spot Instance (depends on availability; you bid a price, and if the spot price exceeds your limit your instance shuts down)
    U$ 0.04 per hour (as of December 5th, 2012) > 1 month U$ 28.80

At almost 10% of the on-demand price, the spot instance is extremely attractive, and I am using it as a test server.
It is not suitable for production or demo purposes, though.
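
For the record, a hedged sketch of how such a bid could look with the EC2 API tools of that era (the AMI ID and keypair are placeholders, and the exact flags should be double-checked against the ec2-api-tools reference):

# Bid U$ 0.05 per hour for one m1.large test server in Singapore.
# If the spot price rises above the bid, the instance is terminated.
ec2-request-spot-instances ami-xxxxxxxx -p 0.05 -n 1 --type one-time \
  -t m1.large -k my-keypair --region ap-southeast-1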

The reserved instance starts to break even after 3 months of full-time usage!

In order not to pay for instances running idle (at night, on weekends), they auto-shutdown, and the user can start them in a self-provisioning fashion (for test, demo or training).
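
A minimal sketch of such a schedule (the instance ID is a placeholder; this assumes the ec2-api-tools are installed with credentials configured, and note that stopping, as opposed to terminating, only works for EBS-backed instances):

# /etc/cron.d/ec2-nightstop on a controller host:
# stop the idle test server every weekday evening at 20:00.
0 20 * * 1-5  ec2user  ec2-stop-instances i-xxxxxxxx --region ap-southeast-1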

Interestingly enough, the price fluctuation differs a lot between the AWS regions. Let's look at the m1.large instance type in the Ireland versus the Singapore datacentre.

AWS Ireland

AWS Singapore

Obviously Singapore customers are not into this bidding concept: the price remains permanently at 4 cents, while for Ireland it jumps up to several dollars!


The cloud is not infinite

Amazon AWS seems to be very popular at the moment: I could not start my instance due to insufficient capacity!

We currently do not have sufficient m1.small capacity in the Availability Zone you requested (ap-southeast-1a). Our system will be working on provisioning additional capacity. You can currently get m1.small capacity by not specifying an Availability Zone in your request or choosing ap-southeast-1b.

Finite Cloud
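
The workaround is spelled out in the message itself; as a hedged sketch with the EC2 API tools (AMI ID and keypair are placeholders):

# Either request the suggested sibling zone explicitly...
ec2-run-instances ami-xxxxxxxx -t m1.small -k my-keypair -z ap-southeast-1b
# ...or omit -z entirely and let AWS pick a zone with free capacity.
ec2-run-instances ami-xxxxxxxx -t m1.small -k my-keypair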

Creating an Ubuntu 10.04 AMI using a local VMWare

I am using Amazon EC2 and S3 more often now, and our architecture, development and deployment partially rely on Amazon. For example, we save artifacts from our build server on S3 and deploy the application for trial and testing in the EC2 cloud. The level of control you have over your instances and buckets is just great, and new features (like VPC and SNS) are added frequently. The API allows me to remote-control our infrastructure without using the browser.

No one can say you won't find a fitting Linux distribution on EC2. There seem to be myriads of AMIs, and almost all popular Linux distros are available for you to get started. But being a control freak, I prefer a slightly different approach: we create a virtual appliance in-house (our product runs out of the box) and use the appliance for local development and tests. I maintain reference appliances, knowing exactly which kernel and which packages are running. For large-scale deployment it is essential that all instances are identical. Unfortunately, there is no straightforward way to "upload" your vmdk to EC2 (or to the few other cloud/IaaS providers that allow uploads) and expect it to run, due to a couple of technical facts in the background that are usually transparent to a cloud user (e.g. Xen-specific kernels, etc.).

Collecting inputs and tutorials from various sources, I tried to create my local Ubuntu 10.04 LTS server on VMWare Workstation (Player) and get it running as an EC2 instance.

I summarize the process here.

Warning: I still face a major issue with the instance (created from the uploaded AMI): it can be started, but it is not possible to connect via SSH. I will update this blog as soon as I (hopefully with your help) find the solution.

Prerequisites:

  • VMWare Workstation/Player or VirtualBox
    It does not matter which tool you use, because during the process we create the bundle "inside" the running server. We are not converting a vmdk file or similar (which is also possible).
    For this tutorial I assume you have it downloaded and installed. (There is a 30-day trial version of VMWare Workstation available with some more features than the Player.)
  • Ubuntu 10.04 Server LTS (or any other version; 8.04 or later recommended)
    You have installed the basic server as a virtual machine and can log in as root. The installation process is simple enough and not covered here.
  • Amazon AWS account
    You have an active AWS account with access to S3 and EC2.

Tutorial Part A (getting keys and certificates from Amazon AWS)

  • Log in to your AWS account and navigate to Account | Security Credentials
  • Take note of your Access Key and Secret Access Key.
    Take note of your Account Number (at the top right, under your name).
    (One keypair should be created by default when you create an AWS account.)

    AWS Access Keys

  • Create and download the X.509 certificates
    Please read the warning: the private key can only be created and downloaded once! Download both files to your desktop.
    1 certificate file: cert-{some_random_key}.pem
    1 private key file: pk-{some_random_key}.pem

    X.509 Certificates

  • Create a bucket in S3
    Please note the bucket name must be unique worldwide. You can use something like “mycompanyname.images” or similar.
    By default the bucket is private. 

    Create S3 bucket

Tutorial Part B (preparing the Ubuntu server)
I assume you have already installed a virtual machine with Ubuntu Server 10.04 (without any extra packages). All steps are performed as the root user (via sudo, or change to root with sudo -i).

  • Add a drive to the instance

    Virtual Machine

    Edit virtual machine settings | Add.. | Hard Disk | Create new virtual disk | SCSI | 10GB | Store as single file

    Virtual Machine Settings

  • Power on the virtual machine

    Virtual Machine

  • Mount the additional hard disk
    mkdir /disk2
    mkfs -t ext2 /dev/sdb
    mount /dev/sdb /disk2

    root@ubuntu:~# mkdir /disk2
    root@ubuntu:~# mkfs -t ext2 /dev/sdb
    mke2fs 1.41.11 (14-Mar-2010)
    /dev/sdb is entire device, not just one partition!
    Proceed anyway? (y,n) y
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    655360 inodes, 2621440 blocks
    131072 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2684354560
    80 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
     32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Writing inode tables: done
    Writing superblocks and filesystem accounting information: done
    
    This filesystem will be automatically checked every 25 mounts or
    180 days, whichever comes first.  Use tune2fs -c or -i to override.
    root@ubuntu:~# mount /dev/sdb /disk2
    root@ubuntu:~# cd /disk2
    root@ubuntu:/disk2# ls
    lost+found
    root@ubuntu:/disk2#
    
  • Install an SSH server
    Otherwise we can't access the instance later. It is also easier to work in an SSH session connected to our local instance.
    apt-get install openssh-server
  • Install FTP Server
    We need to transfer files to our instance.
    apt-get install vsftpd
    Remember to configure /etc/vsftpd.conf
    write_enable=YES
    local_enable=YES
    and restart vsftpd
    service vsftpd restart
  • Disable the firewall
    ufw disable
    We configure the firewall with the Amazon console instead.
  • Install the EC2 AMI Tools
    apt-get install ec2-ami-tools
  • Transfer the 2 key files to /tmp
    Use FTP from your local machine/desktop. Files in /tmp will not be bundled into your AMI later.
  • Delete network info
    rm /etc/udev/rules.d/70-persistent-net.rules
  • Install the EC2 kernel
    Make sure the universe entry in /etc/apt/sources.list is enabled.
    apt-get update
    apt-get install linux-image-ec2
    Do not restart!

    root@ubuntu:~# apt-get install linux-image-ec2
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
     linux-image-2.6.32-309-ec2
    Suggested packages:
     fdutils linux-ec2-doc-2.6.32 linux-ec2-source-2.6.32
    The following NEW packages will be installed:
     linux-image-2.6.32-309-ec2 linux-image-ec2
    0 upgraded, 2 newly installed, 0 to remove and 30 not upgraded.
    Need to get 19.2MB of archives.
    After this operation, 57.6MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Get:1 http://sg.archive.ubuntu.com/ubuntu/ lucid-updates/main linux-image-2.6.32-309-ec2 2.6.32-309.18 [19.2MB]
    Get:2 http://sg.archive.ubuntu.com/ubuntu/ lucid-updates/main linux-image-ec2 2.6.32.309.10 [3,276B]
    Fetched 19.2MB in 51s (375kB/s)
    Selecting previously deselected package linux-image-2.6.32-309-ec2.
    (Reading database ... 28138 files and directories currently installed.)
    Unpacking linux-image-2.6.32-309-ec2 (from .../linux-image-2.6.32-309-ec2_2.6.32-309.18_i386.deb) ...
    Done.
    Selecting previously deselected package linux-image-ec2.
    Unpacking linux-image-ec2 (from .../linux-image-ec2_2.6.32.309.10_i386.deb) ...
    Setting up linux-image-2.6.32-309-ec2 (2.6.32-309.18) ...
    Running depmod.
    update-initramfs: Generating /boot/initrd.img-2.6.32-309-ec2
    Running postinst hook script /usr/sbin/update-grub.
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-2.6.32-309-ec2
    Found initrd image: /boot/initrd.img-2.6.32-309-ec2
    Found linux image: /boot/vmlinuz-2.6.32-24-generic
    Found initrd image: /boot/initrd.img-2.6.32-24-generic
    Found linux image: /boot/vmlinuz-2.6.32-21-generic
    Found initrd image: /boot/initrd.img-2.6.32-21-generic
    Found memtest86+ image: /boot/memtest86+.bin
    done
    
    Setting up linux-image-ec2 (2.6.32.309.10) ...
    

    Resulting boot directory

    /boot

    Do not reboot: the new default kernel is now the EC2 kernel, and the virtual machine will NOT boot anymore!

  • Adjust the default kernel in GRUB
    Edit your /boot/grub/grub.cfg. (This is not good practice, because any update-grub trashes your manual changes! A cleaner alternative via /etc/default/grub is sketched at the end of this tutorial.)

    ...
    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
     load_env
    fi
    set default="2"
    if [ ${prev_saved_entry} ]; then
     set saved_entry=${prev_saved_entry}
     save_env saved_entry
    ...
    ### BEGIN /etc/grub.d/10_linux ###
    menuentry 'Ubuntu, with Linux 2.6.32-309-ec2' --class ubuntu --class gnu-linux --class gnu --class os {
     recordfail
     insmod ext2
     set root='(hd0,1)'
     search --no-floppy --fs-uuid --set ab6ee13e-e9c8-4654-aad1-a94c69906e11
     linux    /boot/vmlinuz-2.6.32-309-ec2 root=UUID=ab6ee13e-e9c8-4654-aad1-a94c69906e11 ro find_preseed=/preseed.cfg noprompt  quiet splash
     initrd    /boot/initrd.img-2.6.32-309-ec2
    }
    menuentry 'Ubuntu, with Linux 2.6.32-309-ec2 (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
     recordfail
     insmod ext2
     set root='(hd0,1)'
     search --no-floppy --fs-uuid --set ab6ee13e-e9c8-4654-aad1-a94c69906e11
     echo    'Loading Linux 2.6.32-309-ec2 ...'
     linux    /boot/vmlinuz-2.6.32-309-ec2 root=UUID=ab6ee13e-e9c8-4654-aad1-a94c69906e11 ro single find_preseed=/preseed.cfg noprompt
     echo    'Loading initial ramdisk ...'
     initrd    /boot/initrd.img-2.6.32-309-ec2
    }
    menuentry 'Ubuntu, with Linux 2.6.32-24-generic' --class ubuntu --class gnu-linux --class gnu --class os {
     recordfail
     insmod ext2
     set root='(hd0,1)'
     search --no-floppy --fs-uuid --set ab6ee13e-e9c8-4654-aad1-a94c69906e11
     linux    /boot/vmlinuz-2.6.32-24-generic root=UUID=ab6ee13e-e9c8-4654-aad1-a94c69906e11 ro find_preseed=/preseed.cfg noprompt  quiet splash
     initrd    /boot/initrd.img-2.6.32-24-generic
    }
    ...
    

    Change the line set default="0" to a different kernel, in this case to "2" (count the menu entries as 0, 1, 2).
    Now you can reboot your virtual machine, because it will boot the previous kernel (the one you configured in grub.cfg).
    If you reboot, please reset your network info again (rm /etc/udev/rules.d/70-persistent-net.rules).

    BEFORE you create the bundle you must set the default back to "0"! Otherwise the EC2 instance will not start up and will terminate immediately.
    (Afterwards you should set it back to "2" to continue using your local virtual machine.)

    Check the kernel:
    user@ubuntu:~$ uname -a
    Linux ubuntu 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:17:33 UTC 2010 i686 GNU/Linux

  • Find a kernel
    You can choose a kernel to run on EC2, but the kernel AKI ID is region-dependent.
    The Cloud Market is very useful for finding the right kernel:
    Cloud Market
  • Create a bundle to upload
    ec2-bundle-vol -c /tmp/cert-xxxxxxxxx.pem -k /tmp/pk-xxxxxxxxx.pem --user {account_number} -d /disk2 -r i386 --kernel aki-{kernel_id} --no-inherit
    Use the account number that you retrieved earlier from the AWS console and the 2 key files that you transferred to the virtual machine.
    Use the kernel ID that you looked up at the Cloud Market.
    Depending on your hardware, this process can easily take 20 minutes or longer (my reference machine: an Intel Core 2 Duo 8600)!

    ...
    root@ubuntu:/disk2# ec2-bundle-vol -c /tmp/cert-xxxxxx.pem -k /tmp/pk-xxxxxx.pem --user xxxxxx -d /disk2 -r i386 --kernel aki-70067822 --no-inherit
    Copying / into the image file /disk2/image...
    Excluding:
     /sys/kernel/debug
     /sys/kernel/security
     /sys
     /
     /proc
     /sys/fs/fuse/connections
     /dev/pts
     /dev
     /dev
     /media
     /mnt
     /proc
     /sys
     /etc/udev/rules.d/70-persistent-net.rules
     /etc/udev/rules.d/z25_persistent-net.rules
     /disk2/image
     /mnt/img-mnt
    1+0 records in
    1+0 records out
    1048576 bytes (1.0 MB) copied, 0.0069198 s, 152 MB/s
    mke2fs 1.41.11 (14-Mar-2010)
    warning: Unable to get device geometry for /disk2/image
    Bundling image file...
    Splitting /disk2/image.tar.gz.enc...
    Created image.part.00
    Created image.part.01
    Created image.part.02
    Created image.part.03
    ...
    Created image.part.52
    Created image.part.53
    Created image.part.54
    Created image.part.55
    Created image.part.56
    Created image.part.57
    Generating digests for each part...
    Digests generated.
    Creating bundle manifest...
    ec2-bundle-vol complete.
    
  • Upload the bundle
    ec2-upload-bundle -b my.ubuntu.image -m /disk2/image.manifest.xml -a {your access key} -s {your secret access key} --part 8
    The --part parameter is optional, in case your upload fails halfway; you can resume from a given part.
    The upload time depends very much on your uplink speed!

    Uploaded image.part.55
    Uploaded image.part.56
    Uploaded image.part.57
    Uploading manifest …
    Uploaded manifest.
    Bundle upload completed.
  • Register AMI
    Go to your AWS console and open the S3 folder to see the uploaded files 

    S3 bucket with uploaded image

    Go to the EC2 tab and select AMIs.
    Register a new AMI.
    Enter the path as {your bucket name}/image.manifest.xml

    Register new AMI

  • Create an instance and start it up

    New instance

    You should use a security group with port 22 open (and ICMP if you want to ping).

    Security Group
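
As mentioned in the GRUB step above, a cleaner way to switch the default kernel is via /etc/default/grub, so update-grub does not trash your changes. A minimal sketch, assuming the stock GRUB 2 setup of Ubuntu 10.04 (the entry numbers must match your own menu):

# Boot the generic kernel locally (menu entry 2 in our grub.cfg):
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=2/' /etc/default/grub
update-grub   # regenerates /boot/grub/grub.cfg

# Before bundling, switch back to the ec2 kernel (menu entry 0):
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' /etc/default/grub
update-grub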

Problem Solving
As I stated at the beginning, none of my instances really started up successfully, due to various problems.

  • After starting up the instance, I can ping it, but any attempt to SSH in fails with connection refused.
    user@wanaka-ubuntu:~/Desktop/amazon$ ping ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com
    PING ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com (175.41.xxx.xxx) 56(84) bytes of data.
    64 bytes from ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com (175.41.xxx.xxx): icmp_req=2 ttl=51 time=103 ms
    64 bytes from ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com (175.41.xxx.xxx): icmp_req=3 ttl=51 time=39.7 ms
    ^C
    --- ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com ping statistics ---
    7 packets transmitted, 6 received, 14% packet loss, time 6015ms
    rtt min/avg/max/mdev = 15.663/53.436/103.023/32.123 ms
    user@wanaka-ubuntu:~/Desktop/amazon$ ssh -i xxxx.pem root@ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com
    ssh: connect to host ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com port 22: Connection refused
    

    Instance logfile: see here. (A way to pull it from the command line is sketched after this list.)

  • The instance is created with the root device as instance store, not EBS. I would prefer EBS!
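
To retrieve that instance logfile without the browser, a hedged sketch with the ec2-api-tools (the instance ID is a placeholder): ec2-get-console-output prints the boot messages the instance wrote to its virtual console, which is usually the only diagnostic you get when SSH is refused.

# Dump the boot log of the broken instance:
ec2-get-console-output i-xxxxxxxx --region ap-southeast-1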

Remarks and Outlook

  • I believe cloud computing, despite not being a completely new concept (remember the dumb amber screens of the 70s and 80s), is still in its infancy today. It is already very powerful, but we can expect more fine-tuning in the near future that will allow us to scale hardware on the fly for a running instance.
  • We should also expect more standards that allow the seamless exchange of virtual instances between your local deployment and the cloud, or between cloud providers.