Increase size and type of AWS EBS volume

I was offline for quite a while because I was moving from one continent to another, but regular posts should be rolling in again now.

I am running a couple of instances in pre-production mode and changed the DB instance's volume (the volume with the DB files) from a standard EBS volume to a Provisioned IOPS volume. I could not identify a reasonable increase in performance. Maybe it is a misconception that IOPS volumes boost performance; they rather provide a defined and consistent random-access I/O throughput. I must admit I did not use a value higher than 1000.
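If you want to sanity-check the random-access throughput yourself, a quick benchmark along these lines can help (a sketch, assuming the fio package is installed and the hypothetical path /data sits on the volume under test):

# random 4k reads against a 1 GB test file; the reported IOPS should
# hover around the provisioned value if the volume delivers as configured
sudo apt-get install fio
fio --name=randread --directory=/data --rw=randread --bs=4k \
    --size=1G --direct=1 --ioengine=libaio --runtime=60 --time_based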

Billing IOPS

Some recommended reading:

I decided to return to a standard EBS volume for my database, as its performance did not benefit from the IOPS type (the DB is not overly busy either).
You can't change the type or size of an EBS volume on the fly.

Here are the steps to achieve this (a minimal sketch of the workflow follows):
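A sketch of the usual snapshot-based workflow, shown here with the AWS CLI and placeholder IDs (the original steps used the web console):

# 1. stop the instance (or at least quiesce the DB), then snapshot the old volume
aws ec2 create-snapshot --volume-id vol-OLD --description "pre-resize snapshot"

# 2. create a new volume from the snapshot with the desired size and type,
#    in the same availability zone as the instance
aws ec2 create-volume --snapshot-id snap-NEW --size 100 \
    --availability-zone us-east-1a --volume-type standard

# 3. swap the volumes and grow the filesystem to the new size
aws ec2 detach-volume --volume-id vol-OLD
aws ec2 attach-volume --volume-id vol-NEW --instance-id i-INSTANCE --device /dev/sdf
# then, on the instance:
sudo resize2fs /dev/xvdf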

Enforce password for Ubuntu user on EC2 instances

Using Linux (Ubuntu) instances on Amazon EC2 is quite a safe thing to do, at least measured by the security provided by the platform (security groups, ACLs, physical security, …). I recommend reading their security site here. At the end of the day the server is only as secure as you configure it: if you choose to open all ports and run services with their default configurations and password settings, Amazon can't help you.

When connecting to an Ubuntu server with ssh you need to provide the key file (somekeyfile.pem) that you can download when creating the key pair.

Key file

This 2048-bit key is required to log in as the regular ubuntu user. What I dislike is the fact that this user can sudo everything, so once someone manages to get into your user account, he has root access too. I recommend setting a password for the ubuntu user and changing the sudoers configuration.

Change the password for user ubuntu
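Setting the password is a single command:

sudo passwd ubuntu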

Open the sudoers include file

sudo vi /etc/sudoers.d/90-cloudimg-ubuntu or sudo vi /etc/sudoers

Change the last line from

ubuntu ALL=(ALL) NOPASSWD:ALL

to

ubuntu ALL=(ALL) ALL
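A safer way to edit sudoers files is visudo, which syntax-checks the file before saving (same file as above):

sudo visudo -f /etc/sudoers.d/90-cloudimg-ubuntu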

Glassfish and https: running secure applications

By default Glassfish listens for http on port 8080 and https on port 8181.
It is better to listen on the default ports, 80 for http and 443 for https; usually you don't want the user to enter port numbers as part of the URL.

Even though the Glassfish Admin Console allows you to change the ports (Configurations/Server Config/Network Config/Network Listener), server OSes such as Ubuntu do not allow non-root users (you should run Glassfish as a separate user!) to bind to ports below 1024. We can achieve the same effect with port rerouting via the iptables command (under Ubuntu):


# accept http traffic on port 80 and reroute it to Glassfish's 8080
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
# accept https traffic on port 443 and reroute it to Glassfish's 8181
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8181
# persist the rules and reload them
iptables-save -c > /etc/iptables.rules
iptables-restore < /etc/iptables.rules

vi /etc/network/if-pre-up.d/iptablesload
#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0
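Make the script executable, otherwise it is skipped at boot:

chmod +x /etc/network/if-pre-up.d/iptablesload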

Additionally you can get a proper SSL certificate to stop annoying the user with an invalid-certificate warning. See the previous tutorial here.

SSL Error (Chrome)

If you operate an enterprise application whose URL is known to its users (unlike a regular website, whose portal should be reachable over regular http), I would disable regular http completely.

Disable http

Copy EC2 instance to another region

Is it finally possible? While the long-awaited AMI import tool is only available for Windows, it is rather a big hassle to transfer any other OS manually (see this; my last attempt was in 2010).

Today Amazon announced the EBS Snapshot Copy feature (across regions). The intention is certainly to allow easy migration of data to another region, as you can copy the snapshot, create a volume and attach it to an instance. I was curious whether I could migrate my Ubuntu instance to another region, and it worked. You can use both the command line and the AWS web console.
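For the command-line route, a minimal sketch with the AWS CLI and placeholder IDs (the destination region is the one the command targets):

# copy a snapshot from us-east-1 into ap-southeast-1
aws ec2 copy-snapshot --region ap-southeast-1 \
    --source-region us-east-1 --source-snapshot-id snap-SOURCE \
    --description "cross-region copy"

Once the copy completes, you can create a volume from it in the new region and attach it to an instance there.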

Amazon S3 plugin for Jenkins CI again

About once a year I revisit (link) this topic (usually when the plugin causes trouble). This time I get this signature error:

AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID:..

The good news first:
The S3 plugin became mainstream; you can install it from the plugin page under Jenkins Administration | Plugin Manager. You no longer need to build the plugin yourself and can skip the rest of this entry.

S3 Plugin

The long version:
It seems the error is caused by a '+' sign in the access key, which trips up the encoding function used (see issue). The latest build (Sep 2012) should fix this problem.

If you want to build it yourself, you need to get the source code from git and build the plugin file; beware, as it requires Maven 3 now. The instructions below apply to Ubuntu (a sketch follows).
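A minimal build sketch, assuming the sources still live in the jenkinsci GitHub organization:

# Maven 3 and git (package names may differ on older Ubuntu releases)
sudo apt-get install maven git
# repository location is an assumption
git clone https://github.com/jenkinsci/s3-plugin.git
cd s3-plugin
# produces target/s3.hpi, which you upload via Jenkins' plugin manager
mvn package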

Upload plugin

Running FTP server on EC2 on demand

or 'How to cut (even more) cost while running EC2 instances'

I am running an FTP server on an EC2 instance (a micro, if you want), but we don't use it all the time. The server runs on demand only and shuts down automatically every night. The challenge: on every new start of the instance you get a new public IP, which breaks the passive IP address configuration in vsftpd.conf.

  • How to install and run vsftp on an EC2 Ubuntu instance.
  • How to switch off an Ubuntu EC2 instance? Add this to the crontab:
    log in as root
    crontab -e
    add: 0 12 * * * /sbin/shutdown -h now
    
  • How to update vsftpd.conf on startup? (run this at boot; see the sketch below)
    # fetch the current public IP from the EC2 metadata service
    pubip=`curl http://169.254.169.254/latest/meta-data/public-ipv4`

    # rewrite the pasv_address entry and swap the config in place
    sed "s/pasv_address=.*/pasv_address=$pubip/" /etc/vsftpd.conf > /etc/vsftpdTEMP.conf
    rm /etc/vsftpd.conf
    mv /etc/vsftpdTEMP.conf /etc/vsftpd.conf
    service vsftpd restart
    

    curl http://169.254.169.254/latest/meta-data/public-ipv4 gives you the public IP address of your instance.
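    To run the update automatically at boot, one option is to save the commands above as a script and call it from /etc/rc.local (the script path is hypothetical):

    # /etc/rc.local -- runs as root at the end of boot; must end with exit 0
    /usr/local/bin/update-vsftpd-ip.sh
    exit 0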

Remaining challenge: if you don't want to spend money on an elastic (permanent) IP, which costs you while the instance is NOT running, you need a DNS service like dyndns.com and must update the dyndns entry on every start too. This can easily be done by a shell script using ddclient or Ubuntu's dyndns command (a sketch below).
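A minimal ddclient sketch for the dyndns update, assuming a dyndns.com account (credentials and hostname are placeholders):

# /etc/ddclient.conf
protocol=dyndns2
use=web, web=checkip.dyndns.com
login=myuser
password=mypassword
myhost.dyndns.org

Then force a one-shot update, e.g. from the same startup script:

ddclient -daemon=0 -force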

Touchscreen Notebooks using Ubuntu

I purchased two notebooks with swivel touch screens last weekend. Both came with Windows 7, which I Clonezilla'd, wiped and replaced with Ubuntu immediately. Neither is an iPad killer whatsoever, but they suit my requirements: you can touch them, you can turn them (read books), they come with a keyboard and I can load almost any application, even do some development work.

  • Asus EEE T101MT
    1.66 GHz Atom N450 CPU with hyperthreading
    10.1 inch screen, multi-touch resistive display with 1024 x 600 pixels resolution
    2 GB RAM and 320 GB HDD at 5400 RPM
    WiFi 802.11n
    4 cell 2400 mAh and 35 Wh battery pack, removable
    0.3 megapixel webcam
    3 USB ports, VGA output, Ethernet, Kensington Lock, Mic and Headphones jack and SD Card reader

    Installing Ubuntu: A breeze with 10.10 (Maverick). All info here.

  • Acer Aspire 1825PTZ
    Intel Pentium processor SU4100 (1.3 GHz, 800 MHz FSB)
    2GB Memory
    Graphics Controller: Intel GMA 4500MHD
    11.6″ Acer CineCrystal LED LCD with (capacitive) Multi Touch (1366×768)
    320GB HD
    0.3 megapixel webcam
    3 USB ports, VGA output, HDMI Port, Ethernet, Kensington Lock, Mic and Headphones jack and SD/XD/MS Card reader

    Installing Ubuntu: the basic installation is straightforward, but it requires some hacking to get the touchscreen and screen auto-rotate working properly. You find all answers in this thread. And some more tricks here.

How to run a ftp server on an Amazon Micro Instance

A micro instance, which runs with Linux at US$0.025 per hour (around US$18 a month), is just right to operate an FTP server. Add the data transfer, which costs you US$0.10 per GB in and around US$0.15 per GB out.
There is only a minor challenge to get started: the elastic IP assignment, which makes it impossible to connect to the ftp server in passive mode out of the box.
This short tutorial describes how to get started and also covers the use of virtual users (we skip the basics, assuming you are familiar with creating instances, handling key files, etc.).

I advise creating a separate volume in EC2 if you plan to ftp large amounts of files, or eventually opt for a bigger instance.

How to add a volume:

  • Create a new volume, specifying a suitable size (you pay for the size you allocate, not for the size you use inside the volume!)
  • Attach it to the instance (define a device, e.g. /dev/sdf)
  • Log in to your instance and format the volume (mkfs -t ext2 /dev/sdf)
  • Create a mountpoint (mkdir /mnt/ftpvolume)
  • Mount the volume (mount /dev/sdf /mnt/ftpvolume)
    Be aware: you need to mount the volume every time you restart the instance! There are ways to do it automatically, but this is not straightforward in EC2 (see the sketch after this list).
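    One option is an /etc/fstab entry; the nofail option keeps the instance booting even if the volume is not attached (a sketch, assuming the device stays /dev/sdf):

    # device      mountpoint      fs    options          dump pass
    /dev/sdf      /mnt/ftpvolume  ext2  defaults,nofail  0    2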

How to install and configure the ftp service:

  • Look for an Ubuntu i386 server AMI in your preferred region and create a new instance.
  • Use a security group with an open port 21 and the passive ports (e.g. 62222 to 63333 as configured below).
  • Create an elastic IP and attach it to the new instance.
  • Log in to the instance (using ssh and your private key).
  • Add the ftp server vsftpd package (sudo apt-get install vsftpd)
  • Add the libpam package which we need to maintain the virtual users (sudo apt-get install libpam-pwdfile)
  • Add the mini-httpd package which contains the htpasswd command we need to enter the passwords (apt-get install mini-httpd)
  • Configure PAM (vi /etc/pam.d/vsftpd)
    Remove other content in this file.

    auth required pam_pwdfile.so pwdfile /etc/ftpd.passwd
    account required pam_permit.so
    
  • Configure vsftpd (vi /etc/vsftpd.conf)
    This shows only the important changes and new entries

    ...
    local_enable=YES
    ...
    write_enable=YES
    ...
    local_umask=022
    ...
    chroot_local_user=YES
    ...
    virtual_use_local_privs=YES
    guest_enable=YES
    user_sub_token=$USER
    local_root=/mnt/ftpvolume/ftphome/$USER {or whatever your ftp root folder is going to be}
    hide_ids=YES
    pasv_min_port=62222
    pasv_max_port=63333
    pasv_address={your Elastic IP}
    
  • Restart vsftpd (service vsftpd restart)
  • Create the root directory for the ftp service as defined in the config file
  • Create user and user directory
    For the first user you add
    htpasswd -c /etc/ftpd.passwd Username
    subsequent users
    htpasswd /etc/ftpd.passwd Username
    mkdir /mnt/ftpvolume/ftphome/username
    chmod 777 /mnt/ftpvolume/ftphome/username
  • Create a superuser ftpadmin with access to all user directories
    Instead of creating its own folder, create a link:
    ln -s /mnt/ftpvolume/ftphome ftpadmin
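    To verify the passive setup from a client machine, force passive mode explicitly (placeholder address):

    # -p forces passive mode (the default in most modern clients anyway)
    ftp -p {your Elastic IP}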

Remarks: This might not be best practice, but
a) for the EC2 instance you open only port 21 (plus the passive port range),
b) vsftpd is a solid choice for secure ftp, and
c) each virtual user is locked into his home folder.

Feel free to add comments in regards of security.

Going Flex 4 with Linux

I am evaluating a few RIA options to create front-ends that go beyond Swing applications and can live outside a browser while still being cross-platform. I focused on JavaFX for a while, but Oracle changed the agenda: they abandoned JavaFX Script and are working on a new roadmap (more info here). Though it looks promising, we need to wait until 2011 to see release 2.0.
Looking at alternatives, I only see Flash/Flex/AIR (all by Adobe), with Flex as the product that can be "enterprise'd", meaning it can run with a JEE backend. Unfortunately Adobe forgot to keep Linux users on board with their latest version, Flash Builder 4. The commercial product only runs on Mac and Windows, forcing the Linux community to connect the free SDK to another IDE or rather implement their own plugin. To quote Adobe:

Adobe will no longer be investing in the development of a version of Adobe® Flex® Builder™ or
Adobe Flash® Builder™ that runs on Linux operating systems.

I can recommend this blog, which has a summary as of March 2010.

Otherwise, here is a summary of what I did to get it running with Eclipse Galileo on Ubuntu, following the creator of the axdt plugin:

Comments:

  • I can't judge beyond this getting-started level. Just getting my hands wet with Flex.
  • There is a NetBeans plugin running on 6.5, but the project seems to be abandoned.

Creating an Ubuntu 10.04 AMI using a local VMWare

I am using Amazon EC2 and S3 more often now, and our architecture, development and deployment partially rely on Amazon. For example, we save artifacts from our build server on S3 and deploy the application for trial and testing in the EC2 cloud. The level of control you have over your instances and buckets is just great, and new features (like VPC, SNS) are added frequently. The API allows me to remote-control our infrastructure without using the browser.

No one can say you wouldn't find a fitting Linux distribution on EC2. There seem to be myriads of AMIs, and almost all popular Linux distros are available for you to get started. But being a control freak I prefer a slightly different approach. We create a virtual appliance in-house (our product runs out of the box) and use the appliance for local development and tests. I maintain reference appliances, knowing exactly which kernel and which packages are running. For large-scale deployment it is essential that all instances are identical. Unfortunately there is no straightforward way to "upload" your vmdk to EC2 (or to most other cloud/IaaS providers) and expect it to run, due to a couple of technical facts in the background that are usually transparent to a cloud user (e.g. XEN-specific kernels, etc.).

Collecting some inputs and tutorials from various sources, I tried to create my local Ubuntu 10.04 LTS server on VMWare Workstation (Player) and get it running as an EC2 instance.

I summarize the process here.

Warning: I still face a major issue with the instance (created from the uploaded AMI): it can be started, but it is not possible to connect via SSH. I will update this blog as soon as I (hopefully with your help) find the solution.

Pre-Requirements:

  • VMWare Workstation/Player or VirtualBox
    It does not matter which tool you use, because during the process we create a bundle "inside" the running server. We are not converting a vmdk file or similar (which is also possible).
    For this tutorial I assume you have it downloaded and installed (there is a 30-day trial version of VMWare Workstation available with some more features than the Player).
  • Ubuntu 10.04 Server LTS (or any other version; 8.04 or later recommended)
    You installed the basic server as Virtual Machine and can login as root. The installation process is simple enough and not covered here.
  • AMAZON AWS account
    You have an active AWS account with access to S3 and EC2.

Tutorial Part A (getting keys and certificates from Amazon AWS)

  • Log in to your AWS account and navigate to Account | Security Credentials
  • Take note of your Access Key and Secret Access Key
    Take note of your Account Number (at the top right under your name)
    (one keypair should be created by default when you create an AWS account) 

    AWS Access Keys

  • Create and Download X.509 Certificates
    Please read the warning: the private key can only be created and downloaded one time! Download both to your desktop.
    1 Certificate File: cert-{some_random_key}.pem
    1 Private Key File: pk-{some_random_key}.pem 

    X.509 Certificates

  • Create a bucket in S3
    Please note the bucket name must be unique worldwide. You can use something like “mycompanyname.images” or similar.
    By default the bucket is private. 

    Create S3 bucket

Tutorial Part B (Preparation of the Ubuntu Server)
I assume you already installed a Virtual Machine with Ubuntu Server 10.04 (without any extra packages). All steps are performed as the root user (via sudo, or "change" to root with sudo -i).

  • Add a drive to the instance

    Virtual Machine

    Edit virtual machine settings | Add.. | Hard Disk | Create new virtual disk | SCSI | 10GB | Store as single file

    Virtual Machine Settings

  • Power on the virtual machine

    Virtual Machine

  • Mount the additional harddisk
    mkdir /disk2
    mkfs -t ext2 /dev/sdb
    mount /dev/sdb /disk2

    root@ubuntu:~# mkdir /disk2
    root@ubuntu:~# mkfs -t ext2 /dev/sdb
    mke2fs 1.41.11 (14-Mar-2010)
    /dev/sdb is entire device, not just one partition!
    Proceed anyway? (y,n) y
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    655360 inodes, 2621440 blocks
    131072 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2684354560
    80 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
     32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Writing inode tables: done
    Writing superblocks and filesystem accounting information: done
    
    This filesystem will be automatically checked every 25 mounts or
    180 days, whichever comes first. Use tune2fs -c or -i to override.
    root@ubuntu:~# mount /dev/sdb /disk2
    root@ubuntu:~# cd /disk2
    root@ubuntu:/disk2# ls
    lost+found
    root@ubuntu:/disk2#
    
  • Install SSH server
    Otherwise we can't access the instance later. It is also easier to work in an ssh session connected to our local instance.
    apt-get install openssh-server
  • Install FTP Server
    We need to transfer files to our instance.
    apt-get install vsftpd
    Remember to configure /etc/vsftpd.conf
    write_enable=YES
    local_enable=YES
    and restart vsftpd
    service vsftpd restart
  • Disable the firewall
    ufw disable
    We configure the firewall via the Amazon console (security groups) instead.
  • Install the EC2 AMI Tools
    apt-get install ec2-ami-tools
  • Transfer the 2 key files to /tmp
    with ftp from your local machine/desktop. In /tmp they will not be bundled into your AMI later.
  • Delete network info
    rm /etc/udev/rules.d/70-persistent-net.rules
  • Install ec2 kernel
    Make sure the universe entry in /etc/apt/sources.list is enabled.
    apt-get update
    apt-get install linux-image-ec2
    Do not restart !

    root@ubuntu:~# apt-get install linux-image-ec2
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
     linux-image-2.6.32-309-ec2
    Suggested packages:
     fdutils linux-ec2-doc-2.6.32 linux-ec2-source-2.6.32
    The following NEW packages will be installed:
     linux-image-2.6.32-309-ec2 linux-image-ec2
    0 upgraded, 2 newly installed, 0 to remove and 30 not upgraded.
    Need to get 19.2MB of archives.
    After this operation, 57.6MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Get:1 http://sg.archive.ubuntu.com/ubuntu/ lucid-updates/main linux-image-2.6.32-309-ec2 2.6.32-309.18 [19.2MB]
    Get:2 http://sg.archive.ubuntu.com/ubuntu/ lucid-updates/main linux-image-ec2 2.6.32.309.10 [3,276B]
    Fetched 19.2MB in 51s (375kB/s)
    Selecting previously deselected package linux-image-2.6.32-309-ec2.
    (Reading database ... 28138 files and directories currently installed.)
    Unpacking linux-image-2.6.32-309-ec2 (from .../linux-image-2.6.32-309-ec2_2.6.32-309.18_i386.deb) ...
    Done.
    Selecting previously deselected package linux-image-ec2.
    Unpacking linux-image-ec2 (from .../linux-image-ec2_2.6.32.309.10_i386.deb) ...
    Setting up linux-image-2.6.32-309-ec2 (2.6.32-309.18) ...
    Running depmod.
    update-initramfs: Generating /boot/initrd.img-2.6.32-309-ec2
    Running postinst hook script /usr/sbin/update-grub.
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-2.6.32-309-ec2
    Found initrd image: /boot/initrd.img-2.6.32-309-ec2
    Found linux image: /boot/vmlinuz-2.6.32-24-generic
    Found initrd image: /boot/initrd.img-2.6.32-24-generic
    Found linux image: /boot/vmlinuz-2.6.32-21-generic
    Found initrd image: /boot/initrd.img-2.6.32-21-generic
    Found memtest86+ image: /boot/memtest86+.bin
    done
    
    Setting up linux-image-ec2 (2.6.32.309.10) ...
    

    Resulting boot directory

    /boot

    Do not reboot. The new default kernel is the ec2 kernel; the local virtual machine will NOT boot with it anymore!

  • Adjust default kernel in grub
    Edit your /boot/grub/grub.cfg (This is not good practice because any update-grub trashes your manual changes!)

    ...
    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
     load_env
    fi
    set default="2"
    if [ ${prev_saved_entry} ]; then
     set saved_entry=${prev_saved_entry}
     save_env saved_entry
    ...
    ### BEGIN /etc/grub.d/10_linux ###
    menuentry 'Ubuntu, with Linux 2.6.32-309-ec2' --class ubuntu --class gnu-linux --class gnu --class os {
     recordfail
     insmod ext2
     set root='(hd0,1)'
     search --no-floppy --fs-uuid --set ab6ee13e-e9c8-4654-aad1-a94c69906e11
     linux    /boot/vmlinuz-2.6.32-309-ec2 root=UUID=ab6ee13e-e9c8-4654-aad1-a94c69906e11 ro find_preseed=/preseed.cfg noprompt quiet splash
     initrd    /boot/initrd.img-2.6.32-309-ec2
    }
    menuentry 'Ubuntu, with Linux 2.6.32-309-ec2 (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
     recordfail
     insmod ext2
     set root='(hd0,1)'
     search --no-floppy --fs-uuid --set ab6ee13e-e9c8-4654-aad1-a94c69906e11
     echo    'Loading Linux 2.6.32-309-ec2 ...'
     linux    /boot/vmlinuz-2.6.32-309-ec2 root=UUID=ab6ee13e-e9c8-4654-aad1-a94c69906e11 ro single find_preseed=/preseed.cfg noprompt
     echo    'Loading initial ramdisk ...'
     initrd    /boot/initrd.img-2.6.32-309-ec2
    }
    menuentry 'Ubuntu, with Linux 2.6.32-24-generic' --class ubuntu --class gnu-linux --class gnu --class os {
     recordfail
     insmod ext2
     set root='(hd0,1)'
     search --no-floppy --fs-uuid --set ab6ee13e-e9c8-4654-aad1-a94c69906e11
     linux    /boot/vmlinuz-2.6.32-24-generic root=UUID=ab6ee13e-e9c8-4654-aad1-a94c69906e11 ro find_preseed=/preseed.cfg noprompt quiet splash
     initrd    /boot/initrd.img-2.6.32-24-generic
    }
    ...
    

    Change the line set default="0" to a different kernel, in this case to "2" (count the menu entries as 0, 1, 2, …).
    Now you can reboot your virtual machine, because it will boot the previous kernel (the one you configured in grub.cfg).
    If you reboot, please reset your network again (rm /etc/udev/rules.d/70-persistent-net.rules).

    BEFORE you create the bundle you must set the default back to "0"! Otherwise the ec2 instance will not start up and will immediately terminate.
    (Afterwards you should set it back to "2" to continue using your local virtual machine; a sed one-liner for this toggle follows after the kernel check below.)

    Check the kernel:
    user@ubuntu:~$ uname -a
    Linux ubuntu 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:17:33 UTC 2010 i686 GNU/Linux
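    A sketch of the toggle described above, if you prefer not to edit grub.cfg by hand each time:

    # boot the generic kernel locally (menu entry 2)
    sed -i 's/set default="0"/set default="2"/' /boot/grub/grub.cfg
    # ...and switch back to the ec2 kernel (entry 0) before running ec2-bundle-vol
    sed -i 's/set default="2"/set default="0"/' /boot/grub/grub.cfg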

  • Find a kernel
    You can choose a kernel to run on ec2, but kernels are referenced by region-dependent AKI IDs.
    The Cloud Market is very useful for finding the right kernel:
    Cloud Market
  • Create a Bundle to upload
    ec2-bundle-vol -c /tmp/cert-xxxxxxxxx.pem -k /tmp/pk-xxxxxxxxx.pem --user {account_number} -d /disk2 -r i386 --kernel aki-{kernel_id} --no-inherit
    Use the account number that you retrieved earlier from the AWS console and the 2 key files that you transferred to the virtual machine.
    Use the kernel ID that you looked up at the Cloud Market.
    Depending on your hardware this process can easily take 20 min and longer (my reference: Intel Core 2 Duo 8600)!

    ...
    root@ubuntu:/disk2# ec2-bundle-vol -c /tmp/cert-xxxxxx.pem -k /tmp/pk-xxxxxx.pem --user xxxxxx -d /disk2 -r i386 --kernel aki-70067822 --no-inherit
    Copying / into the image file /disk2/image...
    Excluding:
     /sys/kernel/debug
     /sys/kernel/security
     /sys
     /
     /proc
     /sys/fs/fuse/connections
     /dev/pts
     /dev
     /dev
     /media
     /mnt
     /proc
     /sys
     /etc/udev/rules.d/70-persistent-net.rules
     /etc/udev/rules.d/z25_persistent-net.rules
     /disk2/image
     /mnt/img-mnt
    1+0 records in
    1+0 records out
    1048576 bytes (1.0 MB) copied, 0.0069198 s, 152 MB/s
    mke2fs 1.41.11 (14-Mar-2010)
    warning: Unable to get device geometry for /disk2/image
    Bundling image file...
    Splitting /disk2/image.tar.gz.enc...
    Created image.part.00
    Created image.part.01
    Created image.part.02
    Created image.part.03
    ...
    Created image.part.52
    Created image.part.53
    Created image.part.54
    Created image.part.55
    Created image.part.56
    Created image.part.57
    Generating digests for each part...
    Digests generated.
    Creating bundle manifest...
    ec2-bundle-vol complete.
    
  • Upload the bundle
    ec2-upload-bundle -b my.ubuntu.image -m /disk2/image.manifest.xml -a {your access key} -s {your secret access key} --part 8
    The --part parameter is optional, in case your upload fails halfway.
    The duration depends very much on your uplink speed!

    Uploaded image.part.55
    Uploaded image.part.56
    Uploaded image.part.57
    Uploading manifest …
    Uploaded manifest.
    Bundle upload completed.
  • Register AMI
    Go to your AWS console and open the S3 folder to see the uploaded files 

    S3 bucket with uploaded image

    Go to the EC2 tab and select AMI.
    Register a new AMI.
    Enter the path as {your bucket name}/image.manifest.xml

    Register new AMI

  • Create an instance and start it up

    New instance

    You should use a security group with port 22 open (and icmp if you want to ping).

    Security Group

Problem Solving
As I stated in the beginning, none of my instances really started up successfully due to various problems.

  • After starting up the instance, I can ping it but any attempt to ssh fails with connection refused.
    user@wanaka-ubuntu:~/Desktop/amazon$ ping ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com
    PING ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com (175.41.xxx.xxx) 56(84) bytes of data.
    64 bytes from ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com (175.41.xxx.xxx): icmp_req=2 ttl=51 time=103 ms
    64 bytes from ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com (175.41.xxx.xxx): icmp_req=3 ttl=51 time=39.7 ms
    ^C
    --- ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com ping statistics ---
    7 packets transmitted, 6 received, 14% packet loss, time 6015ms
    rtt min/avg/max/mdev = 15.663/53.436/103.023/32.123 ms
    user@wanaka-ubuntu:~/Desktop/amazon$ ssh -i xxxx.pem root@ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com
    ssh: connect to host ec2-175-41-xxx-xxx.ap-southeast-1.compute.amazonaws.com port 22: Connection refused
    

    Instance logfile see here.

  • The instance is created with the root device as instance store, not EBS. I would prefer EBS!

Remarks and Outlook

  • I believe cloud computing, despite not being a completely new concept (you remember dumb amber screens in the 70's and 80's), is still in its infancy today. It is already very powerful, but we can expect more fine-tuning in the near future that will allow us to scale hardware on the fly for a running instance.
  • We should also expect more standards that allow seamless exchange of virtual instances between your local deployment and the cloud, or between cloud providers.