Android Development Restarted

It has been quite a while since I last touched an Android phone for code projects. I first got in contact with an Android phone at an open source conference in Sydney in 2008, where I met Chris DiBona (Director of Open Source at Google), who was announcing SDK 1.0. Soon after I got the G1, aka HTC Dream, the first Android phone available. I could not imagine back then how widely this platform would be adopted and pushed in the years to come. I was even second-guessing the investment at the time, spending a few hundred dollars on a phone that might be just an experiment. In 2010 I also bought the Nexus One.

Anyway, I created some apps for personal use and experimented with the apps market, but due to other development and work priorities I lost sight of it and just remained a normal Android user.

Now my interest has returned, at least to update my knowledge of this technology. Today things have become a bit easier (IDE, documentation) but also more complex, mainly due to the massive range of devices and manufacturers, which makes screen design quite challenging, but also due to security concerns: with more spam and junk apps around, users are no longer so relaxed about app security settings.

Coding has become more convenient, too: Android now has its own IDE, Android Studio. After an initial download and subsequent downloads of required packages you can start with your projects straight away.

With Ubuntu, just download the Linux package, make sure you have a JDK installed, and execute the studio.sh shell script in the bin folder.
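For example, a minimal sketch (the archive name varies by version, so treat it as a placeholder):

# unpack the IDE and launch it
tar xzf android-studio-ide-&lt;version&gt;-linux.tgz
cd android-studio/bin
./studio.sh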

Android Studio

Enforce password for Ubuntu user on EC2 instances

Using Linux (Ubuntu) instances on Amazon EC2 is quite a safe thing to do, at least measured by the security provided by the platform (security groups, ACLs, physical security, ...). I recommend reading their security site here. At the end of the day, though, the server is only as secure as you configure it: if you choose to open all ports and run services with their default configurations and password settings, Amazon can't help you.

When connecting to an Ubuntu server with ssh you need to provide the key file (somekeyfile.pem) that you can download when creating the key pair.
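The connect command then looks like this (the hostname is a placeholder; use the public DNS name shown in your instance details):

ssh -i somekeyfile.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com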

Key file

This 2048-bit key is required to log in as the regular ubuntu user. What I dislike is the fact that this user can sudo everything, so once someone manages to get into your user account, he has root access too. I recommend setting a password for the ubuntu user and changing the sudoers configuration.

Change the password for user ubuntu
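On a stock image the ubuntu user has no password set; one command takes care of it (this is standard Ubuntu, nothing EC2-specific):

sudo passwd ubuntu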

Open the sudoers include file

sudo vi /etc/sudoers.d/90-cloudimg-ubuntu or sudo vi /etc/sudoers

change last line from

ubuntu  ALL=(ALL) NOPASSWD:ALL

to

ubuntu ALL=(ALL) ALL
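After this change sudo should prompt for the ubuntu user's password. A quick check (sudo -k first drops any cached credentials):

sudo -k
sudo whoami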

Glassfish and https – running secure applications

By default Glassfish listens to http on port 8080 and https on port 8181.
It is better to listen on the default ports, 80 for http and 443 for https; usually you don't want the user to enter port numbers as part of the URL.

Even though the Glassfish Admin Console allows you to change the ports (Configurations/Server Config/Network Config/Network Listener), certain server OSes such as Ubuntu do not allow non-root users (you should run Glassfish as a separate user!) to bind to ports below 1024. We can work around this with port redirection using the iptables command (under Ubuntu):


# accept incoming http/https and redirect them to the Glassfish listener ports
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8181
# persist the rules so they can be restored later
iptables-save -c > /etc/iptables.rules
iptables-restore < /etc/iptables.rules

To reload the rules at boot, create a small script and make it executable:

vi /etc/network/if-pre-up.d/iptablesload
#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0

chmod +x /etc/network/if-pre-up.d/iptablesload

Additionally you can get a proper SSL certificate to stop annoying the user with an invalid-certificate warning. See the previous tutorial here.

SSL Error (Chrome)

If you operate an enterprise application with a URL known to its users, unlike a regular website whose portal should be reachable via plain http, I would disable regular http completely.

Disable http
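If you prefer the command line over the Admin Console, the listener can also be disabled with asadmin; a sketch, assuming the default listener name http-listener-1:

asadmin set server-config.network-config.network-listeners.network-listener.http-listener-1.enabled=false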

Copy EC2 instance to another region

Is it finally possible? While the long-awaited AMI import tool is only available for Windows, it remains a rather big hassle to transfer any other OS manually (see this; my last attempt was in 2010).

Today Amazon announced the EBS Snapshot Copy feature (across regions). The intention is certainly to allow easy migration of data to another region: you can copy the snapshot, create a volume, and attach it to an instance. I was curious whether I could migrate my Ubuntu instance to another region, and it worked. You can use both the command line and the AWS web admin.
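For reference, a sketch of the cross-region copy with today's AWS CLI (which did not exist back then; snapshot ID and regions are placeholders):

# copy a snapshot from us-east-1 into eu-west-1
aws ec2 copy-snapshot \
    --region eu-west-1 \
    --source-region us-east-1 \
    --source-snapshot-id snap-1234567890abcdef0 \
    --description "copied from us-east-1"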

Amazon S3 plugin for Jenkins CI again

About once a year I revisit (link) this topic (usually when the plugin causes trouble). This time I got this signature error:

AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID:..

The good news first:
The S3 plugin became mainstream; you can install it from the plugin page under Jenkins Administration | Plugin Manager. You no longer need to build the plugin yourself and can skip the rest of this entry.

S3 Plugin

The long version:
It seems the error is caused by a ‘+’ sign in the access key, which trips up the encoding function used (see issue). The latest build (Sep 2012) should fix this problem.

If you want to build it yourself, you need to get the source code from git and build the plugin file; beware, it now requires Maven 3. The instructions below apply to Ubuntu.
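A sketch of the build, assuming the plugin still lives in the jenkinsci GitHub organization:

git clone https://github.com/jenkinsci/s3-plugin.git
cd s3-plugin
mvn package   # requires Maven 3; the built plugin lands in the target folder as an .hpi file

The resulting .hpi file can then be uploaded under Jenkins Administration | Plugin Manager | Advanced.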

Upload plugin


Glassfish 3.1 – Clustering Tutorial Part 2 (sessions)

In the previous part (link) I got you running with a simple Glassfish 3.1 cluster setup with 2 instances running on 2 nodes. Now we have a cluster and can deploy an application once to have it run on both nodes; no big deal, it doesn't get us anywhere yet. So in part 2 we will make some modifications to our Virtualbox server setup and create a web application whose sessions are replicated to both instances.

Prerequisites:

  • The server and cluster setup from part 1 (link)

Preparation of host and Virtualbox guests:

In part 1 we used n1 and n2 as hostnames; this creates trouble for this part. A key piece of information for the sessions is the server hostname (domain), and we cannot share sessions between totally different hosts.

  1. Update guest server hostname
    Change /etc/hosts and /etc/hostname
    Server n1 becomes n1.test.com

    127.0.0.1    localhost
    127.0.1.1    n1.test.com
    127.0.0.1    n1.test.com
    192.168.56.102  n2.test.com
    

    Server n2 becomes n2.test.com

    127.0.0.1    localhost
    127.0.1.1    n2.test.com
    127.0.0.1    n2.test.com
    192.168.56.101  n1.test.com
    

    Remarks:

    • You can use any domain name (other than test.com, which I don't own, btw).
    • The IP addresses must fit your own Virtualbox setup.
  2. Update your host
    This is the host running Virtualbox. Change /etc/hosts:

    192.168.56.101  n1.test.com
    192.168.56.102  n2.test.com
    
  3. Check multicast
    The communication between the nodes uses multicast for the session replication. The Glassfish team gives us a tool to verify that your servers can “see” each other.
    Go to the bin folder of the Glassfish installation and execute ./asadmin validate-multicast on both nodes, as sketched below.
    You should get feedback showing that each node receives the other's messages.
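    For reference, the check on each node (the installation path is an assumption; use wherever your Glassfish lives):

    cd /opt/glassfish3/glassfish/bin
    ./asadmin validate-multicast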

Glassfish 3.1 – Clustering Tutorial

Glassfish clustering, after being absent from version 3, made its re-debut (after 2.1) in the current version 3.1.
I was eager to get my hands on it and tried to make sense of the information from various sources (see references).

Clustering is quite a sophisticated subject which you don't need to cover during development, but at some stage (deployment to production) you are better off knowing how it works and verifying that your application runs in a clustered environment.

I compiled the most essential steps into this instant 15-minute tutorial, creating the simplest possible cluster: 2 nodes with 1 instance each, where 1 node also runs the DAS.

Glassfish Cluster


Installing pgAdmin III for PostgreSQL 9 on Ubuntu

pgAdmin is the best GUI you can use to administrate PostgreSQL; unfortunately the Ubuntu default packages still offer only PostgreSQL 8.4 and an older version of pgAdmin III that does not support PostgreSQL 9.0.x. Thanks to Martin Pitt, who maintains the latest packages, you can run and maintain the latest PostgreSQL versions.

If you run Maverick:

  • sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8683D8A2
  • sudo apt-get update
  • sudo apt-get install pgadmin3
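Note that the list above imports the signing key but never adds the package source itself. Assuming the packages come from Martin Pitt's PPA, the missing step would look like this:

sudo add-apt-repository ppa:pitti/postgresql
sudo apt-get update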

Check here for latest or other releases of Ubuntu.

Touchscreen Notebooks using Ubuntu

I purchased two notebooks with swivel touchscreens last weekend. Both came with Windows 7, which I Clonezilla'd, wiped, and immediately replaced with Ubuntu. Neither is an iPad killer whatsoever, but they suit my requirements: you can touch it, you can turn it (to read books), it comes with a keyboard, and I can load almost any application, even do some development work.

  • Asus EEE T101MT
    1.66 GHz Atom N450 CPU with hyperthreading
    10.1 inch screen, multi-touch resistive display with 1024 x 600 pixels resolution
    2 GB RAM and 320 GB HDD at 5400 RPM
    WiFi 802.11n
    4 cell 2400 mAh and 35 Wh battery pack, removable
    0.3 megapixel webcam
    3 USB ports, VGA output, Ethernet, Kensington Lock, Mic and Headphones jack and SD Card reader

    Installing Ubuntu: A breeze with 10.10 (Maverick). All info here.

  • Acer Aspire 1825PTZ
    Intel Pentium processor SU4100 (1.3 GHz, 800 MHz FSB)
    2GB Memory
    Graphics Controller: Intel GMA 4500MHD
    11.6″ Acer CineCrystal LED LCD with capacitive multi-touch (1366×768)
    320GB HD
    0.3 megapixel webcam
    3 USB ports, VGA output, HDMI port, Ethernet, Kensington Lock, Mic and Headphones jack and SD/XD/MS Card reader

    Installing Ubuntu: the basic installation is straightforward, but it requires some hacking to get the touchscreen and screen auto-rotation working properly. You find all the answers in this thread, and some more tricks here.

How to run an ftp server on an Amazon Micro Instance

A micro instance, which runs Linux for you at 0.025 U$ per hour (around 18 U$ a month), is just right to operate a ftp server. Add the data transfer, which costs you 0.1 U$ IN and around 0.15 U$ OUT.
There is only a minor challenge to get started: the elastic IP assignment, which makes it impossible to connect to the ftp server in passive mode out of the box.
This short tutorial describes how to get started and also covers the use of virtual users (we skip the basic part, assuming you are familiar with creating instances, the handling of key files, etc.).

I advise creating a separate volume in EC2 if you plan to ftp large amounts of files, or eventually opting for a bigger instance.

How to add a volume:

  • Create a new volume specifying a suitable size (you pay for the size you allocate, not for the size you use inside the volume!)
  • Attach it to the instance (define a device, e.g. /dev/sdf)
  • Log in to your instance and format the volume (mkfs -t ext2 /dev/sdf)
  • Create a mountpoint (mkdir /mnt/ftpvolume)
  • Mount the volume (mount /dev/sdf /mnt/ftpvolume)
    Be aware: you need to mount again every time you restart the instance! There are ways to do this automatically, but it is not straightforward in EC2; see the fstab sketch below.
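    A minimal /etc/fstab sketch for auto-mounting, assuming the device keeps its name across restarts (nofail avoids a hanging boot if the volume is not attached):

    /dev/sdf  /mnt/ftpvolume  ext2  defaults,nofail  0  2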

How to install and configure the ftp service:

  • Look for an Ubuntu i386 server AMI in your preferred region and create a new instance.
  • Use a security group with port 21 open as well as the passive ports (e.g. 62222 to 63333, as configured below).
  • Create an elastic IP and attach it to the new instance.
  • Log in to the instance (using ssh and your private key).
  • Add the ftp server vsftpd package (sudo apt-get install vsftpd)
  • Add the libpam package which we need to maintain the virtual users (sudo apt-get install libpam-pwdfile)
  • Add the mini-httpd package, which contains the htpasswd command we need to set the passwords (sudo apt-get install mini-httpd)
  • Configure PAM (sudo vi /etc/pam.d/vsftpd)
    Remove the other content in this file.

    auth required pam_pwdfile.so pwdfile /etc/ftpd.passwd
    account required pam_permit.so
    
  • Configure vsftpd (sudo vi /etc/vsftpd.conf)
    Only the important changes and new entries are shown:

    ...
    local_enable=YES
    ...
    write_enable=YES
    ...
    local_umask=022
    ...
    chroot_local_user=YES
    ...
    virtual_use_local_privs=YES
    guest_enable=YES
    user_sub_token=$USER
    local_root=/mnt/ftpvolume/ftphome/$USER {or whatever your ftp root folder is going to be}
    hide_ids=YES
    pasv_min_port=62222
    pasv_max_port=63333
    pasv_address={your Elastic IP}
    
  • Restart vsftpd (sudo service vsftpd restart)
  • Create the root directory for the ftp service as defined in the config file (mkdir -p /mnt/ftpvolume/ftphome)
  • Create each user and their directory
    For the first user you add:
    htpasswd -c /etc/ftpd.passwd username
    For subsequent users:
    htpasswd /etc/ftpd.passwd username
    mkdir /mnt/ftpvolume/ftphome/username
    chmod 777 /mnt/ftpvolume/ftphome/username
  • Create a superuser ftpadmin with access to all user directories
    Instead of creating its own folder, create a link to the ftp home:
    ln -s /mnt/ftpvolume/ftphome /mnt/ftpvolume/ftphome/ftpadmin
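To verify passive mode from outside, any ftp client will do; a quick sketch with lftp (replace the IP with your Elastic IP):

lftp -u username 1.2.3.4
# once connected, run ls; a directory listing forces a passive-mode data connection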

Remarks: this might not be best practice, but
a) on the EC2 instance you open only port 21 and the passive port range,
b) vsftpd is a solid choice for secure ftp, and
c) each virtual user is locked into his home folder.

Feel free to add comments in regards of security.