It was only a matter of time until Amazon AWS would react to the recent price reduction by Google. Effective April 1st they again cut prices for EC2 and S3 massively (up to 65% on S3 and 40% on EC2). For now the customer is the winner; let's see how it develops over the long term.
I was offline for quite a while because I was relocating from one continent to another. But now regular posts should be rolling in again.
I am running a couple of instances in a pre-production setting and changed the volume holding the DB files from a standard EBS volume to a Provisioned IOPS volume. I could not identify a reasonable increase in performance; it is maybe a misconception that IOPS volumes boost performance, when they rather provide a defined and consistent random-access I/O throughput. I must admit I did not use a value higher than 1000 IOPS. A quick benchmark helps to verify this (see the sketch after the reading list below).
Some recommended reading:
- Increasing EBS Performance (by Amazon)
- Benchmarking (by Amazon)
- Even Stranger than Expected: a Systematic Look at EC2 I/O (by Scalyr)
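To verify what a volume actually delivers, a quick random-read benchmark with fio is more telling than gut feeling. A minimal sketch, assuming the volume is mounted at /data (a placeholder):
sudo apt-get install fio
# 4k random reads with direct I/O for 60 seconds; compare the reported IOPS with the provisioned value
fio --name=randread --filename=/data/fio-test --size=1G --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based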
I decided to return to a standard EBS volume for my database as its performance did not benefit from the IOPS type (the DB is not overly busy either).
You can't change the type or size of an EBS volume on the fly.
Here are the steps to achieve this:
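In outline, a sketch using the AWS CLI (volume, snapshot and instance IDs, the zone and the device name are placeholders):
# snapshot the existing volume
aws ec2 create-snapshot --volume-id vol-1234abcd --description "type migration"
# create a new volume of the desired type from that snapshot (same AZ as the instance)
aws ec2 create-volume --snapshot-id snap-5678efgh --availability-zone ap-southeast-1a --volume-type standard
# swap the volumes (stop the database or the instance first)
aws ec2 detach-volume --volume-id vol-1234abcd
aws ec2 attach-volume --volume-id vol-9abcdef0 --instance-id i-abcd1234 --device /dev/sdf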
Using Linux (Ubuntu) instances on Amazon EC2 is quite a safe thing to do, at least measured by the security provided by the platform (security groups, ACLs, physical security, ...). I recommend reading their security site here. At the end of the day, though, the server is only as secure as you configure it: if you open all ports and run services with their default configurations and password settings, Amazon can't help you.
When connecting to an Ubuntu server with ssh you need to provide the key file (somekeyfile.pem) that you can download when creating the key pair.
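For example (the public DNS name is a placeholder; ssh refuses keys with open file permissions, hence the chmod):
chmod 400 somekeyfile.pem
ssh -i somekeyfile.pem ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com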
This 2048-bit key is required to log in as the regular ubuntu user. What I dislike is the fact that this user can sudo everything, so once someone manages to get into your user account, he has root access too. I recommend setting a password for the ubuntu user and changing the sudoers configuration.
Change the password for user ubuntu
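On the instance this is a single command:
sudo passwd ubuntu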
Open the sudoers include file
sudo vi /etc/sudoers.d/90-cloudimg-ubuntu or sudo vi /etc/sudoers (safer: use visudo, which checks the syntax before saving)
Change the last line from
ubuntu ALL=(ALL) NOPASSWD:ALL
to
ubuntu ALL=(ALL) ALL
Finally this feature is available, and it is as easy as the click of a button. It was previously almost impossible, and until last year doable through snapshots only; now you can select any AMI and copy it to another region. It makes my life much easier: I can stop maintaining reference images for every region and make use of one image only! More info here.
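The same works from the command line; a sketch with the AWS CLI (regions, AMI ID and image name are placeholders):
aws ec2 copy-image --source-region ap-southeast-1 --source-image-id ami-1234abcd --region eu-west-1 --name "my-reference-image"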
Is it finally possible? The long-awaited AMI import tool is only available for Windows, and it is rather a big hassle to transfer any other OS manually (see this; my last attempt was in 2010).
Today Amazon announced the EBS Snapshot Copy feature (across regions). The intention is certainly to allow easy migration of data to another region: you copy the snapshot, create a volume from it and attach it to an instance. I was curious whether I could migrate my Ubuntu instance to another region this way, and it worked. You can use both the command line and the AWS web console.
- Create a snapshot of a volume in your source region
- Copy the snapshot to the destination region
- Create a volume from the copied snapshot and attach it to an instance there
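As a sketch with the AWS CLI (regions and IDs are placeholders):
# copy the snapshot into the destination region
aws ec2 copy-snapshot --source-region ap-southeast-1 --source-snapshot-id snap-1234abcd --region eu-west-1
# in the destination region: create a volume from the copy and attach it
aws ec2 create-volume --snapshot-id snap-5678efgh --availability-zone eu-west-1a --region eu-west-1
aws ec2 attach-volume --volume-id vol-9abcdef0 --instance-id i-abcd1234 --device /dev/sdf --region eu-west-1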
To say it upfront: usually there is no need to run an Ubuntu server with a desktop in the cloud. Whatever you do on the desktop you can do in a terminal too (assuming you don't want to use GIMP in the cloud). Here is a little summary to get you started with a Precise Pangolin desktop running in the cloud.
Security: We will not use VNC, but NX. VNC is not secure (though it can be tunnelled through SSH) and it works by sending compressed bitmaps of the screen, which is slower and less accurate than NX (which transmits X server calls; Unix/Linux only).
Requirements: Amazon AWS account
- Log into your AWS account
- Optional: Create a security group with port 22 inbound only
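If you prefer the command line, the security group can also be created with the AWS CLI (a sketch; group name and description are placeholders, and since NX is tunnelled over SSH, port 22 is all we need):
aws ec2 create-security-group --group-name nx-desktop --description "SSH only"
aws ec2 authorize-security-group-ingress --group-name nx-desktop --protocol tcp --port 22 --cidr 0.0.0.0/0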
or ‘Save ultimately more money with AWS’
I use EC2 instances for test, development, demo and also for deployment to production. Amazon offers different types of instances, ranging from a micro instance (613 MB RAM and 2 CPU units) to a full-fledged Cluster Compute Quadruple Extra Large instance (60 GB RAM and 33 CPU units). Each comes at a different price, paid per hour of usage, and is available anytime.
All on demand Linux instances (Singapore):
- Micro instance: U$ 0.02 per hour
- Medium instance: U$ 0.34 per hour
- High Mem/CPU instance: U$ 2.024 per hour
On top of this there are 3 different categories of instances (in contractual terms):
Some price comparison for an m1.large instance we use for testing (7.5 GB RAM and 4 CPU units):
- On Demand (any time without any contractual obligations, we are using them currently)
U$ 0.340 per hour > 1 month U$ 244.80 (full-time 24h)
- Reserved Instance (1 year term, one time payment U$ 276.00)
U$ 0.196 per hour > 1 month U$ 141.12 (3 months incl. the one-time payment: U$ 699.36 vs. on-demand U$ 734.40; 12 months: U$ 1969.44 vs. on-demand U$ 2937.60 = ~30% savings)
- Spot Instance (depends on spare capacity; you bid a maximum price, and if the spot price exceeds your bid your instance is shut down)
U$ 0.04 per hour (as of December 5th 2012) > 1 month U$ 28.80
The spot instance, at roughly 10% of the on-demand price, is extremely attractive, and I am using it as a test server.
It is not suitable for production or demo purposes though.
The reserved instance breaks even after about 3 months of full-time usage!
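The break-even point follows from dividing the one-time payment by the monthly saving:
U$ 276.00 / (U$ 244.80 - U$ 141.12) = 276.00 / 103.68 ≈ 2.7 months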
To avoid paying for instances running idle (at night, on weekends), they shut down automatically and the users can start them again in a self-service fashion (for test, demo or training).
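A minimal sketch of such an auto-shutdown, assuming an EBS-backed instance (which stops instead of terminating on OS shutdown) and a root crontab:
# sudo crontab -e: halt the machine every day at 8 pm
0 20 * * * /sbin/shutdown -h now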
Interestingly enough, the price fluctuation is very different across the AWS regions. Let's look at the m1.large instance type in the Ireland versus the Singapore datacentre.
Obviously Singapore customers are not into this bidding concept; the price remains permanently at 4 cents, while in Ireland it jumps up to several dollars!
More information at:
I use S3 for all kinds of backup purposes, assuming S3 will always be available and the data/files are secure. Amazon offers encryption of storage, but at the end of the day they are in control, not you. This encryption is rendered useless the moment someone picks up your AWS keys (I am not jumping into the more paranoid scenarios covered under the Patriot Act, where the US government may access your data since Amazon is a US-based company). I prefer to encrypt at the source. Here is a 2-liner to add to your backup procedure (Linux); you can even combine this with a cronjob, including the S3 transfer using the s3cmd toolkit.
Encrypt on the fly:
tar cz whichfile | openssl des3 -salt -out whichfile.tar.gz.enc -k mysecretpassword
Decrypt and uncompress:
openssl des3 -d -salt -in whichfile.tar.gz.enc -out whichfile.tar.gz -k mysecretpassword
tar -xf whichfile.tar.gz
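To push the encrypted archive to S3 afterwards (the bucket and path are placeholders):
s3cmd put whichfile.tar.gz.enc s3://mybucket/backups/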
Last year I described all the steps to get up and running with ActiveMQ embedded in GlassFish:
Some updates:
The current version of ActiveMQ is 5.7.0. Here are some updated download links:
activemq-all-5.7.0.jar (from the main package)
Note: you need to re-deploy the resource adapter with version 5.7 and check all connector settings.
It works fine with GlassFish 3.1.2 and Java JDK 1.7.0_07.
I had issues with the firewall due to the fact that ActiveMQ uses a fixed registration port for JMX but dynamic ports for the actual communication. The web console was not accessible.
“Exception occurred while processing this request, check the log for more information!”
[#|2012-10-18T07:42:09.249+0000|WARNING|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=73;_ThreadName=Thread-2;|StandardWrapperValve[jsp]: PWC1406: Servlet.service() for servlet jsp threw exception
java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
There are ways to configure this for a standalone ActiveMQ instance with the parameters connectorPort and rmiServerPort, but I haven't yet found out how to do this with the embedded version.
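For reference, in a standalone broker this goes into conf/activemq.xml; a sketch (the port numbers are examples):
<broker xmlns="http://activemq.apache.org/schema/core">
  <managementContext>
    <!-- pin both the JMX registry port and the RMI server port so the firewall can be opened precisely -->
    <managementContext createConnector="true" connectorPort="1099" rmiServerPort="1098"/>
  </managementContext>
</broker>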
As a workaround I changed the -Djava.rmi.server.hostname setting from my hostname to localhost.
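In GlassFish such a JVM option can be set with asadmin, along these lines (followed by a restart of the domain):
asadmin create-jvm-options -Djava.rmi.server.hostname=localhost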
or ‘Saving totally the most while using EC2’
We use a couple of EC2 servers which are not running permanently, but on user demand only. Rather than wasting money on Elastic IP addresses (you are charged while they are NOT attached), we make use of the random public IP provided by AWS and update our DynDNS address for this server.
- Create a DynDNS account if you don't have one
- Create a hostname (eg. sample.mydomain.net)
- Install Inadyn
sudo apt-get install inadyn (for Ubuntu or Debian)
- Add this line to a start-up script
inadyn --username myuser --password mypwd --iterations 1 --alias sample.mydomain.net
(with --iterations 1 the command is executed only once)
inadyn makes use of http://checkip.dyndns.com/ to retrieve the public IP address
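One way to wire it up, assuming /etc/rc.local serves as the start-up script (Ubuntu runs it at the end of every boot):
#!/bin/sh -e
# /etc/rc.local: refresh the DynDNS record with the current public IP once per boot
inadyn --username myuser --password mypwd --iterations 1 --alias sample.mydomain.net
exit 0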
On top of that, the server switches off automatically at night (see blog entry) and the user starts it again on his/her own through a little web frontend.