It was only a matter of time until Amazon AWS would react to the recent price reduction by Google. Effective April 1st they cut prices for EC2 and S3 massively again (up to 65% on S3 and 40% on EC2). For now the customer is the winner; let's see how this develops in the long term.
I was offline for quite a while because I was moving from one continent to another. But now regular posts should be rolling in again.
I am running a couple of instances in a pre-production setting and changed the volume holding the DB files from a standard EBS volume to a Provisioned IOPS volume. I could not identify a reasonable increase in performance. Maybe it is a misconception that IOPS volumes boost performance; rather, they provide a defined and consistent random-access I/O throughput. I must admit I did not use a value higher than 1000 IOPS.
Some recommended reading:
- Increasing EBS Performance (by Amazon)
- Benchmarking (by Amazon)
- Even Stranger than Expected: a Systematic Look at EC2 I/O (by Scalyr)
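If you want to verify this on your own volume, a short random-read benchmark gives a rough idea of the IOPS you actually get. A sketch with fio (the mount point /mnt/ebs and the job parameters are assumptions, adjust them to your setup):

```shell
# Rough random-read benchmark of a mounted EBS volume with fio
# (/mnt/ebs is a hypothetical mount point - point it at your DB volume)
sudo apt-get install -y fio
fio --name=randread --directory=/mnt/ebs --size=1G \
    --rw=randread --bs=4k --direct=1 --iodepth=16 \
    --runtime=60 --time_based --group_reporting
```

Compare the reported IOPS between the standard and the Provisioned IOPS volume before deciding whether the extra cost is worth it.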
I decided to return to a standard EBS volume for my database, as its performance did not benefit from the IOPS type (the DB is not overly busy either).
You can't change the type and size of an EBS volume on the fly.
Here are the steps to achieve the same:
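A sketch of the usual workaround with the current AWS CLI (the instance and volume IDs, the availability zone and the device name are hypothetical placeholders):

```shell
# Sketch: replace a Provisioned IOPS volume with a standard volume
# (all IDs, the zone and the device name are hypothetical placeholders)

# 1. stop the instance and snapshot the old volume
aws ec2 stop-instances --instance-ids i-12345678
aws ec2 create-snapshot --volume-id vol-11111111 --description "db volume"

# 2. create a new standard volume from the snapshot
#    (must be in the same availability zone as the instance)
aws ec2 create-volume --snapshot-id snap-22222222 \
    --availability-zone eu-west-1a --volume-type standard

# 3. swap the volumes and start the instance again
aws ec2 detach-volume --volume-id vol-11111111
aws ec2 attach-volume --volume-id vol-33333333 \
    --instance-id i-12345678 --device /dev/sdf
aws ec2 start-instances --instance-ids i-12345678
```

Wait for each snapshot and volume to reach the available state before continuing with the next step.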
Using Linux (Ubuntu) instances on Amazon EC2 is quite a safe thing to do, at least measured by the security provided by the platform (security groups, ACLs, physical security, ...). I recommend reading their security site here. At the end of the day the server is only as secure as you configure it; if you choose to open all ports and run services with their default configurations and password settings, Amazon can't help you.
When connecting to an Ubuntu server with ssh you need to provide the key file (somekeyfile.pem) that you can download when creating the key pair.
This 2048-bit key is required to log in as the regular ubuntu user. What I dislike is the fact that this user can sudo everything, so once someone manages to get into your user account, he has root access too. I recommend setting a password for the ubuntu user and changing the sudoers configuration.
Change the password for user ubuntu:
sudo passwd ubuntu
Open the sudoers include file (use visudo, which validates the syntax before saving):
sudo visudo -f /etc/sudoers.d/90-cloudimg-ubuntu or sudo visudo for /etc/sudoers
Change the last line from
ubuntu ALL=(ALL) NOPASSWD:ALL
to
ubuntu ALL=(ALL) ALL
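If you prefer to script this change, here is a minimal sketch that strips the NOPASSWD: tag from a copy of the include file and validates it before installing it (keep a second root shell open while testing sudo changes):

```shell
# Remove the NOPASSWD: tag so sudo asks for the ubuntu user's password.
# Edit a copy first, validate it with visudo -c, then install it.
FILE=/etc/sudoers.d/90-cloudimg-ubuntu
sudo cp "$FILE" /tmp/90-cloudimg-ubuntu
sed -i 's/NOPASSWD://' /tmp/90-cloudimg-ubuntu
visudo -c -f /tmp/90-cloudimg-ubuntu && sudo cp /tmp/90-cloudimg-ubuntu "$FILE"
```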
Finally this feature is available and as easy as the click of a button. While it was previously almost impossible, and since last year possible through snapshots only, you can now select any AMI and copy it to another region. It makes my life much easier: I can stop maintaining reference images for every region and make use of one image only! More info here.
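With the current AWS CLI the cross-region copy is a one-liner; a sketch (the AMI ID and the regions are hypothetical placeholders):

```shell
# Copy an AMI from one region to another
# (AMI ID and regions are hypothetical placeholders)
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-12345678 \
    --region eu-west-1 \
    --name "my-reference-image"
```

The command returns the new AMI ID in the destination region; the copy itself runs asynchronously in the background.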
Is it finally possible? While the AMI import tool has been long awaited, it is only available for Windows; transferring any other OS manually is rather a big hassle (see this, my last attempt in 2010).
Today Amazon announced the EBS Snapshot Copy feature (across regions). The intention is certainly to allow easy migration of data to another region, as you can copy the snapshot, create a volume and attach it to an instance. I was curious whether I could migrate my Ubuntu instance to another region, and it worked. You can use both the command line and the AWS web console.
- Create a snapshot of a volume in your source region
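The remaining steps with the current AWS CLI look roughly like this (all IDs, regions and the availability zone are hypothetical placeholders):

```shell
# Migrate a volume to another region via EBS Snapshot Copy
# (all IDs, regions and the zone are hypothetical placeholders)

# 1. snapshot the source volume in the source region
aws ec2 create-snapshot --region us-east-1 --volume-id vol-11111111

# 2. copy the snapshot to the destination region
aws ec2 copy-snapshot --region eu-west-1 \
    --source-region us-east-1 --source-snapshot-id snap-22222222

# 3. create a volume from the copied snapshot and attach it
#    to an instance in the destination region
aws ec2 create-volume --region eu-west-1 \
    --snapshot-id snap-33333333 --availability-zone eu-west-1a
aws ec2 attach-volume --region eu-west-1 --volume-id vol-44444444 \
    --instance-id i-12345678 --device /dev/sdf
```

Wait for the snapshot copy to reach the completed state before creating the volume from it.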
To say it upfront: usually there is no need to run an Ubuntu server with a desktop in the cloud. Whatever you do on the desktop you can do in a terminal too (assuming you don't want to use GIMP in the cloud). Here is a little summary to get you started with a Precise Pangolin desktop running in the cloud.
Security: We will not use VNC, but NX. VNC is not secure (though it can be tunnelled through SSH), and it works by sending compressed bitmaps of the screen, which is slower and less accurate than NX (which transmits X server calls; Unix/Linux only).
Requirements: Amazon AWS account
- Log into your AWS account
- Optional: Create a security group with port 22 inbound only
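Since NX is tunnelled through SSH, port 22 is all you need to open. A sketch of that optional security group with the current AWS CLI (the group name and the CIDR are assumptions; restrict the CIDR to your own network if you can):

```shell
# Create a security group that only allows SSH (port 22) inbound
# (group name and CIDR are assumptions - narrow the CIDR if possible)
aws ec2 create-security-group \
    --group-name nx-desktop --description "SSH only"
aws ec2 authorize-security-group-ingress \
    --group-name nx-desktop --protocol tcp --port 22 --cidr 0.0.0.0/0
```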
Last year I described all steps to get running with ActiveMQ embedded in Glassfish:
Some updates:
The current version of ActiveMQ is 5.7.0. Here are some updated download links:
activemq-all-5.7.0.jar (from the main package)
Note: you need to re-deploy the resource adapter with version 5.7 and check all connector settings.
It works fine with Glassfish 3.1.2 and Java JDK 1.7.0_07.
I had issues with the firewall due to the fact that ActiveMQ uses a fixed registry port for JMX but dynamic ports for the actual communication. The web console was not accessible.
“Exception occurred while processing this request, check the log for more information!”
[#|2012-10-18T07:42:09.249+0000|WARNING|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=73;_ThreadName=Thread-2;|StandardWrapperValve[jsp]: PWC1406: Servlet.service() for servlet jsp threw exception
java.net.ConnectException: Connection timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
There are ways to configure this for a standalone ActiveMQ instance with the parameters connectorPort and rmiServerPort, but I didn't find out yet how to do this with the embedded version.
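For reference, on a standalone broker those two parameters go into the managementContext element of conf/activemq.xml; a sketch (the port numbers are arbitrary):

```xml
<!-- conf/activemq.xml: pin the JMX connector and RMI server ports
     so a firewall rule can cover both (port numbers are arbitrary) -->
<managementContext>
    <managementContext createConnector="true"
                       connectorPort="1099"
                       rmiServerPort="1098"/>
</managementContext>
```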
As a workaround I changed the setting -Djava.rmi.server.hostname from my hostname to localhost.
or ‘Saving the most while using EC2’
We use a couple of EC2 servers which are not permanently running but started on user demand only. To avoid wasting money on Elastic IP addresses (you are charged while they are NOT attached), we make use of the random public IP provided by AWS and update our DynDNS address for this server.
- Create a DynDNS account if you don't have one
- Create a hostname (eg. sample.mydomain.net)
- Install Inadyn
sudo apt-get install inadyn (for Ubuntu or Debian)
- Add this line to a start-up script
inadyn --username myuser --password mypwd --iterations 1 --alias sample.mydomain.net
(with --iterations 1 the command is executed only once)
inadyn makes use of http://checkip.dyndns.com/ to retrieve the IP address.
On top of that, the server switches off automatically at night (see blog entry) and the user uses a little web frontend to start the server again on his/her own.
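The nightly switch-off can be done with a simple cron entry on the instance itself; a sketch (the file name and the time are assumptions):

```
# /etc/cron.d/night-shutdown (hypothetical file name)
# Halt the instance at 01:00 every night; an EBS-backed instance
# then stays stopped until a user starts it again via the frontend.
0 1 * * * root /sbin/shutdown -h now
```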
About once a year I revisit this topic (link) again (usually when the plugin causes trouble). Now I get this signature error:
AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID:..
The good news first:
The S3 plugin became mainstream; you can install it from the plugin page under Jenkins Administration | Plugin Manager. You don't need to build the plugin yourself any longer and can skip the rest of this entry.
The long version:
It seems the error is caused by a ‘+’ sign in the access key, which trips up the encoding function used (see issue). The latest build (Sep 2012) should fix this problem.
If you want to build it yourself, you need to get the source code from git and build the plugin file; beware, as it requires Maven 3 now. The instructions below apply to Ubuntu.
- sudo add-apt-repository ppa:natecarlson/maven3
- sudo apt-get update
- sudo apt-get install maven3
- Change to any folder and clone the plugin from https://github.com/jenkinsci/s3-plugin
git clone https://github.com/jenkinsci/s3-plugin.git
- After a while of downloading dependencies you should get a .hpi file for Jenkins
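For completeness, the build step itself; a sketch assuming the natecarlson PPA, which installs the Maven 3 binary as mvn3 (the exact artifact path may differ):

```shell
# Build the plugin; the PPA installs Maven 3 as mvn3
cd s3-plugin
mvn3 package
# the plugin ends up in the target folder, e.g. target/s3.hpi
```

Upload the resulting .hpi file via Jenkins Administration | Plugin Manager | Advanced.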
This is the third part of the tutorial, where I improve a few things. I will not walk through the complete code but highlight a few important points and give you the complete source code at the end.
To recap, my requirements:
- I want to allow users in my company to start and stop instances on their own, without them logging in to the AWS console.
- Only specific instances are available to them.
- Avoid using Elastic IPs (you pay for them while they are not assigned)
- Make it configurable
The improvements in this version:
- Remove the hardcoded access keys and place them encrypted in a properties file.
- Only instances that are not protected can be started or stopped.
- Update DynDNS entries from the application
- Some cosmetic cleanup of the control panel