Crowdfunding Projects I back (2)

Another small project I backed is the Gamebuino, an Arduino-based retro game console: a simple concept to pick up basic game programming with an 8-bit gadget that reminds me of the Game Boy that Nintendo launched in April 1989. Amazingly, the one-man project managed to gather 1,000% of the funding he asked for. I backed the device for $25 as an early bird.

Hardware History Lane – Casio Cassiopeia

Over the years we spend a lot of money on gadgets and electronics, only to see their value drop to zero and the device become outdated the moment we open the box for the first time. While doing some spring cleaning I unearthed the Casio Cassiopeia that I bought in 2001 (for ~800 DM); surprisingly, it still charges and works.

Casio Cassiopeia

Casio Cassiopeia EM-500G

This is the EM-500G, a slimmed-down version of the E-125. Some specs:

  • CPU: NEC VR4122 MIPS (150 MHz)
  • Memory: 16 MB ROM
  • Display: LCD, 240 × 320 pixels, 65,536 colors
  • Interfaces: Serial/USB and IrDA
  • MultiMediaCard slot
  • OS: Windows Pocket PC 3.0

Compared to today's mobile phone and tablet hardware this seems like nothing (vs. e.g. the dual-core 1.7 GHz CPU and 2 GB RAM of a Samsung Sx phone).
I am just wondering what we have gained in 13 years, with CPU speed times 10 and memory times 100, from a user's point of view. Yes, we have Android and iOS with 1,000,000 applications to download, 3D games on HD screens, music and videos (the Cassiopeia can handle those too, to some extent), but the basic features are still the same. Back then I used the Cassiopeia to dial in remotely to Unix servers, using a Siemens S35 as a modem.

Casio Cassiopeia EM-500G


Crowdfunding Projects I back

Crowdfunding is becoming more and more popular, with many successful projects coming out of the various platforms on the web (mostly Kickstarter and IndieGoGo). I like the idea of independent, smart people coming up with an idea and letting a product or concept take off without backing from a huge MNC (though such a company might later buy a crowdfunded project and turn off its supporters, but that is another story). I believe crowdfunding can be a source of genuine products which are not made solely to hog patents and increase shareholder value.

Having done panoramic and spherical photography for more than 15 years now, I am excited about the new ideas, technologies and products coming up.
Sometimes you should follow your ideas or visions: back in 2007 I did some basic research for a panorama rig of my own, similar to the projects below (link), but I never really completed it, and with the requirement to export the images and stitch them on a PC it was not very practical. In 2007 I did not see an option to stitch with on-board hardware.

One already successfully funded project is the Panono Camera Ball (a camera in the shape of a ball that is thrown into the air to snap a full spherical image with its 36 small built-in cameras).


I am backing two new projects, still in their funding phase, with US$300 each. Both try to create 360-degree images and videos:

The CentrCam
At the time of writing the project still has to raise another US$360,000 in 6 days, so success seems unlikely.

The 360Cam
which is already 280% funded.

Let's see who wins the race (they are not competing, I guess), but it is a bit strange that the 360Cam has a target of only US$150,000, with a much richer feature and quality list, compared to the US$900,000 target of the CentrCam, which would output video in lower quality and smaller resolution. Anyway, let's wait for the funding results; I am happy to support both (at the very least I add both to my panorama collection).

FB acquires WhatsApp – Bye and Thanks for the Fish

WhatsApp, known for its massive security issues but still used by millions of people as a free replacement for SMS and MMS, was acquired by FB, one of the biggest (maybe the biggest) data harvesters on the internet. I don't use FB, and the acquisition is a reason to finally move on to a more secure communication tool: Threema (a made-in-Switzerland app with end-to-end encryption). I hope they won't sell privacy for money. Please help to spread the word.

It is NOT free, but it is time to understand that FREE comes at a price!

Noise in LinkedIn versus Stackoverflow

The internet is a huge dumping ground full of knowledge and know-how sharing, a market and meeting place. Given the trillions of websites, one must be selective about where to spend one's time. StackOverflow is certainly a good investment (both to ask and to answer).

Have you ever noticed that you pretty much don't see any spam postings on StackOverflow?
I also joined LinkedIn (a few years back already), and I still don't understand how so many so-called professional groups get flooded with rubbish postings, usually offering jobs where you earn a bomb by filling out surveys, and other nonsense. I am not sure why LinkedIn is not capable of sorting this out, or why the group owners let anyone in, even without a profile.

For the fun of it (internet forensics for starters), I did a little background check on one of these postings. Quite often they are posted by someone without a public profile, always a woman with an attractive-looking profile picture and a fancy name (Cindy H., Evelyn P., ...). The URLs in the postings are usually dating sites or other drive-by virus sites. You can backtrack an image and check with TinEye where it is used on the web. I did it with a person called Jessica P. and put the image link into TinEye. It leads to a Ukrainian dating site: her name is Irina from Yalta and she is interested in dancing, swimming, shaping, aerobics and travelling. What would she do in an IT forum? Supposedly she works as a translator; I suspect even this is fake. Anyway, you can do the same with these 2 or 3 simple steps.
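If you want to script the lookup, here is a minimal sketch. It only builds the TinEye search URL from an image address; the image URL is a made-up example, and I assume TinEye's public search?url= pattern, which may change:

```shell
#!/bin/sh
# Build a TinEye reverse-image-search URL for a given profile picture.
# IMAGE_URL is a hypothetical example address, not a real profile.
IMAGE_URL="http://example.com/profiles/jessica_p.jpg"

# Minimal URL encoding: escape ':' and '/' in the image address
ENCODED=$(printf '%s' "$IMAGE_URL" | sed -e 's|:|%3A|g' -e 's|/|%2F|g')

SEARCH_URL="https://tineye.com/search?url=${ENCODED}"
echo "$SEARCH_URL"
```

Open the printed URL in a browser (or fetch it with curl) and TinEye lists the pages where the image appears.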

PostgreSQL Replication Express Setup

The system I work on is deployed almost solely on the Amazon AWS platform. Even though I try to design the architecture so as not to be locked in too much to Amazon, I make use of the Amazon tools and products as much as possible (EC2, VPC, S3, SNS). PostgreSQL is our reference DB and the only DB product in production environments, and we still run it on dedicated instances. I am quite delighted about AWS's recent move to offer RDS with PostgreSQL. While it is still in BETA and I have not yet started a conclusive test and migration plan, I need to maintain our existing instances.

There are plenty of books and tutorials about setting up PostgreSQL replication with on-board tools. Without going into the details, I share an express setup in this tutorial based on Streaming Replication, which has been part of PostgreSQL since version 9.0. I highly recommend reviewing the parameters and settings from the tutorial below, as your project might have different requirements.


Notes

  • The tutorial is based on PostgreSQL 9.2 running on Ubuntu Server.
  • Paths and settings are all the PostgreSQL defaults.
  • This is an async setup: the master will not wait for feedback from the slave and continues to work even if the slave is not available.

Prerequisites

  • Two servers running the same PostgreSQL version (9.0+)
  • Backup your data or use a sandbox environment.
  • In the tutorial I refer to
    MASTER (ip: and
    SLAVE (ip:


Setup the master

  • Create a replicator user
    sudo -u postgres psql -c "CREATE USER replicator REPLICATION LOGIN ENCRYPTED PASSWORD 'mypassword';"
  • Add the slave ip to /etc/postgresql/9.2/main/pg_hba.conf
    host    replication     all         trust
  • Modify parameters in /etc/postgresql/9.2/main/postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3
    checkpoint_segments = 3
    wal_keep_segments = 3

    Review these parameters and set them up according to your requirements

  • Start the PostgreSQL instance
    service postgresql start
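The master-side steps above can be sketched as a small shell script. To stay harmless, this sketch writes into a scratch directory instead of /etc/postgresql/9.2/main; <slave-ip> is a placeholder you must fill in, and the values should be reviewed for your setup:

```shell
#!/bin/sh
# Sketch of the master-side configuration, written to a scratch directory
# so it can be dry-run safely (real path: /etc/postgresql/9.2/main).
CONF_DIR=$(mktemp -d)

# WAL settings needed for streaming replication (review the values!)
cat >> "$CONF_DIR/postgresql.conf" <<'EOF'
wal_level = hot_standby
max_wal_senders = 3
checkpoint_segments = 3
wal_keep_segments = 3
EOF

# Allow the slave to connect for replication.
# <slave-ip> is a placeholder; 'trust' matches the tutorial, md5 is safer.
echo "host    replication     all     <slave-ip>/32     trust" >> "$CONF_DIR/pg_hba.conf"

echo "configuration written to $CONF_DIR"
```

On a real master you would apply the same lines to the files under /etc/postgresql/9.2/main and then start (or restart) the instance.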

Setup the slave

  • Modify parameters in /etc/postgresql/9.2/main/postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3
    checkpoint_segments = 3
    wal_keep_segments = 3
    hot_standby = on
  • Stop the PostgreSQL instance
    service postgresql stop
  • Clean up the old data directory
    sudo -u postgres rm -rf /var/lib/postgresql/9.2/main
  • Copy the database from the master with pg_basebackup
    sudo -u postgres pg_basebackup -h -D /var/lib/postgresql/9.2/main -U replicator -v -P -x

    You can see the backup progress; it should result in something like:

    root@:/var/lib/postgresql/9.2/main# transaction log start point: 41/7D000020
    31524952/31524952 kB (100%), 2/2 tablespaces (/var/lib/postgresql/9.2/main/PG_9.)
    transaction log end point: 41/7D0002A8
    pg_basebackup: base backup completed
  • Create a recovery configuration file /var/lib/postgresql/9.2/main/recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host= port=5432 user=replicator password=mypassword sslmode=require'
    trigger_file = '/tmp/postgresql.trigger'
  • Start the PostgreSQL instance
    service postgresql start

    Check the PostgreSQL logs.
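The recovery configuration can be generated with a script as well. This sketch writes recovery.conf into a scratch directory; <master-ip> and the password are placeholders for your real values, and the real file belongs in the data directory, owned by postgres:

```shell
#!/bin/sh
# Sketch: generate recovery.conf in a scratch directory
# (real path: /var/lib/postgresql/9.2/main/recovery.conf).
DATA_DIR=$(mktemp -d)

# <master-ip> and 'mypassword' are placeholders for your setup.
cat > "$DATA_DIR/recovery.conf" <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=<master-ip> port=5432 user=replicator password=mypassword sslmode=require'
trigger_file = '/tmp/postgresql.trigger'
EOF

echo "recovery.conf written to $DATA_DIR"
```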

Test the replication

  • Open any table with pgAdmin on the master and apply a change; it should be reflected on the slave within a short time.
  • Try to change data on the slave; it will fail because the slave runs in hot-standby (read-only) mode.

Monitor the replication

  • The master instance will not alert you when replication is down. You can check yourself, or create a little cronjob to do it for you, with this SQL statement:
    sudo -u postgres psql -x -c "select * from pg_stat_replication;"

    You get the status back if replication is running; otherwise the statement returns 'no rows'.

    Check replication

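Such a cronjob could look like the sketch below. The check logic sits in a function so it can be tried without a live cluster; on the master you would feed it the row count from pg_stat_replication (the psql call from above), and the plain echo is a stand-in for whatever alerting you use:

```shell
#!/bin/sh
# Sketch: cron check that alerts when pg_stat_replication has no rows.
# check_replication takes the row count returned by psql, so the logic
# can be exercised without a running cluster.
check_replication() {
    if [ "${1:-0}" -eq 0 ]; then
        echo "ALERT: no WAL sender connected"
    else
        echo "OK: $1 standby(s) streaming"
    fi
}

# On the master you would call it like this (requires local postgres access):
# check_replication "$(sudo -u postgres psql -t -A -c 'select count(*) from pg_stat_replication;')"
check_replication 0
check_replication 1
```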