Convert Palm Calendar to Google

I chose to use Google for my calendar. I already use Google for some of my mail and I sync contacts with Thunderbird using Zindus. Using Google means that my information is available on any platform. Google provides a nice sync interface for the iPod touch.

One annoying thing about the Google calendar integration with the iPod Touch is that you can’t choose the color of a calendar. You choose to sync a calendar, and if its color isn’t what you want, you tell Google not to sync it, wait 10 minutes, then tell it to sync again. Repeat until you get the color you want.

I started writing a Python program to do the calendar conversion, as jppy gave me access to the Palm database and Google has a Python version of their API. However, I found the Python API lagging behind the Java API in supported features, such as birthday and anniversary information. So I decided to use Java to interact with Google. In initial testing this worked well; however, I needed a way to access my Palm database from Java.

Instead of direct access, I chose to write a C++ program that reads the Palm database using the pilot-link library and outputs Google protocol buffers. This kept me from having to rewrite the code that parses the Palm database and gave me a format that I could read in most any language. The C++ program is available on github as palm-export. The Java program for importing into Google calendar is also on github as calendar-import.

As of this writing I can import all calendar information except recurrence exceptions. These occur when you have a repeating event and have chosen to cancel some of the events in the series. Google supports this concept, and I have even gotten it to work for some events. However, since I haven’t gotten it to work for all of them and can’t tell when it fails, I came up with a workaround. Instead of creating the exception, I create a new single event with “EXCEPTION:” prepended to the event name. This way I can recognize when one is canceled and just cancel the single event in Google, which works fine.

After the import I found a few of my appointments off by an hour. I suspect the time zones on some recurring events didn’t come across correctly, so you’ll want to check any appointments in your Palm that were created with time zones.

A couple of things to note: the iTouch doesn’t support all of the repeat types and alarm durations that the Palm does, but Google does. These show up as “Custom” on the iTouch and cannot be entered directly. So when you want to use an alarm or repeat type that the iTouch doesn’t support, you’ll need to use the Google web interface to the calendar and then wait for the data to sync, which is pretty fast.

Recently (2/17/2011) I’ve noticed that Google sometimes has trouble loading all of my calendars, so they disappear from the iTouch and then come back later. I’m hoping this is a temporary situation that will be corrected in the future.

Replacing my Palm

I have been using a Palm as my PDA for a number of years. With the decline of the Palm platform I have been looking for a replacement PDA. Smartphones aren’t an answer for me, as I really don’t want to pay for a data plan. I was looking at the Archos 43, but I haven’t been able to get my hands on one to see how well it works; in particular, whether the resistive touchscreen is reasonable. Furthermore, Best Buy removed the Archos 43 from their website after listing it for 2 months. So I broke down and bought an iPod Touch.

I run Linux at home, so a major concern is how to sync data. In this post and following posts I will describe the tools that I’ve chosen to use and why, as well as how I imported my old data. Hopefully this will help others who are considering making such a change, looking to convert from one PDA to another, or just curious what tools others are using.

I also wanted to make sure that all of my information is available offline, just as it was on my Palm. So I have mapped each of my Palm apps to an app on my iTouch, plus either a web application or a desktop application to view the data on my Linux computer.

  • Todo: Appigo Todo, backed by Toodledo to have an online backup. This is also a handy way to edit a bunch of todo items. Note that the free account doesn’t support the hierarchies of the Appigo application, but at least the data is there. To import data here I just manually entered the todo items.
  • Memo: Appigo Notebook, backed by Toodledo. I tried Evernote, but it was overkill for my needs and I wasn’t able to easily import my information from my Palm. To import data I just copied and pasted from jPilot into Toodledo. 3/20 update: I’ve switched to PlainText for my memos app. It’s really nice because it stores each memo as a text file in Dropbox, where it can easily be edited on my desktop.
  • pFuel: Gas Cubby is a nice application for keeping track of gas mileage and maintenance reminders. It also integrates very well with Appigo’s todo application, so maintenance reminders can be turned into todo items. Importing data here was just a matter of exporting from pFuel through the Palm memos application and then massaging the CSV file to match what Gas Cubby wants for its input format (see the sketch after this list).
  • plucker (offline web pages): See this post for details.
  • MyBible: Laridian has made MyBible for the Palm and PocketBible for the iPod Touch (and other iOS devices). I was able to transfer all of my Bibles easily. This worked out very well.
  • Keyring: MyKeePass; more information on this in a future post.
  • Titrax: HoursTracker is a nice application for keeping track of time on projects.
  • Dropbox: I didn’t have this on my Palm, but it’s pretty handy for keeping photos, as the photo album application won’t let me create albums.
  • Podcasts: Podcaster. I tried iTunes, but you can’t subscribe to a podcast on the device. Podcaster allows me to do this and has a nice refresh feature to grab all of my latest podcasts.
  • Calendar: Google Calendar; more details in a later post.
  • Contacts: Google Contacts; more details in a later post.
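
As an illustration of the sort of CSV massaging the Gas Cubby import took, here is a hypothetical sketch; the column layouts below are made up, and the real pFuel export and Gas Cubby import formats will differ:

# Pretend pFuel exports date,odometer,gallons,price-per-gallon and that
# Gas Cubby wants date,odometer,gallons,total-cost; reorder and compute:
awk -F, 'BEGIN { OFS = "," } { print $1, $2, $3, $3 * $4 }' pfuel.csv > gascubby.csv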

Setting up Neatx

I’ve been using the free version of the NX server from NoMachine to access my Linux hosts and it’s been working really well. I’ve set it up at work to allow shared access to our compute server; however, we’ve run into an issue there: the free version limits how many people can use the server. We really don’t need the support, and this isn’t a big production shop where we need a rock-solid version, so I started looking for open source alternatives. I found FreeNX, but development there seems to have paused for now. I then found Neatx. It’s hosted on Google Code and appears to be a little rough around the edges, but people claim it works.

There isn’t an installer, but I’m a developer, so I figured this shouldn’t be too bad. It turns out that after about 2 hours this evening I was able to get it going. Here are the steps that I followed. Note that I did this on openSUSE 11.3, but the instructions should work elsewhere too.

  1. Install nxclient, nxnode, and nxserver from NoMachine’s website. I grabbed the RPMs for openSUSE and they installed just fine. I had originally done this because these were the binaries I was using; as it turns out, I needed some libraries from them for Neatx to run properly.
  2. Copy the libraries off to the side to use later. If someone knows how to get around this step, I’d be very happy to hear it.
    • sudo cp -r /usr/NX/lib /usr/local/nx-lib
  3. Remove nxclient, nxnode, and nxserver. We don’t need them now.
  4. mkdir nx
  5. cd nx
  6. Download the following source files from NoMachine’s open source site. My downloads were version 3.4.0, but there may be a newer version out when you read this.
    • nxagent-3.4.0-11.tar.gz
    • nxauth-3.4.0-3.tar.gz
    • nxcomp-3.4.0-7.tar.gz
    • nxcompext-3.4.0-1.tar.gz
    • nxcompshad-3.4.0-3.tar.gz
    • nxproxy-3.4.0-2.tar.gz
    • nx-X11-3.4.0-4.tar.gz
  7. Untar the sources: for i in *.tar.gz; do tar -xzf $i; done
  8. cd nx-X11
  9. make World
  10. sudo cp programs/Xserver/nxagent /usr/local/bin/nxagent.real
  11. Now here is where we use the libraries. I tried finding all of the libraries in the sources that I built, but they weren’t enough; something was still missing. So you need to create /usr/local/bin/nxagent as a script with the following content:
    • #!/bin/bash
    • export LD_LIBRARY_PATH=/usr/local/nx-lib
    • exec /usr/local/bin/nxagent.real "$@"
  12. Download Neatx from Subversion. Note that there weren’t any packages when I did this.
    • svn co http://neatx.googlecode.com/svn/trunk/neatx neatx-read-only
  13. Follow the instructions in neatx-read-only/neatx/INSTALL
  14. Fix the ownership of nx’s home dir:
    • sudo chown -R nx ~nx
  15. Edit /usr/local/etc/neatx.conf to match your system. Some paths will likely need to be changed.

You should now have Neatx set up. Connect to it from any NX client and give it a try. The default neatx.conf has the log level set to debug. Leave this on until you’ve successfully connected, then change it to info to keep your logs from filling up. I spent a lot of time looking through the debug logs for errors in my setup. Neatx is written in Python, although you don’t need to know the language to be able to debug most of the errors.

Tips for Brother printers

Brother makes great, inexpensive laser printers. However, the technology they use to determine when you need new toner is a little too conservative: I’ve found that it reports the toner is out when there is still quite a bit left. Unfortunately, at that point the printer refuses to print any more pages unless you replace the toner, or make it believe there is more toner. Being the frugal person that I am, I like to print all the way to the end of my toner. Here’s what I’ve done to be able to print all the way to the end of a toner cartridge.

If you have a black and white laser printer, you need some black electrical tape. Other tape may work, but this is what I’ve used. Pull out the toner cartridge and note the round, clear plastic window on each end of the cartridge. The printer shines a light through these, and if the light reaches the other side, you’re told that you need more toner. If you put a piece of black tape across the windows, the printer believes the cartridge is still full. Granted, now the only way to know that you’re low on toner is that things don’t print so well, but you are able to print to the end of the cartridge.

If you have a color printer, the process is a little different. I purchased an HL-4040CDN and found that the black tape trick didn’t work. Instead I dug up these instructions at fix your own printer.

  1. Open the front cover of the printer
  2. Press and hold the cancel button
  3. Press the reprint button while still holding cancel
  4. The reset menu appears
  5. Go to the appropriate cartridge on the menu, reset it, and you’re done!

FYI… Pressing the “Go” button and the up arrow gives you the parts life reset menu (drum, laser, fuser, etc.).

Don’t delay upgrades

I usually try to upgrade my computers fairly quickly once a new operating system or a new version of an application comes out, even if there aren’t any particular new features I’m looking for. However, sometimes I’ve been slow to upgrade servers because I’m too busy with other stuff or it’s too hard to schedule downtime. I was reminded this week why it’s a good idea to do upgrades sooner rather than later.

I had some computers that I had gotten about a year behind on upgrading the operating system. The security patches were applied, but there was a newer version of the OS and I just didn’t have the time to take care of it. I have since moved out of that job, but I still depended on the server. Now it’s up to someone else to upgrade the system, and they’re much like me: very busy with other things. So time goes on, and now this server is 3 years behind on the OS upgrade, with some major changes in the OS along the way.

Now it’s time to replace the machine’s hardware. Since it’s a Linux machine, the standard answer is to just move the drives and keep going. I suspected there might be problems, so I left my number with my replacement. He started the replacement and, well, it didn’t go smoothly. As it turns out, the new hardware wasn’t quite supported by the older OS, such that the system would boot partially but not completely. So we ended up doing a full upgrade across 3 minor versions and 1 major version of the OS and then fixing up all of the little things that broke along the way.

In the end this probably took longer than it would have if we had upgraded along the way, when the system configuration would have been fresher in our minds. Plus the changes wouldn’t have been so drastic, and things likely would have migrated much more easily.

So remember to make time to upgrade your systems right away.

Migrating Bacula from MySQL to PostgreSQL

I’ve been looking to migrate my Bacula installation from MySQL to PostgreSQL. Personally I like PostgreSQL better, and the claims on the bacula-users list were that it’s faster. So I did a bunch of reading, then tested the database conversion, and finally made it through the process. Here’s how to do it. The system I did the migration on is an openSUSE 11.2 system.

I first upgraded my install from Bacula 3.0.3 to 5.0.0, still using MySQL. This was a pretty straightforward process, just a little different because the packaging of the RPMs changed between 3.0.3 and 5.x.

Back up the existing config files and the database

tar -czf /root/bacula-backup.tar.gz /etc/bacula
mysqldump -u bacula -ppassword bacula > bacula-3.0-mysql.sql

Remove the old RPMs

zypper remove bacula bacula-bat bacula-updatedb bacula-server

Install the new RPMs

zypper install bacula-console bacula-console-bat bacula-director-mysql bacula-storage-mysql bacula-client

Update the database

/usr/lib/bacula/update_mysql_tables -u bacula -ppassword bacula

The next step was the hard one: converting the database. I used a post from the bacula-users list to come up with the appropriate mysqldump line. I then repeatedly created the PostgreSQL database and tried to import the dump until it imported without serious errors. In the end this is the pipeline that created a good dump:

mysqldump -t -n -c --compatible=postgresql --skip-quote-names --skip-opt --disable-keys --lock-tables -u bacula -ppassword bacula \
  | grep -v "INSERT INTO Status" \
  | sed -e 's/0000-00-00 00:00:00/1970-01-01 00:00:00/g' \
  | sed -e 's/\\0//' > fixed-bacula-backup.sql

The mysqldump line is pretty much what was in the mailing list post, except that I dumped all of the tables at once. The grep gets rid of inserts into the Status table; I was having issues with duplicate keys there, and the bacula-users list assured me that this table is created by the make_postgresql_tables script. The first sed fixes some bad dates: MySQL allows a date of all zeros, PostgreSQL doesn’t, so I just bumped the zero dates to the beginning of the Unix epoch. The second sed removes the extra null characters that showed up on all of the inserts into the Log table. I’m not sure what caused these, but PostgreSQL doesn’t like to import them, and removing them made it much happier.

I then set up a .pgpass file in root’s home directory so that I could secure the PostgreSQL database with a password and not put it in my Bacula config files. You can learn about the .pgpass file in the PostgreSQL documentation.
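
The format is one connection per line, host:port:database:username:password. Mine looked something like this, with a placeholder password:

localhost:5432:bacula:bacula:password

Remember to chmod 600 /root/.pgpass; PostgreSQL ignores the file if it is readable by anyone else.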

Next it’s just a matter of creating the PostgreSQL tables as the postgres user (or some other user with PostgreSQL superuser privileges)

./create_postgresql_database
./make_postgresql_tables
./grant_postgresql_privileges

And then load in the data. This load took a little over an hour and a half on my system, so be prepared to wait a bit.

psql -Ubacula bacula < fixed-bacula-backup.sql

Now one needs to reset the sequences that PostgreSQL uses to generate ids. I started with the instructions in the Bacula manual, but needed to add a couple of missing sequences.

SELECT SETVAL('basefiles_baseid_seq', (SELECT MAX(baseid) FROM basefiles));
SELECT SETVAL('client_clientid_seq', (SELECT MAX(clientid) FROM client));
SELECT SETVAL('file_fileid_seq', (SELECT MAX(fileid) FROM file));
SELECT SETVAL('filename_filenameid_seq', (SELECT MAX(filenameid) FROM filename));
SELECT SETVAL('fileset_filesetid_seq', (SELECT MAX(filesetid) FROM fileset));
SELECT SETVAL('job_jobid_seq', (SELECT MAX(jobid) FROM job));
SELECT SETVAL('jobmedia_jobmediaid_seq', (SELECT MAX(jobmediaid) FROM jobmedia));
SELECT SETVAL('media_mediaid_seq', (SELECT MAX(mediaid) FROM media));
SELECT SETVAL('path_pathid_seq', (SELECT MAX(pathid) FROM path));
SELECT SETVAL('pool_poolid_seq', (SELECT MAX(poolid) FROM pool));

Updates I needed to add:

SELECT SETVAL('device_deviceid_seq', (SELECT MAX(deviceid) FROM device));
SELECT SETVAL('location_locationid_seq', (SELECT MAX(locationid) FROM location));
SELECT SETVAL('locationlog_loclogid_seq', (SELECT MAX(loclogid) FROM locationlog));
SELECT SETVAL('log_logid_seq', (SELECT MAX(logid) FROM log));
SELECT SETVAL('mediatype_mediatypeid_seq', (SELECT MAX(mediatypeid) FROM mediatype));
SELECT SETVAL('storage_storageid_seq', (SELECT MAX(storageid) FROM storage));
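
If you want to double-check that no sequences were missed, you can list every sequence in the database from psql; anything that shows up here should have gotten a SETVAL above:

SELECT relname FROM pg_class WHERE relkind = 'S' ORDER BY relname;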

After that I needed to modify the Catalog section in my bacula-dir.conf file to use localhost for the “DB Address”, remove the MySQL socket reference, and remove the password reference.
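
For reference, the result looked something like this sketch; the directive names follow the stock sample bacula-dir.conf, and yours may vary:

Catalog {
  Name = MyCatalog
  dbname = "bacula"; dbuser = "bacula"
  DB Address = localhost
}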

I also needed to modify the backup catalog command to be this (all on one line):

RunBeforeJob = "/usr/lib/bacula/make_catalog_backup bacula bacula \"\" localhost"

Computer companies being cheap can be annoying

I have a 15″ Macbook Pro unibody for work. Recently I was looking at the hard drive specs because I needed to upgrade another user’s laptop. It turns out that the SATA controller is capable of 3Gbps, but the drive from Apple is only capable of 1.5Gbps. What’s really annoying about this is that I specifically requested a 7200RPM drive from Apple to get some extra performance. Granted, the interface usually isn’t the bottleneck here, but it’s still annoying.
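
As an aside, on a Linux box you can see what link speed a drive actually negotiated by checking the kernel log; a “SATA link up 1.5 Gbps” line on a 3Gbps controller tells you the drive is the limiting factor:

dmesg | grep -i 'SATA link up'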

Don’t change 2 things at once

So I’m working on a project that has a number of components to it. In particular, the data is all stored in a MySQL database, and for various reasons we wanted to convert it to a PostgreSQL database. So off I went, working on a branch to make the changes and test the system. Meanwhile other parts of the system were changing as well, in particular the size of the input data. When it came time to merge, I got everything set up, merged the changes in, and all the tests passed, so I committed.

Then we noticed that the nightly performance run on the continuous integration server was really slow, taking 2 hours instead of 15 minutes. We had noticed some slowness on logins before, but now the logins were slow and the software being tested was really slow. So we went about testing the CI server and found that openSUSE had been kind enough to keep track of the MAC address from the original system (we had installed on a different drive and chassis and then moved to this identical chassis). This caused the IPv6 link-local address of this machine to match that of the previous system, which happened to be on the same network. This is bad, so we changed the settings back to the right MAC address and things were better, but still slow. So we decided to reinstall the OS; there were only a couple of directories of data to save, so no big deal.
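
If you hit something similar, the place openSUSE (and other distributions of that era) pins NIC names to MAC addresses is udev’s persistent net rules; comparing that file against the actual hardware exposes a stale entry (the path may differ on your system):

cat /etc/udev/rules.d/70-persistent-net.rules   # the MAC addresses udev has recorded
ip link show                                    # the MAC addresses actually present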

After the reinstall, logins were faster, but the performance test was still slow, so we blamed the changes to the input data. Over the next month or so we optimized the handling of the input data and found ways to reduce it some, though not back to the original size. The performance test got better, but still wasn’t where it should be; we were down to about 1.5 hours.

Other tasks kept me busy, so I didn’t get back to this for another couple of weeks. At that point I ran the performance test on another development machine and it finished in 15 minutes! This is great! But it was still slow on the CI server. What’s different? I started checking software versions and everything matched up. What I did notice was a difference in the drives. So I got another drive installed in the CI server so I could test different drive configurations for the PostgreSQL data directory. Here are the results:

XFS (logbufs=8): ~4 hours
ext4: ~1 hour 50 minutes
ext3: 15 minutes
ext3 on LVM: 15 minutes
reiserfs: ~1 hour 50 minutes
ext3 barrier=1: ~15 minutes
ext4 nobarrier: ~15 minutes
jfs: ~15 minutes

So as you can see, the filesystem really makes a difference. It turns out the development machine was using ext3 on LVM while the CI server was running ext4. After posting to the postgres-performance mailing list about this, it turns out that I can have either speed or safety: with ext3, if the power goes out I could have a corrupted database; with ext4 that isn’t likely to happen. Given that I’m doing research here, and that if the power goes out during a test we have much bigger problems, I switched to ext3 and left it at that.
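
Write barriers appear to be the lever here; note how the ext4 nobarrier run above matches ext3 once barriers are off. If you want to experiment with the trade-off yourself, a hypothetical fstab entry for a dedicated PostgreSQL data partition with barriers disabled might look like this (the device and mount point are made up):

/dev/sdb1  /var/lib/pgsql  ext4  defaults,nobarrier  0  2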

Now if I had just changed one thing (the database), rather than the database, the input data, and the CI server setup all around the same time, I probably would have caught this much sooner. It also would help to have my development systems set up with not only the same software, but the same filesystems too.