Sankasaurus

Just another tech blog – ranting since 2006

Getting rid of the white halo around black text on Mac OSX

So I just purchased an external monitor for my home office. When I plugged it into my MacBook Pro using a Mini DisplayPort to DisplayPort cable, the resolution was great (2560x1080), but the picture looked terrible. Everything was fuzzy, and most annoying of all, black text had a white halo effect around it. I could still read everything, but it wasn’t pretty.

After much searching, I found this post:

http://www.ireckon.net/2013/03/force-rgb-mode-in-mac-os-x-to-fix-the-picture-quality-of-an-external-monitor

It explains that OS X thinks it is plugged into a TV and so uses the YCbCr color space rather than RGB. After running the script, following the instructions and restarting, the monitor is working beautifully.
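
As I understand it, the script works by reading the monitor’s EDID, flipping it to advertise RGB only, and writing the result out as a display override. If you are curious, you can peek at the raw EDID it patches while the monitor is connected:

ioreg -l | grep IODisplayEDID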

Hopefully this helps someone else, and will serve as a reminder to me should I encounter it again.

Quick slideshow of image files on OSX

I only recently discovered a quick and easy way to start a slideshow on OSX from the images in a folder. In Finder, select all the images you want to display (Command-A for all of them) and then press:

Option-Command-Y

Not sure why this isn’t more obvious as it is really handy. Enjoy.

Conversion to Octopress complete

Well, I was up until the wee hours last night, but I finally converted my blog to Octopress and have it hosted on AWS S3. I must say, I really like Octopress and am becoming a big fan of generated static websites (such as GitHub Pages), where anything complicated is handled by JavaScript. The only unfortunate thing is that I couldn’t bring the comments over from WordPress and put them into Disqus.

Generating the site and pushing it to S3 is as easy as:

rake generate
cd public
s3cmd sync . s3://blog.pas.net.au/
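
One gotcha with hosting on S3 is that the objects need to be publicly readable or the site will not load. s3cmd can set the public ACL as part of the sync (the -P / --acl-public flag):

s3cmd sync --acl-public . s3://blog.pas.net.au/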

The source in case you are interested is on Github at:

https://github.com/pas256/blog

Thanks to Jeff Barr for introducing me to Octopress on his AWS Road Trip.

Looking forward to getting back into this – it has been too long without a good rant. :)

Creating large empty files quickly

I have seen many people on the interwebs using dd and /dev/zero to create empty files. This is fine if the file is small, but for a 50 GB file it simply takes too long, particularly on EC2. The solution? truncate, which just sets the file size and leaves you with a sparse file!

truncate -s 50G my-large-file

Boom – instant.
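
You can see the sparseness for yourself: ls reports the apparent size, while du reports the blocks actually allocated, which should be next to nothing:

ls -lh my-large-file
du -h my-large-file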

This is great for things like backing /tmp on EC2 with a file on the ephemeral storage, so /tmp is not limited to 10 GB (or whatever your root volume size is):

# create a sparse 50 GB file on the ephemeral disk
cd /mnt
truncate -s 50G big-tmp
# put a filesystem on it and loop-mount it over /tmp
mkfs.xfs /mnt/big-tmp
mount -o loop /mnt/big-tmp /tmp
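
Once mounted, df should report /tmp with the new 50 GB capacity:

df -h /tmp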

Git for OSX

I have just discovered a wonderful GUI for Git on Mac OSX. It is called GitX (L) (which is different from the original GitX).

You can get it here: http://gitx.laullon.com/

I no longer recommend Rackspace Cloud

Regular readers of this blog (currently 6 of you madmen) will notice I removed the “PAS Recommends” section linking to Rackspace Cloud. That is because I can simply no longer recommend them. It is a sad day.

They are wasting resources by creating pointless iPhone and Android applications, and sending out survey after survey, all while the Management Console continues to get worse and worse. Right now in fact, I get a “white screen” after logging in.

Amazon’s Web Services are kicking Rackspace Cloud’s arse by continuously improving and upgrading their services. AWS has it all, and is only getting better while Rackspace Cloud sits dormant. I cannot believe we still cannot create an image of a machine larger than 75 GB, and that backups fail silently if the size grows beyond that.

RIP Rackspace Cloud. I had much higher hopes for you.

Installing CDH3 on OS X

Installing Cloudera’s distribution of Hadoop (CDH3) on an OS X MacBook Pro is not difficult, if you get the steps right.

Go to: https://ccp.cloudera.com/display/SUPPORT/CDH3+Downloadable+Tarballs and download the Hadoop tarball.

We are going to run Hadoop in pseudo-distributed mode, where each daemon runs in its own Java process on the one machine – nice for a dev environment.

Open up a Terminal window, and run:

tar xvzf ~/Downloads/hadoop-0.20.2-cdh3u1.tar.gz
cd hadoop-0.20.2-cdh3u1/conf
cp ../example-confs/conf.pseudo/* .

Now we need to edit two files so that Hadoop knows where to write its data. This is where you decide where that data will live. I did:

mkdir -p ~/hadoop-data/cache/hadoop/dfs/name

Edit core-site.xml

<property>
   <name>hadoop.tmp.dir</name>
   <value>/Users/${user.name}/hadoop-data/cache/${user.name}</value>
</property>

Next, edit hdfs-site.xml

<property>
   <name>dfs.name.dir</name>
   <value>/Users/${user.name}/hadoop-data/cache/hadoop/dfs/name</value>
</property>

Finally, format HDFS, and start up the nodes:

cd ../bin
./hadoop namenode -format
./start-all.sh
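
If everything came up, jps should list the Hadoop daemons (NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker) alongside their process IDs, and HDFS should answer queries:

jps
./hadoop fs -ls /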

If you are being asked for your password a lot (the start scripts SSH to localhost), try this, assuming you have an SSH key pair generated:

cd ~/.ssh
cat id_rsa.pub >> authorized_keys   # append, so any existing keys are kept
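
To check it worked, SSH to localhost; it should log you straight in without a password prompt (on OS X you also need Remote Login enabled under System Preferences > Sharing):

ssh localhost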

If you have upgraded to OS X Lion (v 10.7), then you might see this every time you do something:

2011-07-29 21:22:05.997 java[7690:1903] Unable to load realm info from SCDynamicStore

You can ignore it. It has something to do with Kerberos authentication (I think), but I don’t yet have a solution for getting rid of it.

Git and SVN working together

I have been using Subversion for a long time, and am relatively new to Git. This post is a little tutorial of what I have learnt getting Git and SVN to play nicely together, primarily using git-svn.

My goal is to maintain the code in the original SVN repository while transitioning the team to Git. This means changes to either repository get reflected into the other one.

The first step is to create an empty Git repository to import SVN into (this will be your remote Git repository):

cd $HOME/git-repo
mkdir project
cd project
git --bare init

Now clone the empty Git repository so you have a working directory, and then import the SVN repository into Git. The directory names need to match (in this case, they are both “project”):

cd $HOME
git clone file://$HOME/git-repo/project
git svn clone -s file://$HOME/svn-repo/project

If all went well, you should have a “project” directory containing all of your files imported from SVN. This directory is also your Git clone. Now you can “push” these changes to your remote Git repository:

cd $HOME/project
git push origin master

Now make some changes in your working directory:

echo "This is a change in Git" > git-change.txt
git add git-change.txt
git commit -m "Adding a new file"
git push

We now have a change in Git that is not in SVN. To copy the change over to SVN, we do a “dcommit”:

git svn dcommit
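
To confirm the change made it across, take a look at the latest entry in the SVN log:

svn log -l 1 file://$HOME/svn-repo/project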

Let’s check out the SVN repository and make a change there:

cd $HOME
svn checkout file://$HOME/svn-repo/project/trunk svn-project
cd svn-project
echo "Here is a change in SVN" > svn-change.txt
svn add svn-change.txt
svn commit -m "Adding a new change in SVN"

To get this change into Git, we need to “rebase”. This is not as scary as it sounds:

cd $HOME/project
git pull
git svn rebase
git push

Hooray! We now know how to make changes flow both ways between SVN and Git.
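
Putting it all together, keeping the two repositories in sync day-to-day boils down to the same few commands:

cd $HOME/project
git svn rebase   # pull any new SVN commits into Git
git svn dcommit  # push any local Git commits to SVN
git push         # update the remote Git repository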

Android Market links

For those of you who, like me, totally missed the Publishing page, here is how to create a link to your Android application in the Android Market:

market://details?id=<packagename>

OR

http://market.android.com/details?id=<packagename>

So for Remembory, I have this:

market://details?id=com.gbott.remembory
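
You can test the link by firing it at a connected device with adb, which should open the Market straight to the details page:

adb shell am start -a android.intent.action.VIEW -d "market://details?id=com.gbott.remembory"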

The “old” way was to link to the search page by using this:

market://search?q=pname:<package>

…but the details page method saves the user a click, or a tough decision when there are two applications both called Remembory.

Rackspace Cloud images to Cloud Files

For doing anything serious on Rackspace Cloud, you need to be able to use machines with more than 2 GB of RAM. Problem is, machines larger than 2 GB of RAM could not be imaged (or backed up) – until now. About a month ago, they announced the ability to snapshot a machine into Cloud Files. Today I decided to take it for a spin.

Snag number one: there was no way to create an image of my new 16 GB machine. After talking to one person at Rackspace Cloud, I was none the wiser – snag number two being the lack of training for their support staff. After asking for the supervisor, he created a ticket for approval of the terms (letting me know I would be charged for the storage in Cloud Files) and off I went. Until I hit snag number three – the “Images” tab in the server details still showed no way of creating an image. I was told to use “My Server Images” instead, and hooray, I could make an image of my 16 GB machine.

Finally, the missing feature that pushed my company to AWS instead of Rackspace Cloud has been fixed! Congratulations to the Rackspace Cloud team.

Another thing I learnt today is that there are two data centers for Rackspace Cloud machines – DFW and ORD. The first server you provision gets assigned to one of the two, and whichever one it lands in is the data center all of your other servers will be put into as well. So if the first server goes into DFW, then all of the other ones you create will too. That is, until you delete all of your servers; then once again, the first server can land in either data center.

I am hoping that in the future we will have a choice about which data center a server is provisioned in, particularly since it would help with geographic distribution.