So I just purchased an external monitor for my home office. When I plugged it into my MacBook Pro using a Mini DisplayPort to DisplayPort cable, the resolution was right (2560x1080), but it looked terrible. Everything was fuzzy, and most annoyingly, black text had a white halo effect around it. I could still read everything, but it wasn’t pretty.
The fix I found talks about OS X thinking it was plugged into a TV, and using the YCbCr color space rather than RGB. After running the script, following the instructions and restarting, the monitor is working beautifully.
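For the curious, the rough shape of the fix (a sketch from memory, not the actual script) is to override the display’s EDID so OS X stops treating it as a TV:

ioreg -l -w0 | grep IODisplayEDID  # dump the current EDID from the IO registry
# The script patches the EDID to force RGB and writes an override file to
# /System/Library/Displays/Overrides/DisplayVendorID-XXXX/DisplayProductID-XXXX
# (the XXXX parts are placeholders for your display's vendor and product IDs)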
Hopefully this helps someone else, and will serve as a reminder to me should I encounter it again.
I only recently discovered a quick and easy way to start a slideshow on OS X from a list of images in a folder. From Finder, select the images you want to display (Command-A for all) and then press:
Option-Command-Y
Not sure why this isn’t more obvious, as it is really handy. Enjoy.
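If you prefer the Terminal, a rough equivalent (assuming your images are JPEGs in the current directory) is Quick Look’s qlmanage tool, which lets you arrow through the files:

qlmanage -p *.jpg  # open a Quick Look preview of the matching images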
Well, I was up until the wee hours last night, but I finally converted my blog to Octopress and have it hosted on AWS S3. I must say, I really like Octopress and am becoming a big fan of generated static websites (such as GitHub Pages), where anything complicated is handled by JavaScript. The only unfortunate thing is that I couldn’t bring the comments over from WordPress and put them into Disqus.
Generating the site and pushing it to S3 is as easy as:
rake generate
cd public
s3cmd sync . s3://blog.pas.net.au/
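If you want the uploaded files publicly readable and stale files cleaned up, s3cmd has flags for both (standard s3cmd options, though check that your version supports them):

s3cmd sync --acl-public --delete-removed . s3://blog.pas.net.au/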
The source, in case you are interested, is on GitHub at:
I have seen many people on the interwebs using dd and /dev/zero to create empty files. This is fine if the file is small, but for a 50 GB file it simply takes too long, particularly on EC2. The solution? truncate!
truncate -s 50G my-large-file
Boom – instant.
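The reason it is instant is that truncate creates a sparse file: the length is recorded in the file’s metadata, but no blocks are allocated until something actually writes to them. You can see the difference:

ls -lh my-large-file  # apparent size: 50G
du -h my-large-file   # disk space actually used: next to nothing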
This is great for doing things like backing /tmp on EC2 with a big file on the ephemeral storage, so /tmp is not limited to 10 GB (or whatever your root image size is):
cd /mnt
truncate -s 50G big-tmp
mkfs.xfs /mnt/big-tmp
mount -o loop /mnt/big-tmp /tmp
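One caveat: a freshly created filesystem mounts root-owned with restrictive permissions, and /tmp needs to be world-writable with the sticky bit set, so follow the mount with:

chmod 1777 /tmp  # restore the standard /tmp permissions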
Regular readers of this blog (currently 6 of you madmen) will notice I removed the “PAS Recommends” section linking to Rackspace Cloud. That is because I can simply no longer recommend them. It is a sad day.
They are wasting resources by creating pointless iPhone and Android applications, and sending out survey after survey, all while the Management Console continues to get worse and worse. Right now, in fact, I get a white screen after logging in.
Amazon Web Services is kicking Rackspace Cloud’s arse by continuously improving and upgrading its services. AWS has it all and is only getting better, while Rackspace Cloud sits dormant. I cannot believe we still cannot create an image of a machine larger than 75 GB, and that backups fail silently once the disk grows beyond that size.
RIP Rackspace Cloud. I had much higher hopes for you.
I have been using Subversion for a long time and am relatively new to Git. This post is a little tutorial on what I have learnt getting Git and SVN to play nicely together, primarily using git-svn.
My goal is to maintain the code in the original SVN repository while transitioning the team to Git. This means changes to either repository get reflected in the other.
The first step is to create an empty Git repository to import SVN into (this will be your remote Git repository):
cd $HOME/git-repo
mkdir project
cd project
git --bare init
Now clone the empty Git repository so you have a working directory, and then import the SVN repository into Git. The directory names need to match (in this case, they are both “project”); a sketch of this step follows.
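The commands would look something like this (a sketch of one way to do it, assuming the SVN repository lives at file://$HOME/svn-repo as in the examples further down):

cd $HOME
git svn clone file://$HOME/svn-repo/project/trunk project  # import the SVN trunk into a new "project" clone
cd project
git remote add origin $HOME/git-repo/project  # attach it to the bare repository created above
# (pushing with "git push -u origin master" in the next step sets up tracking
# so that a plain "git pull" works later on)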
If all has gone well, you should have a “project” directory with all of your files imported from SVN. This directory is also your Git clone. Now you can push these changes to your remote Git repository:
cd $HOME/project
git push origin master
Now make some changes to your working directory:
echo "This is a change in Git" > git-change.txt
git add git-change.txt
git commit -m "Adding a new file"
git push
We now have a change in Git that is not in SVN. To copy the change over to SVN, we do a “dcommit”:
git svn dcommit
Let’s check out the SVN repository and make a change in there:
cd $HOME
svn checkout file://$HOME/svn-repo/project/trunk svn-project
cd svn-project
echo "Here is a change in SVN" > svn-change.txt
svn add svn-change.txt
svn commit -m "Adding a new change in SVN"
To get this change into Git, we need to “rebase”. This is not as scary as it sounds:
cd $HOME/project
git pull
git svn rebase
git push
Hooray! We now know how to make changes flow both ways between SVN and Git.
For doing anything serious on Rackspace Cloud, you need to be able to use machines with more than 2 GB of RAM. The problem is, machines larger than 2 GB of RAM could not be imaged (or backed up) – until now. About a month ago, they announced the ability to snapshot a machine into Cloud Files. Today I decided to take it for a spin and hit snag number one: there was no way to image my new 16 GB machine. After talking to one person at Rackspace Cloud, I was none the wiser – snag number two was the lack of training for their support staff. After asking for the supervisor, he created a ticket for approval of the terms (letting me know I would be charged for the storage in Cloud Files) and off I went. Until I hit snag number three – the “Images” tab in the server details still showed no way of performing an image. I was told to use “My Server Images”, and hooray, I could make an image of my 16 GB machine.
Finally, the missing feature that pushed my company to AWS instead of Rackspace Cloud has been fixed! Congratulations to the Rackspace Cloud team.
Another thing I learnt today is that there are two data centers for Rackspace Cloud machines: DFW and ORD. The first server you provision gets assigned to one of the two, and whichever one it lands in is the data center all of your other servers will be put into as well. That is, until you delete all of your servers – then the next server you create can once again land in either data center.
I am hoping that in the future we will get a choice of which data center a server is provisioned in, particularly since it would help with geographical distribution.