So I just purchased an external monitor for my home office. When I plugged it into my MacBook Pro using a Mini DisplayPort to DisplayPort cable, the resolution was right (2560x1080), but it looked terrible. Everything was fuzzy, and most annoying of all, black text had a white halo effect around it. I could still read everything, but it wasn’t pretty.
It talks about OS X thinking it is plugged into a TV and using the YCbCr color space rather than RGB. After running the script, following the instructions and restarting, the monitor is working beautifully.
Hopefully this helps someone else, and will serve as a reminder to me should I encounter it again.
Generating the site and pushing it to S3 is as easy as:
s3cmd sync . s3://blog.pas.net.au/
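For a bucket that serves the site directly, you will usually also want the uploaded objects to be world-readable and files you deleted locally removed from the bucket. Both are built-in s3cmd sync options (this sketch assumes s3cmd has already been set up with `s3cmd --configure`):

```shell
# Publish: make objects public, and delete remote files
# that no longer exist locally
s3cmd sync --acl-public --delete-removed . s3://blog.pas.net.au/
```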
The source in case you are interested is on Github at:
I have seen many people on the interwebs using dd and /dev/zero to create empty files. This is fine if the file is small, but for a 50GB file it simply takes too long, particularly on EC2. The solution? truncate!
truncate -s 50G my-large-file
Boom – instant.
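The reason it is instant: dd actually writes every zero byte to disk, while truncate just sets the file’s size in the inode, producing a sparse file. You can see the difference by comparing apparent size with allocated blocks (GNU coreutils assumed):

```shell
# Instant: only the file-size metadata is written, no data blocks
truncate -s 50G my-large-file

# Apparent size vs. blocks actually allocated on disk
ls -lh my-large-file   # shows 50G
du -h my-large-file    # shows ~0, because the file is sparse
```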
This is great for doing things like mounting /tmp on EC2 on the ephemeral storage, so /tmp is not limited to 10GB (or whatever your root image size is):
truncate -s 50G /mnt/big-tmp
mkfs.ext4 -F /mnt/big-tmp
mount -o loop /mnt/big-tmp /tmp
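To have this loop mount come back after a reboot, one option is an /etc/fstab entry (a sketch; it assumes the backing file has already been formatted, e.g. with mkfs.ext4):

```
/mnt/big-tmp  /tmp  ext4  loop  0  0
```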
Regular readers of this blog (currently 6 of you madmen) will notice I removed the “PAS Recommends” section linking to Rackspace Cloud. That is because I can simply no longer recommend them. It is a sad day.
They are wasting resources by creating pointless iPhone and Android applications, and sending out survey after survey, all while the Management Console continues to get worse and worse. Right now in fact, I get a “white screen” after logging in.
Amazon’s Web Services are kicking Rackspace Cloud’s arse by continuously improving and upgrading their services. AWS has it all, and is only getting better while Rackspace Cloud sits dormant. I cannot believe we still cannot create an image of a machine larger than 75GB, and that backups fail silently if the disk grows beyond that.
RIP Rackspace Cloud. I had much higher hopes for you.
If all is going well, you should have a “project” directory with all of your files imported from SVN. This directory is also your Git clone. Now you can “push” these changes to your remote Git repository:
git push origin master
Now make some changes to your working directory:
echo "This is a change in Git" > git-change.txt
git add git-change.txt
git commit -m "Adding a new file"
git push
We now have a change in Git that is not in SVN. To copy the change over to SVN, we do a “dcommit”:
git svn dcommit
Let’s check out the SVN repository and make a change in there:
cd $HOME
svn checkout file://$HOME/svn-repo/project/trunk svn-project
cd svn-project
echo "Here is a change in SVN" > svn-change.txt
svn add svn-change.txt
svn commit -m "Adding a new change in SVN"
To get this change into Git, we need to “rebase”. This is not as scary as it sounds:
git svn rebase
Hooray! We now know how to make changes flow both ways between SVN and Git.
For doing anything serious on Rackspace Cloud, you need to be able to use machines with more than 2GB of RAM. The problem is, machines larger than 2GB of RAM could not be imaged (or backed up) until now. About a month ago, they announced the ability to snapshot a machine into Cloud Files. Today I decided to take it for a spin and hit snag number one: there was no way to image my new 16GB machine.

After talking to one person at Rackspace Cloud, I was none the wiser; snag number two was the lack of training for their support staff. After asking for a supervisor, he created a ticket to approve the terms (letting me know I would be charged for the storage in Cloud Files) and off I went. Until I hit snag number three: the “Images” tab in the server details still showed no way of performing an image. I was told to use “My Server Images” instead, and hooray, I could make an image of my 16GB machine.
Finally, the feature that stopped my company from using Rackspace Cloud, and pushed us to AWS instead, has been fixed! Congratulations to the Rackspace Cloud team.
Another thing I learnt today is that there are two data centers for Rackspace Cloud machines: DFW and ORD. The first server you provision gets assigned to one of the two, and whichever one it lands in is the data center all of your other servers will be put into as well. So if the first server goes into DFW, all of the others you create will too. That is, until you delete all of your servers; then the first new server can once again land in either data center.
I am hoping that in the future we will get to choose which data center a server is provisioned in, particularly since that would help with geographical distribution.