After much searching, I found this post:
It talks about OS X thinking it was plugged into a TV and using the YCbCr color space rather than RGB. After running the script, following the instructions, and restarting, the monitor is working beautifully.
Hopefully this helps someone else, and will serve as a reminder to me should I encounter it again.
Select the items you want (Command-A for all) and then:
Option-Command-Y
Not sure why this isn’t more obvious as it is really handy. Enjoy.
Generating the site and pushing it to S3 is as easy as:
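With a standard Octopress setup, that boils down to two Rake tasks (task names are the stock Octopress ones; your Rakefile may differ):

```shell
rake generate   # build the static site into public/
rake deploy     # push it to the configured destination (S3 in this case)
```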
The source in case you are interested is on Github at:
https://github.com/pas256/blog
Thanks to Jeff Barr for introducing me to Octopress on his AWS Road Trip.
Looking forward to getting back into this – it has been too long without a good rant. :)
The usual trick is to read from /dev/zero to create empty files. This is great if the file is small, but for a 50GB file it simply takes too long, particularly on EC2. The solution? Truncate!
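A sketch of the idea (the file name is an example); truncate just sets the size, so the file is sparse and no blocks are actually written:

```shell
truncate -s 50G bigfile.img   # returns instantly; ls shows 50G, du shows almost nothing
```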
Boom – instant.
This is great for doing things like mounting /tmp on EC2 to the ephemeral storage, so /tmp is not limited to 10GB (or whatever your root image size is).
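One way that might look, assuming the ephemeral disk is mounted at /mnt (paths and the size are examples, and this needs root):

```shell
truncate -s 50G /mnt/bigtmp.img      # instant sparse file on the ephemeral disk
mkfs.ext3 -F /mnt/bigtmp.img         # make a filesystem inside the file
mount -o loop /mnt/bigtmp.img /tmp   # loop-mount it as /tmp
chmod 1777 /tmp                      # restore the sticky, world-writable bits
```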
You can get it here: http://gitx.laullon.com/
They are wasting resources by creating pointless iPhone and Android applications, and sending out survey after survey, all while the Management Console continues to get worse and worse. Right now in fact, I get a “white screen” after logging in.
Amazon’s Web Services are kicking Rackspace Cloud’s arse by continuously improving and upgrading their services. AWS has it all, and is only getting better while Rackspace Cloud sits dormant. I cannot believe we still cannot create an image of a machine larger than 75Gb, and that backups fail silently if the size does increase.
RIP Rackspace Cloud. I had much higher hopes for you.
Go to: https://ccp.cloudera.com/display/SUPPORT/CDH3+Downloadable+Tarballs and download the Hadoop tarball.
We are going to run Hadoop in pseudo-distributed mode, which is nice in a dev environment.
Open up a Terminal window, and run:
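Something like the following, substituting whatever version you actually downloaded (the version shown is an example):

```shell
tar xzf hadoop-0.20.2-cdh3u0.tar.gz   # extract the CDH3 tarball
cd hadoop-0.20.2-cdh3u0               # everything below runs from here
```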
Now we need to edit two files so that Hadoop knows where to write its data. This is where you decide where that data lives. I did:
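For example (any writable directory will do; this path is an assumption):

```shell
mkdir -p ~/hadoop-data   # a home for HDFS data on the local disk
```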
Edit core-site.xml
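A plausible pseudo-distributed core-site.xml, pointing hadoop.tmp.dir at the directory chosen above (the path and port are examples):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/you/hadoop-data</value>
  </property>
</configuration>
```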
Next, edit hdfs-site.xml
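A minimal single-node hdfs-site.xml sketch, turning replication down to 1 since there is only one DataNode:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```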
Finally, format HDFS, and start up the nodes:
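Roughly this, assuming you are still in the Hadoop directory (script names are the stock ones shipped in the tarball's bin/):

```shell
bin/hadoop namenode -format   # one-time format of the new HDFS
bin/start-dfs.sh              # start the NameNode and DataNode
bin/start-mapred.sh           # start the JobTracker and TaskTracker
```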
If you are typing in your password a lot, try this (assuming you have your SSH keys set up):
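A sketch, assuming an RSA key in the default location; this lets the start/stop scripts SSH to localhost without prompting:

```shell
mkdir -p ~/.ssh
ssh-keygen -q -t rsa -P "" -f ~/.ssh/id_rsa          # skip if you already have a key
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys      # authorize it for localhost logins
```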
If you have upgraded to OS X Lion (v 10.7), then you might see this every time you do something:
Unable to load realm info from SCDynamicStore
You can ignore it. It has something to do with Kerberos authentication (I think), but I don’t yet have a solution to getting rid of it.
My goal is to maintain the code in the original SVN repository while transitioning the team to Git. This means changes to either repository get reflected into the other one.
The first step is to create an empty Git repository to import SVN into (this is your remote Git repository):
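A sketch (the directory name "project.git" is an example):

```shell
mkdir project.git
cd project.git
git init --bare   # a bare repository, suitable for pushing to
```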
Now clone the empty Git repository so you have a working directory, and then import the SVN repository into Git. The directory names need to match (in this case, they are both “project”):
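One way to sketch this (the Git path and SVN URL are placeholders); git svn init followed by git svn fetch does the import in place, inside the clone:

```shell
git clone /path/to/project.git project
cd project
git svn init http://svn.example.com/repo/project   # connect this clone to SVN
git svn fetch                                      # pull the SVN history into Git
```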
If all went well, you should have a “project” directory containing all of your files imported from SVN. This directory is also your Git clone. Now you can “push” these changes to your remote Git repository.
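Assuming the default remote and branch names:

```shell
cd project
git push origin master
```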
Now make some changes in your working directory.
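For example (the file name and messages are made up):

```shell
echo "A change made in Git" >> README
git add README
git commit -m "My first change from Git"
git push origin master
```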
We now have a change in Git that is not in SVN. To copy the change over to SVN, we do a “dcommit”.
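That is a single command:

```shell
git svn dcommit   # replays your local Git commits onto the SVN repository
```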
Let's check out the SVN repository and make a change in there.
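A sketch, with a placeholder SVN URL and file name:

```shell
svn checkout http://svn.example.com/repo/project project-svn
cd project-svn
echo "A change made in SVN" >> README
svn commit -m "My first change from SVN"
```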
To get this change into Git, we need to “rebase”. This is not as scary as it sounds.
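Again a sketch, assuming the default branch name and run from the Git working directory:

```shell
cd project
git svn rebase           # pull the new SVN revision into your Git history
git push origin master   # then push it up to the remote Git repository
```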
Hooray! We now know how to make changes flow both ways between SVN and Git.
market://details?id=<packagename>
OR
http://market.android.com/details?id=<packagename>
So for Remembory, I have this:
market://details?id=com.gbott.remembory
The “old” way was to link to the search page by using this:
market://search?q=pname:<package>
…but the details page method saves the user a click, or a tough decision when there are two applications both called Remembory.
Finally, the feature whose absence stopped my company from using Rackspace Cloud (and sent us to AWS instead) has been fixed! Congratulations to the Rackspace Cloud team.
Another thing I learnt today is that there are two data centers for Rackspace Cloud machines: DFW and ORD. The first server you provision gets assigned to one of the two data centers, and whichever one it lands in is the data center all of your other servers will be put into as well. So if the first server goes into DFW, all the others you create will too. That is, until you delete all of your servers; then, once again, the first server can land in either data center.
I am hoping that in the future, we will have a choice about which data center a server is provisioned in, particularly since it will help for geographical distribution.
http://code.google.com/p/hive-json-serde/
This SerDe (serializer/deserializer) will let you read JSON files as input for Hive tables. In the future, it will also support writing JSON data, but that is for another day.
Please let me know if you have any comments or questions about it.
Step 2 would be public bug trackers for ALL of their systems, particularly the online ones such as Gmail, Docs, Sites, etc.
Step 3 would be Googlers actually paying attention to them, maybe even hiring a customer support team.
Fingers crossed!
I would never buy a phone from Google. If something went wrong, I'm screwed! It's not like I can take it into a shop and get it fixed.
How right she was. According to Slashdot, Google is facing a deluge of customer complaints about the Nexus One. Are you having problems? Use the hash tag #fixgoogle if you are on Twitter. All tweets with that tag will appear on fixgoogle.com when I get the site up and running.
If you are using EC2, you will quickly find that if an instance is terminated, any data on that instance is gone – lost forever. At first, this seems like a terrible idea, but in fact, it encourages you to get into best practices, and discover the awesome benefits of EBS.
We have many instances of different types running. We have built a “custom” Debian AMI for each of the instance types we use (web, database, management, etc). If you were to launch an instance with one of these AMIs, you would not have a fully working system. That is because these AMIs have sym-links for important and/or dynamic data. For example, on the web AMI we have created, /etc/apache2, /etc/php5/ and /var/www are all sym-links. To where? A directory that an EBS volume is mounted to. That's right, all of the web configuration and website code lives only on an EBS volume. It is simple enough to write a little script that creates a nightly Snapshot of each EBS volume.
Now for the power of this setup. Every time you want to bring up another instance of the same type (say, for horizontally scaling), you are in fact doing a restoration from backup. Take a Snapshot (your backup), create an EBS volume, attach it to the new instance, and make it live! This doesn’t just work for scaling, it works for bringing up staging servers that are mirrors of production or running experiments without affecting production.
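With the EC2 API tools of the era, that snapshot-then-restore flow sketches out like this (every ID and the zone are made up for illustration):

```shell
ec2-create-snapshot vol-12345678                          # nightly backup of the EBS volume
ec2-create-volume --snapshot snap-87654321 -z us-east-1a  # restore the backup as a new volume
ec2-attach-volume vol-9abcdef0 -i i-13579bdf -d /dev/sdf  # attach it to the new instance
```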
We can even take it a step further! Those AMIs and Snapshots are all stored in S3, so the data is available to the whole Region, while an instance and an EBS volume exist in only one of the Availability Zones within that Region. You can use your backups to restore into a new Availability Zone, which gives you the basis of a high-availability solution.
Happy scaling!
* I don’t know Joel personally – we have never met – but I do follow his work, like his company and LOVE Fogbugz!
I also want to call upon all of the mobile application developers out there. Motally is running a mobile analytics contest called Trackappalooza. Here you can win a pass to MWC in Barcelona, or up to $15,000 just for tracking your Android, iPhone or Blackberry app. Motally is the mobile analytics powerhouse providing tracking capabilities for mobile websites and mobile applications.
Good luck!
Enjoy!
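For the record, the trigger was nothing more exotic than installing the Tomcat package (package name as it was on Debian/Ubuntu at the time):

```shell
sudo apt-get install tomcat6   # drags in dozens of dependency packages
```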
WTF? I understand that Tomcat needs some kind of Java, but this is ridiculous. It is installing ant, fonts, compilers and worst of all, the most evil Java ever.
Ubuntu has the sense to make Sun Java available, but even if you do have Sun Java installed, the above is true on Ubuntu.
For shame!
I’ll stick to downloading from java.sun.com and tomcat.apache.org.
So what was my motivation for this?
Well, the Rackspace Cloud Management Console is severely lacking in features, such as creating an image of a server (just like you can in AWS EC2) and sharing an IP address between servers (something you cannot do in EC2, where an IP address can only be attached to a single instance at a time). It is this second feature that I am most interested in, because it means I can use a virtual IP address (a floating IP) to create an HA (highly available) “cluster” of Tomcat servers. I plan on using keepalived to do the IP switching, and Apache with mod_proxy_balancer and mod_proxy_ajp to talk to multiple Tomcat servers. Had I not read the poorly written API documentation and learned that I could create a shared IP group, I would not have known this was possible.
This project is very much a work in progress, and what is in there now represents only about 8 hours of work. I welcome any feedback, and anyone else who wants to join.
They say that it is direct competition to Microsoft, that it makes Linux less relevant… are they serious? Chrome OS is a non-announcement. There is a project that has existed for over two years called “cl33n”. From the creator:
Chrome OS is “Google Chrome running within a new windowing system on top of a Linux kernel.”
cl33n is “Mozilla Firefox running in a little-used windowing system on top of a Linux kernel.”
This “OS” is due to be released mid-2010. Is that how slowly things move inside Google? Why would it take them 12 months to create nothing more than cl33n?
What I am trying to say is that Chrome OS is nothing new. cl33n is not alone in this space either; other projects like Webconverger share my view.
While on the subject of Google's non-announcements, did you hear that Gmail, Docs, etc. are out of beta? Big news, huh? So what is their excuse now for daily “Server error” dialogs?
Now that I have won, though, I believe this is not Nagios-specific at all, and if I had bothered to learn about SELinux, this may have been obvious. Anyway, the error Nagios was giving me was:
Error: Could not stat() command file ‘/usr/local/nagios/var/rw/nagios.cmd’!
The external command file may be missing, Nagios may not be running, and/or Nagios may not be checking external commands.
An error occurred while attempting to commit your command for processing.
Return from whence you came
As you may have already guessed, the solution has nothing to do with the location or permissions of the file: the file was not missing, Nagios was running, and Nagios was checking external commands. The final line of the message is great though, and I can only hope we start to see more old English in error messages.
The problem, of course, was that SELinux was enabled and stopping this blatant security violation. You can check whether SELinux is on by running:
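One way to check (getenforce prints the one-word answer):

```shell
getenforce   # prints Enforcing, Permissive, or Disabled
```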
If you got “Permissive” or “Disabled”, then this post is not for you. To see SELinux's side of things, check out /var/log/messages:
As you can see, SELinux is trying to give you a hint with that sealert bit, so you should take it.
That raw audit message is GOLD! There is some other information in there, but nothing about what the next step should be to create a policy and make it permanent. Using chcon is, I have heard, only a temporary fix. The solution is copying that raw audit message into an empty file and running audit2allow to create a policy:
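A sketch of that step, assuming the raw audit message was saved to audit.txt and the module is named NagiosPing:

```shell
# build a policy module (.te source and compiled .pp) from the audit message
audit2allow -M NagiosPing -i audit.txt
```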
This creates a file called NagiosPing.pp, which contains the SELinux policy needed to make these errors go away. The only thing left to do is install this policy:
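That is one command:

```shell
sudo semodule -i NagiosPing.pp   # install the policy module
```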
If your setup was like mine, SELinux was actually preventing three different actions, needing three different policies. Ha! That is easy now: just repeat the steps until Nagios is doing your bidding.
Since I am a well-rounded engineer, I thought I would be able to cover more of my skills in a skills-based resume instead of a chronological list of my work experience. I went through and split my skills into areas (Coding, Data, Operations, Communication, etc.) and listed a few points under each. At the end, I added my work experience, but only with a short paragraph about what I did at each place.
While I thought this style made a good resume, recruiters hate it, and so do people trying to hire.
After getting some advice, I wanted to share it with the rest of the world. Here are the things people want to see in software engineering/technical resume:
Resume writing is a complex task, and something a lot of people dread. I have definitely not covered everything, but I did want to make clear that a skills resume is not a good idea. Also be aware that many recruiting companies run resumes through software before a human even looks at them, so try to have keywords a search engine could pick up on (like SEO for resumes).
Perhaps resumes written for machines are a good thing after all?
1. Call T-Mobile on 1-877-453-1304 from another phone (not your G1). I got that number from their Contact page.
2. It's the standard voice-prompt thing, so say "English", enter your G1's phone number, then when they ask what you want help with, say the magic word: "Agent".
3. When they ask what you would like to talk about, say "SIM Unlock Request". My experience with T-Mobile has been pretty good; within a minute I was talking to a real person.
4. Verify your identity, and explain that you want to unlock your phone because you are going overseas and want to use another SIM card.
5. They will ask you for your phone's IMEI number. This is a sacred number, so be careful who you give it to, since anyone can report the phone stolen, quote the IMEI number, and have the phone permanently disabled. You will find the number on the side of the box your phone came in, on the G1 itself under Settings –> About phone –> Status, or by dialing *#06#.
6. Give them your email address, and within 14 days you will get an email from T-Mobile with your unlock code.
7. To unlock the phone, power it off, insert a non-T-Mobile SIM card, and power it back on.
8. At the prompt, enter the unlock code from the email and you're done!

The reason for writing this post is that I had no idea this could be done for free with any T-Mobile phone once 90 days have passed. There is no need to pay $25 to some scam site; just do it legit, for free, and without issue.
Both Django and GAE are being developed as I write this, so although existing instructions are kind of recent, they are already out of date or rely on you having knowledge of Django. Since there are a lot of people with no Python or Django experience wanting to learn, I thought I would write a tutorial that works as of today, though who knows about a month from now, or even tomorrow.
Note: This tutorial is written for Linux. Mac/Windows users will have to translate :–)
Django comes as a .tar.gz file, but we want a zip file to take advantage of the Zipimport library, so some conversion is needed.
cd Django-1.0.2-final
zip -r ~/django.zip django -x 'django/contrib/admin*'
The App Engine Helper for Django is an open-source bootstrapper for getting Django started on App Engine. Downloading it from the website will currently give you quite an old version (r52), which will not work with this tutorial. Instead, use Subversion to get the latest (r74).
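A sketch of the checkout; the repository URL is assumed from the project's Google Code name, and pinning the revision keeps you on r74 even if trunk moves:

```shell
svn checkout -r 74 http://google-app-engine-django.googlecode.com/svn/trunk/ mysite
cd mysite
```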
As you would with any GAE application, edit the app.yaml file to refer to your application ID. The Helper also needs to know where your Google App Engine SDK is, since it is going to change how you start the development server, so create a link to it:
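Something like this; both the SDK location and the link name here are assumptions, so adjust for your install:

```shell
ln -s /usr/local/google_appengine .google_appengine   # point the Helper at the SDK
```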
Django has a different way of running the development server. Instead of using dev_appserver.py to start it, do the following:
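That is the standard Django management command, run from the project directory:

```shell
python manage.py runserver   # the Helper wires this up to the GAE dev server
```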
If everything is running correctly, you should see something like:
However, if you are like me and saw an error like the following:
Then you will need to install the antlr3 python module. Luckily this is easy.
unzip antlr_python_runtime-3.1.zip
cd antlr_python_runtime-3.1
sudo python setup.py install
Let's see the site! When you go to http://localhost:8000/ you should see a page saying “It worked! Congratulations on your first Django-powered page.” Pretty (un)impressive, huh?
OK, now let's start doing something. Kill the server by pressing Ctrl-C. The Django tutorial is the next stop, which involves creating the Polls Django app. You can read through it to get a full understanding. For simplicity, I am only showing what I did to get it working.
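The Polls app starts with Django's standard scaffolding command, run from the project directory:

```shell
python manage.py startapp polls   # creates the polls/ app skeleton
```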
(r'^polls/', 'polls.views.index'),
python manage.py runserver
You can go through the rest of the Django Tutorial to fill out the rest of the views. One last thing to know about is the admin site. To view it, go to:
http://localhost:8000/_ah/admin
Notice that the Datastore Viewer is empty; it doesn't even know about our Poll or Choice Models. Don't panic. Go back to your terminal, terminate the dev server, and run:
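That is:

```shell
python manage.py shell   # an interactive shell with the Django environment loaded
```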
This brings up a special Python shell with the Django environment set up for you. Now you can run:
You have just created a new Poll. To end the shell, press Ctrl-D.
If you start the dev server again (or if you never stopped it in the first place), you should be able to go to:
http://localhost:8000/polls/
to see your new Poll, and go to the admin site:
http://localhost:8000/_ah/admin/datastore
to see and create new Polls.
Easy huh? :–)
Aussie Vegemite Beer Bread
sudo depends on DNS
WTF? Why does something like local privilege escalation, which never leaves the machine I am on, have anything to do with networking? Further, why the hell should a network configuration issue stop sudo from working? And even further still, why would Ubuntu (which, as part of the normal install process, does not set a root password) allow something as essential and necessary as sudo to depend on a functioning network configuration? Amazingly, a Google search showed this is a known issue. I really like the title of this bug: Manually Configuring Network Causes Massive, Unreversable, Failure.

I believe this will be the first of many rants this blog will see, so readers (yes, all 1 of you… thanks honey), check back soon. I'll try and keep it G rated, but no guarantees :–)