Friday, February 29, 2008

wordpress upgrade - four simple steps

Recently I was forced to upgrade my WordPress install, and in the past, not having had the patience to read the documentation, my attempts were not very successful.

So, it was good to have found a source that described the process in four simple steps using the shell. It worked.

Step 1: Backup the existing database.
I would not want to lose my work, especially the blog, so against my natural tendency, I did back up.

[me@mywebserver]# mysqldump -u lazyinvestor -p [your_wordpress_db] > backup_`date +%m-%d-%y`.sql
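The backtick expression uses the `date` command (a common typo is `data`, which breaks the command) to embed the current date in the backup filename. A quick sanity check of what that expression produces:

```shell
# expand the same date expression used in the backup filename
suffix=$(date +%m-%d-%y)
# this is the kind of name the dump ends up with
echo "backup_${suffix}.sql"
```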

Step 2: Get the latest wordpress .zip and unzip

This would generally work, but you should be smart enough to know there are other ways.

[me@mywebserver]# wget http://wordpress.org/latest.zip
[me@mywebserver]# unzip latest.zip

Step 3: Overwrite all the new files onto your old ones

[me@mywebserver]# cd [to_wherever_your_wordpress_files_are]
[me@mywebserver]# cp -avr [path_to_unzipped_wordpress]/* .

Step 4: Open http://yourblog_url/wp-admin/upgrade.php in your favorite browser.


brainstorm Ubuntu

There is probably a clever marketing-theory term for this, but some of the best feedback for any kind of product comes from people who use it.....a lot.

Check out Ubuntu Brainstorm.

The best part is you get to submit your ideas and also vote on what is of more importance to you.

Power management and suspend/hibernate enhancements obviously are high up on the list. That is all good for making Ubuntu better for laptops. Even though I am very happy with how things run on my laptop, things like efficient power-management enhancements and faster booting will be great additions.

Tuesday, February 26, 2008

TCP optimizations and sysctl

Time and again, I have needed to tweak a server running TCP/IP applications to squeeze out some improvement with a few simple steps. I am not a System Administrator, so the quickest trick for me is to tweak a few simple TCP characteristics:

a) Increase the TCP window size (enable window scaling)
b) Enable TCP SYN cookies, which mitigate SYN flood attacks on incoming connections
c) Increase the TCP send and receive buffers:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

All these are edits to /etc/sysctl.conf, the configuration file for setting kernel parameters at runtime.

#sysctl -p /etc/sysctl.conf
to enable your changes, and
#sysctl -a
to see what else has been configured.
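Putting a), b) and c) together, the /etc/sysctl.conf fragment would look something like this; the first two keys are the standard knobs for window scaling and SYN cookies, and the buffer values are the ones above:

```
# enable TCP window scaling so large windows can actually be used
net.ipv4.tcp_window_scaling = 1
# enable SYN cookies to mitigate SYN flood attacks
net.ipv4.tcp_syncookies = 1
# maximum socket receive and send buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# min/default/max buffers for TCP receive and send
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```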

Monday, February 25, 2008

hup hup and away

I have used other ways to do this, and never really thought about it until somebody recently asked me how to run a command so that it keeps running even after you exit the terminal or log out.

Since I have been working in systems software development, the first thing that comes to my mind is to make the executable a service and use the 'service [program] start/stop/restart' commands to control it.
The other option is a tool called 'screen', which I tried for a few months but never got comfortable with the scrolling up/down.

The simpler (and probably more obvious) way is to execute the program such that it is immune to hangups i.e. the 'nohup' command.

nohup ./keep_this_running &

This will dump the output to a file called nohup.out.

To get more control over the output, try this:
nohup ./keep_this_running 1>output.log 2>error.log &
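A quick sketch of the pattern, with `sleep` standing in for the long-running program; `$!` captures the background job's PID so you can check on it later:

```shell
# start a hangup-immune job in the background, redirecting output explicitly
nohup sleep 30 1>output.log 2>error.log &
pid=$!
echo "started pid $pid"
```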

be nice

In my previous post, I gave an example command to create an (uncompressed) ISO image from a CD/DVD. The actual command was preceded by 'nice -n +19'.

Briefly, a little bit about this 'nice' command: even though I don't use it very often, it can be a very useful tool, especially on laptops like mine where I am anyway struggling for processor resources.

So, 'nice' is a command on POSIX-compliant OSes that lets you control the priority of a process, with -20 being the highest priority and +19 the lowest. The default value is 0, inherited from the parent process (the shell being the most likely parent).

Obviously, a lot depends on how the scheduler is designed, where the 'nice' value is probably just one part of a complicated set of parameters used to determine which task should run next. But it does give you user-level control, especially when doing non-priority tasks like creating ISO copies, etc.
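As a sketch, launching a throwaway job at the lowest priority and checking the nice value the scheduler actually assigned (using `sleep` as a stand-in for a real task):

```shell
# launch a background job at the lowest scheduling priority
nice -n 19 sleep 30 &
pid=$!
# the NI column shows the niceness the scheduler sees for that pid
ps -o pid,ni,comm -p "$pid"
```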

Wednesday, February 20, 2008

and no DVDShrink

In the previous post, I talked about using DVDShrink to save .iso from DVDs onto your laptop so that you can play movies in your favorite DVD player just like you would with regular DVDs.

What I did not mention was that, IF unlike me you are not looking to _shrink_ the original DVD and are fine with your ISO using up to 8GB of your precious disk space (as almost all feature DVDs do), there is an easier way:
rutul@rutul-laptop: nice -n +19 mkisofs -dvd-video -V BILL_MAHER_IM_SWISS -o /media/sda1/BILL_MAHER_IM_SWISS.iso /media/cdrom > /dev/null 2> /var/log/mkisofserrors.log

But, if you ever (don't do it, it's probably illegal) want to make a DVD out of that .iso, you will need a dual-layer DVD burner and an appropriate disc.


I wanted to write a little about Automatix before getting into details about how useful the packages installed with it have been. But life doesn't pan out as one plans.....I have been realizing that lately. Anyways, more about Automatix some other time. Just know that if you use Ubuntu as your primary OS, you will appreciate having that package manager alongside the obviously awesome Synaptic Package Manager.

I sometimes don't get enough time to watch a movie that I have picked up from the library, and they don't allow renewals. So, I tend to keep a soft copy of the DVD onto my laptop in ISO format, so that I can play it later. This is great if you travel a lot and like to watch movies in flight.

Step 1: Get DVDShrink (trust me, just use Automatix)

Step 2: Configure it so that you "Create ISO file only" (very intuitive to select this).
Specify where you want to create the .iso (directory).
Specify you want to remove temporary files when done.

Step 3: Once you have your .iso at the appropriate location, mount the .iso
sudo mkdir -p /media/movie
sudo mount -o loop BILL_MAHER_IM_SWISS.iso /media/movie/

Step 4: Play the .iso
totem /media/movie
xine /media/movie (trust me, get Automatix)

Easy enough and convenient enough.

Tuesday, February 19, 2008

more command line tips

I wrote a few days ago about a reference document that organizes commands into well-defined categories for quick look-ups. That is useful, but this list is probably better for users like me who know the basic commands but are too lazy and impatient for the man pages, since it presents commonly used commands in the form of examples.

Oh well, as with the previous document, not sure if I will ever use it, but it never hurts to keep a bookmark.

Monday, February 18, 2008

software (un)development

My theory is that there are (broadly) two kinds of software developers. There are those who love the science and the art of programming and view each challenge as something to be designed and developed like a precious diamond. And there are those who get things to work, get features completed, and manage to write code that mostly works. I tend to believe that I belong to the latter category. Actually, there is a third kind, but they should be fired anyway, so they are not relevant. But I digress.....

I recently read an interesting article by Joel Spolsky in Inc magazine about five reasons/ways a software project fails. Now, I might not be the best judge of what seem to be very obvious bad software-development management practices, but I have observed these in my not-very-long development career, so they make sense to me.

It is really weird that managers who have spent years in the field, even those at successful companies with successful products, are not immune to these bad practices. All of them should pick up a magazine or two, because asking them to read a book might be a bit too much.

Briefly, here are the reasons:
1. mediocre team of developers
It's the manager's responsibility to pick the right people.
2. set weekly milestones
Weekly? I have worked with someone who required a daily update!
3. negotiate the deadline
As in wishful thinking, because the manager lacks the skills to plan effectively both when starting off and as things progress.
4. divide tasks equitably
This gets even more interesting when the manager doesn't have a clue about the details and tries to "balance" the project.
5. work till midnight
If it takes X one hour to write a 5-line macro, can he write 10 macros in 10 hours?

I am sure I am not the only engineer who agrees with the list, especially having experienced these things in my own development lifetime.

Thursday, February 14, 2008


The desktop effects, introduced with Feisty Fawn and much more advanced in Gutsy Gibbon, are way too much fun to play around with. I have been meaning to write about my experiences, but that's for another post.

If you are ever looking to get excited over funky desktop effects, make sure to check out the default effects and eventually, if your hardware supports it, Beryl.

Wednesday, February 13, 2008


Now this is one of those things where I wonder whether I have way too much time to sit around and try this out, or I am just a nerd. But don't judge; if you grew up in the days of Windows 3.1 and Windows 95 playing shareware PC games (from those much-coveted CDs included with PC magazines), you will understand the joy of it.

DOSEMU is an emulation program for running a _lot_ of DOS executables in Linux. WINE is great for Windows programs, but this is specifically for DOS. Dangerous Dave, anyone?

And not surprisingly, for my Ubuntu laptop, all I had to do was:
sudo apt-get install dosemu

Next step: finding my favorite games, Asteroids, Dangerous Dave, Need for Speed and Wolf. A Google search and a few minutes on DOS Games Archive later, I was up and running Need for Speed!

Tuesday, February 12, 2008

et tu, Linux?

Well, how is that for a dramatic title? A little Shakespeare reference!

So, a few days ago, a _serious_ bug in the 2.6 kernel (from 2.6.17 onward) was discovered. It is very well documented, including what exactly it is and how to reproduce it locally (if you are one of those), in this Slashdot article. The issue is that a user can gain access as 'root' if the exploit is executed on your system, which means that user then has complete access to your system.

On my personal laptop it obviously doesn't matter. The issue is when you are running multi-user servers, as in a university network. I am not a systems administrator, but if I was, I would be worried to say the least.

Well, it didn't take long to find a patch.

For us smart Ubuntu users: don't worry if you don't know what a patch means, because we have the strength of the Update Manager with us. Simply click on the update notification (which you should have received sometime today) and relax. If you are really curious, this is the issue that was patched. The update installs all the necessary headers, kernel image and source files.

If you are one of the unfortunate Fedora or RedHat users running an affected 2.6 kernel, applying the patch to your kernel source and recompiling the kernel is do-able, but not without raising your heartbeat a few notches. This might work for you:

1. Get the patch from here. The page also has a lot of information on how to apply it.

2. cd to the kernel source (hopefully you have it installed). It should generally be under /usr/src/linux-2.6.x.x. If it is not installed, grab the matching source tarball from kernel.org or install your distribution's kernel source package.
3. Apply the patch to the kernel source.
patch -p1 < [the_patch_file]

4. Compile and install. This can be a little tricky if your kernel configuration (.config) was not created for your system, which would be the case if you just downloaded the source.

If you have the .config for your system, just follow these steps:
a) $ make

b) $ make modules

c) $ su -
   # make modules_install

d) # make install

This should have created the following in your /boot:
* config-2.6.x.x
* vmlinuz-2.6.x.x
* System.map-2.6.x.x

e) Create initrd image:
# cd /boot
# mkinitrd initrd-2.6.23.img 2.6.23

f) Update /etc/grub.conf (as in I am not a fan of LILO)

g) Say a prayer and reboot
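For step f), the new grub.conf entry might look something like the sketch below; the root device, partition numbers and file names here are assumptions for illustration, so copy the real values from your existing entry:

```
title Linux (2.6.23, patched)
    root (hd0,0)
    kernel /vmlinuz-2.6.23 ro root=/dev/sda2
    initrd /initrd-2.6.23.img
```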

Friday, February 8, 2008


As I spend more time on my laptop these days, I tend to play around with the _cooler_ tools that make the whole Linux-as-a-desktop experience a little more interesting. Very recently Linus Torvalds expressed his thoughts on the Linux desktop, which is an interesting read.

The tool I enjoyed mucking around with recently is called conky, a lightweight system monitor that embeds itself into the desktop and can be configured to look and feel exactly how you want it to.

Of course, the Ubuntu experience is worth spending a few minutes on, configuring it exactly the way you want with all the information you can get from this forum. As you try to set it up to show the exact bits of information, this list of conky variables is very useful. However, I am having trouble setting up the following two things:

- battery display only works when the laptop is charging.
- the wireless-link bar doesn't display anything.

My .conkyrc
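For what it's worth, a minimal .conkyrc sketch along those lines; BAT0 and wlan0 are assumptions, so substitute your own battery and wireless interface names:

```
alignment top_right
update_interval 2.0
own_window yes
double_buffer yes

TEXT
Uptime: $uptime
CPU: ${cpu}% ${cpubar}
RAM: $mem / $memmax
Battery: ${battery BAT0}
Wireless: ${wireless_link_qual_perc wlan0}% ${wireless_link_bar 6 wlan0}
```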

Tuesday, February 5, 2008

glamorous != software development

Like almost every software engineer, I wish our job could one day gain acceptance in society as a glamorous profession! That way I could get rid of my fake visiting cards, on which I am a model scout who lives in LA but travels to Europe once a week. It gets tiring, though, since all the models ever want to do is go out drinking........well, I digress, but you get the point!

Anyways, if you like reading nerdy stuff, Slashdot had a link to this vent written by a software engineer/developer with a pretty unique take on making it sound way more interesting than it really is. But hey, if that changes the opinion about "our kind" held by the cute redhead at the coffee shop next door, I am not complaining.

Monday, February 4, 2008

mobile penguins

Ever since Google's Android development platform was released, I keep noticing Linux mobile development platforms getting a lot more news. Either that, or it's like when you decide to buy a car and suddenly start noticing that model everywhere on the road.

LiMo recently announced it will be coming out with a standard specification for creating shared, open mobile applications and platforms. There is another group, Linux Phone Standards, doing pretty much the same thing.

Lot more details in this article.

The most interesting part, however, was reading about how Ubuntu is staking a claim as the best distribution for mobile/handheld devices. Gutsy Gibbon (which I have yet to try out) includes support for the Ubuntu Mobile and Embedded (UME) project, which "aims to derive an operating system for mobile internet devices using Ubuntu as a base".

There is a nice tutorial on quickly getting acquainted with the embedded development framework and tools to get started.

I want to get involved with one of these projects, but I am still wondering what application I really want built into my phone. I am old fashioned, I guess; besides wanting to make and receive phone calls reliably, I really don't expect much out of my phone.

Sunday, February 3, 2008

slashdot - data center

Going through Slashdot used to be part of my daily routine some time ago. Anyways, back in October 2007 I had gone through this interesting article on Slashdot's data center setup. It was certainly an interesting read, since it is good to know what it takes to set up and administer a pretty heavy website.

The interesting part was reading about the software they run on the web servers. As expected, all 16 servers run Linux, but RedHat 9 (really???...well, I guess it was set up a long time ago). I guess the distribution doesn't really matter as long as you can manage and upgrade it whenever necessary.
In addition, they have 7 database servers running CentOS.

I am not an administrator, but the second part of the article gets into details about the Apache setup on the 16 servers, which could be interesting to read.

Friday, February 1, 2008

pdsh - rsh to multiple remote systems

A useful tool, especially when you are working with multiple systems/servers, is pdsh, a variant of rsh that runs a command on multiple remote systems. It does this smartly, fanning the command out over multiple threads so that you are not stuck waiting on timeouts from some connections.

The man page is fairly detailed, and an impressive feature is that the command accepts host lists in a compact range format (not really regular expressions, but it does the job effectively). I can do the following:

[rutul@rutul-laptop]# pdsh -w 10.35.74.[66-72] ls
anaconda-ks.cfg
bin
ssh: connect to host port 22: Connection refused
ssh: connect to host port 22: Connection refused
anaconda-ks.cfg
Desktop

Pressing ctrl-c once shows status of all the connections:

pdsh@rutul-laptop: interrupt (one more within 1 sec to abort)
pdsh@rutul-laptop: (^Z within 1 sec to cancel pending threads)
pdsh@rutul-laptop: command in progress
pdsh@rutul-laptop: command in progress
pdsh@rutul-laptop: command in progress

and another ctrl-c within one second aborts:

sending SIGTERM to ssh pid 8161
sending SIGTERM to ssh pid 8162
sending SIGTERM to ssh pid 8164
pdsh@rutul-laptop: interrupt, aborting.

Very useful if you spend some time working on clusters and are clever at scripting.

heavyreading....for later

I haven't yet set up the LAMP server that I mentioned in a previous post, but once I am up and running, I expect to do some tuning for performance improvements.

IBM's developerWorks often has very up-to-date articles on Linux and open source, and I came across this series of three articles that seems a good starting point for the experiments I could do to optimize my setup.

If I get around to it, I will try to document my experiences.

Here are the individual links:
Part 1
Part 2
Part 3

in case of emergency

On one of the embedded devices that I write software for, I am running the 2.6.21 kernel patched with the preempt-rt patch. I won't get into my opinion of real-time Linux right now; I will save that for a later post.

But, since the device is going to be deployed in a 24-7 environment, I figured it would help to have the system reboot on a kernel panic rather than sit there hung, since the privilege of console access is not affordable.

The solution: configure /etc/sysctl.conf :

kernel.panic = 0 # never reboot automatically (the default)
kernel.panic = 5 # reboot after a 5 second delay

You will see the change reflected in /proc/sys/kernel/panic, which means you can also change it at runtime by doing:

echo 5 > /proc/sys/kernel/panic