Thursday, December 10, 2009

osalt.com



Simply one of the most useful websites I have come across in recent times. It is a resource for finding open source alternatives to commercial software.

Wednesday, December 9, 2009

about time - Chrome for Linux

Google announced Chrome for Linux (and Mac today).
In the true Google tradition, it's a beta version.

I have been waiting for this since Firefox 3.0. The frustrations with Firefox 3.0 and later have been well documented by many. Personally, I would really appreciate it if my browser did not use more than 75% of the memory on my system, regardless of whether I have 512MB or 4 gigs.

The challenge for Chrome is to match the quality and number of extensions that Firefox has. This version of Chrome supports extensions, and the marketplace claims to have over 300 of them. How well this works on Linux remains to be seen.

The first thing I will need, however, is the ability to import and manage my bookmarks. I have been using Xmarks in Firefox and can't imagine how I would manage without this tool. Hopefully there is something equivalent for Chrome.

On a related note, a while back I saw a commercial or a video where random people in New York were asked if they knew what Chrome was. As expected, it was interesting to see that most people did not. In the bay area, we are blinded by our access to new technology and information, and we rarely think in terms of what real consumers experience, know and think. Google seemed to understand that, and hopefully that understanding shows in Chrome's usability.

In terms of product development, Seth Godin had an interesting post a while back regarding what Firefox should be/could be doing to be a better product. Let's hope the world is relieved from the curse of Internet Explorer soon.

Sunday, November 15, 2009

cobbler

A headache for most network administrators or engineers setting up labs is the installation process for numerous servers and systems. With Fedora, PXE installation and kickstart files take some of the manual work out of the process, but it is still fairly task intensive.

Cobbler takes care of this issue and does it in style (Cobbler is an install server; batteries not included). It automates several of the tasks so that the user doesn't end up switching between commands and applications when building new systems.

I personally liked a feature I didn't expect the system to have. One of the systems I was setting up did not have a BIOS recent enough to do a PXE install. Cobbler has an option to create an installable CD/DVD instead, and the resulting system matches the rest of the machines installed through the automated PXE install.

Fairly useful for organizations running a network of Linux servers.

Saturday, October 24, 2009

tips for baby steps in kernel debugging

Even if you have been programming in C for a while, getting into kernel debugging can be intimidating. If you are used to ctags or an IDE for your development, the task is even more challenging. However, the few tools listed below help you get started very quickly:

Obviously the first step is getting the kernel, and depending on what kernel you are running and/or what Linux distribution you have, this can be tricky. For Fedora systems, there are simple ways to get the kernel source.

LXR - Linux Cross Reference
This is a very useful resource if you want a quick idea of the flow of the code and the structures involved. LXR is a toolset that keeps the entire kernel source indexed; its Ajax interface makes it very easy to browse the source code.

printk()
This is the printf of the kernel, and the syntax is similar to printf. The most useful extra is the loglevel argument, which attaches a level of importance to your messages. The loglevels are defined in include/linux/kernel.h

dump_stack()
Sometimes, tracking the code flow is easier if you can show the program stack. Most architectures have dump_stack() implemented. This can be a very useful weapon in a newbie kernel debugger's arsenal.
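Putting the two together, here is a sketch of how they might appear in a toy module function (the function and the messages are made up for illustration, and this only compiles against kernel headers, not as a normal userspace program):

```c
#include <linux/kernel.h>

/* hypothetical function being debugged; the name is illustrative */
static int my_probe(void)
{
        printk(KERN_DEBUG "my_probe: entered\n");          /* low priority */

        dump_stack();                                      /* show how we got here */

        printk(KERN_ERR "my_probe: simulated failure\n");  /* high priority */
        return -1;
}
```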

There are some good tutorials that do some hand-holding for writing kernel modules, but if you have to track an issue in the core kernel or just want to get a better understanding, the tools described above are very useful.

Go on, get your feet wet in kernel debugging.

Wednesday, October 14, 2009

handling the cross-compiling nightmare

If you have ever faced the challenge of running your code on different platforms with the need to support different system libraries (glibc/uClibc), you probably know that building the toolchains is not a simple task. Very simplistically, a toolchain is the set of tools that compile, assemble and link the code being developed.

For this case, crosstool-NG comes to your rescue. It's a versatile toolchain generator that is very simple to configure: you fill in the appropriate values and options in a configuration, then point the compiler variables in your Makefiles at the resulting compiler.
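For what it's worth, the usual crosstool-NG workflow is driven by a kconfig-style configuration; a sketch of the commands (the target triplet here is just an example, and this assumes crosstool-NG is already installed):

```shell
ct-ng list-samples                  # see the bundled sample configurations
ct-ng arm-unknown-linux-gnueabi     # start from a sample target
ct-ng menuconfig                    # pick gcc version, glibc vs uClibc, etc.
ct-ng build                         # build the toolchain
```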

There is a decent tutorial on how to use crosstool-NG here, but it has not been updated for the newer releases. The current release is version 1.5.2.

Saturday, September 12, 2009

linux in msft search


I read about this somewhere else, but I had to try it out myself...

The first two links in Microsoft Search when looking for Linux are for "how to remove Linux". Elegant.

Sunday, August 30, 2009

you have to be kidding me with Vista


I am not one to rant, and certainly not one to not appreciate the complications with building an operating system.

But, you have got to be kidding me with this Vista crap. I have been using it on a work laptop (it came with it) for about a week, and I have had my laptop actually CRASH (blue screen) after plugging in and unplugging a USB mouse, after plugging in my USB cell phone, and while trying to wake up from hibernation.

Oh, and unless I enable "sending of null packets to keep session active" and TCP keepalives in my PuTTY sessions, I get "Network error: Software caused connection abort" every few minutes.

People pay to buy this crap?

Okay, rant over.

Monday, August 24, 2009

final sign on the "road to 64-bit"


I wrote about switching to 64-bit Ubuntu a few days ago.

Then I added an update a few days later with more information.

The major conclusion there was that if you have > 3GB of RAM, then unless you are using a customized kernel or 64-bit Ubuntu, you will not be able to use more than 3GB of it.

Anyway, not that I needed more convincing, but tuxradar has a damn good write-up with performance numbers comparing the 32-bit and 64-bit versions of Ubuntu 9.04:

Ubuntu 9.04 32-bit v/s 64-bit performance numbers

I think this conclusion summarizes it nicely -
"Putting aside the issue of Flash for a moment, moving from 32-bit to 64-bit is pretty much painless. In fact, you can't tell the difference without running uname -a in a terminal - all the programs you're used to are likely to run identically, and ultimately it's only a matter of time before x86-64 becomes the standard."

There is no turning back, I am convinced 64-bit is the way to go.

Monday, August 17, 2009

update on "the road to 64-bit"

A couple of days ago, I wrote about moving to 64-bit; it seems like it's going to be a simple decision for me.

A 32-bit operating system is limited to 4 GB of memory (you can only address 2^32 bytes = 4 GB). However, 32-bit Ubuntu is limited to 3 GB. If you are deciding between 32-bit and 64-bit Ubuntu and you have more than 3 GB of memory, that should be a simple decision.

There is a workaround, however, if you are still not convinced 64-bit is the way to go: install the server kernel, which has support for up to 4 GB. But what about the future, when your 4 GB is no longer enough and you add more memory?

There is a bug (Bug 74179) that covers releasing a 32-bit option for high-memory (> 3 GB) systems.

kerneltrap.org has a good page on kernel and high memory with a good explanation of physical memory versus virtual memory.

memory leak with inheritance and destructors

I use inheritance and virtual functions fairly frequently, but I wonder how common this is:

A little bit of context -

When using inheritance, destructors are called in the reverse order of inheritance. But if a base class pointer points to a derived class object and we later use the delete operator on that pointer, the derived class destructor is not called. Refer to the code that follows:



class base
{
public:
    ~base()
    {
    }
};

class derived : public base
{
public:
    ~derived()
    {
    }
};

int main()
{
    base *ptr = new derived();
    // some code
    delete ptr;   // only ~base() runs; ~derived() is skipped
    return 0;
}

The result is a memory leak: the derived destructor never runs, so any resources it would have released are lost (strictly speaking, deleting through a base pointer without a virtual destructor is undefined behavior).

Solution to avoid this -

Make the destructor virtual in the base class:



class base
{
public:
    virtual ~base()
    {
    }
};

class derived : public base
{
public:
    ~derived()
    {
    }
};

int main()
{
    base *ptr = new derived();
    // some code
    delete ptr;   // now ~derived() runs first, then ~base()
    return 0;
}


Does this mean that when using virtual functions, it is always a good idea to have the destructor virtual as well?

Sunday, August 16, 2009

what is ld-linux.so.2

Back in April, I wrote about a particular use of ld-linux.so.2. It's worth knowing, however, in a little more detail what the purpose of ld-linux.so.2 is.

ld-linux.so is the locator and loader of dynamic (shared) libraries on your system. Most applications these days use shared libraries (instead of statically linked ones). When a program is loaded, Linux passes control to ld-linux.so.2 instead of the normal entry point of the application. ld-linux.so.2 then searches for and loads the unresolved libraries, and passes control to the application's starting point.

To understand how a program loads, it's useful to understand ELF. The ELF (Executable and Linking Format) specification defines how an object file is composed and organized. With this information, the kernel and the binary loader (ld-linux.so in our case) know how to load the file, where to look for the code, where to look for the initialized data, which shared libraries need to be loaded, and so on.

ld-linux.so.2 is the runtime component of the linker (ld); it locates and loads into memory the dynamic libraries used by the application.

A little more about ELF (from wikipedia).

Each ELF file is made up of one ELF header, followed by file data. The file data can include:

  • Program header table, describing zero or more segments
  • Section header table, describing zero or more sections
  • Data referred to by entries in the program header table, or the section header table
The segments contain information that is necessary for runtime execution of the file, while sections contain important data for linking and relocation.

There are a few useful tools to read ELF files:
  • ldd prints the shared library dependencies.
  • readelf is a Unix binary utility that displays information about one or more ELF files.
  • objdump provides a wide range of information about ELF files and other object formats.
example:

rutul@ubuntu:~/test_progs$ gcc hello_world.c
rutul@ubuntu:~/test_progs$ ldd a.out
linux-gate.so.1 => (0xb7ef2000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7d80000)
/lib/ld-linux.so.2 (0xb7ef3000)
rutul@ubuntu:~/test_progs$
rutul@ubuntu:~/test_progs$ readelf -l a.out

Elf file type is EXEC (Executable file)
Entry point 0x8048310
There are 8 program headers, starting at offset 52

Program Headers:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
PHDR 0x000034 0x08048034 0x08048034 0x00100 0x00100 R E 0x4
INTERP 0x000134 0x08048134 0x08048134 0x00013 0x00013 R 0x1
[Requesting program interpreter: /lib/ld-linux.so.2]
LOAD 0x000000 0x08048000 0x08048000 0x004c4 0x004c4 R E 0x1000
LOAD 0x000f0c 0x08049f0c 0x08049f0c 0x00108 0x00110 RW 0x1000
DYNAMIC 0x000f20 0x08049f20 0x08049f20 0x000d0 0x000d0 RW 0x4
NOTE 0x000148 0x08048148 0x08048148 0x00020 0x00020 R 0x4
GNU_STACK 0x000000 0x00000000 0x00000000 0x00000 0x00000 RW 0x4
GNU_RELRO 0x000f0c 0x08049f0c 0x08049f0c 0x000f4 0x000f4 R 0x1

Section to Segment mapping:
Segment Sections...
00
01 .interp
02 .interp .note.ABI-tag .hash .gnu.hash .dynsym .dynstr .gnu.version .gn
u.version_r .rel.dyn .rel.plt .init .plt .text .fini .rodata .eh_frame
03 .ctors .dtors .jcr .dynamic .got .got.plt .data .bss
04 .dynamic
05 .note.ABI-tag
06
07 .ctors .dtors .jcr .dynamic .got
rutul@ubuntu:~/test_progs$

Friday, August 14, 2009

the road to 64-bit?


I am trying out a spanking new laptop with Intel T4300 (Dual Core) CPU which came pre-installed with a 64-bit Windows.

Since it's time to get Ubuntu set up on here, the question I am researching is whether I should make the leap to 64-bit Ubuntu.

From what I have read so far on the discussion boards, there is not much missing on 64-bit. The two questions:
a) Is there any benefit of running 64-bit?
b) Is "everything" supported on 64-bit?

a) Maybe I will not notice a performance improvement in applications like Firefox or Eclipse. If I do any image or video processing, that may show an improvement, and I suppose I might even see one in mp3 decoding/encoding.
The major benefit of using 64-bit, though, seems to be expanding the user base. The key being, the more people use it, the more issues get reported and resolved.

b) If there are any missing packages on 64-bit, I should have an idea of those before I get set up. That is an exercise in itself.
One of the issues I read about with versions before 8.04 was that the kernel did not have tickless support, which meant a drain on the battery. That's not going to be an issue since I am planning to install 9.04.
There were a few complaints about the Flash and Java plug-ins in Firefox.

I suppose the best option is to try it out, and when I run into issues, turn to the trusty Ubuntu forums for help.

Ok, ready drive...

Joe's Chinese setup tips in Ubuntu

It is always good to meet fellow Ubuntu enthusiasts who are passionate about sharing information, experiences and knowledge.

Joe has written a few useful posts on setting up your Ubuntu system for Chinese, with good information on using Chinese input methods and fonts in OpenOffice.

Be sure to check it out if interested: Pinyin Joe's Ubuntu Linux Chinese setup

Monday, August 10, 2009

virtual memory and The Thing King

Knowing how virtual memory works is very useful in programming. Jeff Berryman's 1972 explanation of the basics is still one of the best ways to get started.

The Thing King and the Paging Game

Note: This note is a formal non-working paper of the Project MAC Computer Systems Research Division. It should be reproduced and distributed wherever levity is lacking, and may be referenced at your own risk in other publications.

Rules
1. Each player gets several million things.
2. Things are kept in crates that hold 4096 things each. Things in the same crate are called crate-mates.
3. Crates are stored either in the workshop or the warehouses. The workshop is almost always too small to hold all the crates.
4. There is only one workshop but there may be several warehouses. Everybody shares them.
5. Each thing has its own thing number.
6. What you do with a thing is to zark it. Everybody takes turns zarking.
7. You can only zark your things, not anybody else’s.
8. Things can only be zarked when they are in the workshop.
9. Only the Thing King knows whether a thing is in the workshop or in a warehouse.
10. The longer a thing goes without being zarked, the grubbier it is said to become.
11. The way you get things is to ask the Thing King. He only gives out things by the crateful. This is to keep the royal overhead down.
12. The way you zark a thing is to give its thing number. If you give the number of a thing that happens to be in a workshop it gets zarked right away. If it is in a warehouse, the Thing King packs the crate containing your thing back into the workshop. If there is no room in the workshop, he first finds the grubbiest crate in the workshop, whether it be yours or somebody else’s, and packs it off with all its crate-mates to a warehouse. In its place he puts the crate containing your thing. Your thing then gets zarked and you never know that it wasn’t in the workshop all along.
13. Each player’s stock of things have the same numbers as everybody else’s. The Thing King always knows who owns what thing and whose turn it is, so you can’t ever accidentally zark somebody else’s thing even if it has the same thing number as one of yours.
Notes
1. Traditionally, the Thing King sits at a large, segmented table and is attended to by pages (the so-called “table pages”) whose job it is to help the king remember where all the things are and who they belong to.
2. One consequence of Rule 13 is that everybody’s thing numbers will be similar from game to game, regardless of the number of players.
3. The Thing King has a few things of his own, some of which move back and forth between workshop and warehouse just like anybody else’s, but some of which are just too heavy to move out of the workshop.
4. With the given set of rules, oft-zarked things tend to get kept mostly in the workshop while little-zarked things stay mostly in a warehouse. This is efficient stock control.

Thursday, August 6, 2009

const pointer and pointer to a const

Here is a good question; what's the difference between:
a) const char* c;
b) char* const c;

Answer:
A pointer is itself a variable that holds the memory address of another variable - it can be used as a "handle" to the variable whose address it holds.

a) This is a changeable handle/pointer to a const variable.
b) This is a const handle/pointer to a changeable variable.

This example might explain better:
#include <stdio.h>

int main()
{
    int for_a = 100;
    int for_b = 200;
    int for_test = 300;

    const int* a = &for_a;
    int* const b = &for_b;

    a++; // allowed: the pointer itself can change
    a--;
    // *a = for_test; // not allowed: "assignment of read-only location"

    // b++; // not allowed: "increment of read-only variable"
    *b = for_test; // allowed: the pointed-to value can change

    printf("Value of *a = %d, value of *b = %d \n", *a, *b);

    return 0;
}


Incidentally, 'const char* const c;' would then mean a non-changeable pointer to a non-changeable variable.

Wednesday, August 5, 2009

what does segmentation fault mean?


Probably the best explanation of program memory I have seen is this article:
Anatomy of a Program Memory


Once you understand the program memory, you get a better idea of what causes a "segmentation fault":

In summary, the major reasons for a segmentation fault are:

a) Trying to read from or write to addresses in the kernel space of your program memory.
b) Trying to push more data onto the stack than it can hold.
c) Trying to write to the text segment of the process memory (the text segment is where the binary image of the process is stored).
d) Trying to access unallocated memory.
When you ask the OS for memory, the kernel creates an entry for you in a VMA - Virtual Memory Area. If you try to access an address in memory and no suitable VMA exists for it, your program gets a segmentation fault.

extern/static in function declaration

I have never really paid attention to the 'extern' keyword in a function declaration, so here is my understanding of it.

In C:

In its simplest form, the 'extern' keyword changes the linkage so that resolving the symbol is deferred to the linker; it is assumed that the function is defined/available somewhere else.

The 'static' keyword on a function declaration, on the other hand, makes the function local to that file.

If you use neither the extern nor the static keyword when declaring a function, it defaults to extern.

static variables in C are of course a different story: at file scope they get internal linkage, and inside a function they keep their value across calls.

In C++:

'static' functions in C++ have a very different usage. A static member function of a class can be called without instantiating an object of the class. The same is true of the static member variables of a class.

The static functions do have their limitations:

  • A static member function can access only static member data, static member functions, and data and functions outside the class.
  • A static member function cannot be declared virtual, whereas a non-static member function can be.
  • A static member function has no access to the 'this' pointer of the class.

wubi and update to new release

I have been recommending Wubi to a lot of folks as a simple way to switch to using Ubuntu without having to worry about partitioning, etc. Wikipedia has a pretty good write-up on Wubi.

The problem, as I had expected, is that upgrading to a new release when available comes with a host of problems.

This discussion thread talks about the problem. It is fairly recent, so I don't think there is a solution for this yet. Does anybody have different information?

Tuesday, August 4, 2009

which version?

I keep having to look up which version of the distribution I am running, and like everything Linux, every distribution has decided to use its own way.

For Ubuntu:
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=8.04
DISTRIB_CODENAME=hardy
DISTRIB_DESCRIPTION="Ubuntu 8.04.3 LTS"


For RedHat/Fedora:

#cat /etc/redhat-release


For SUSE:
$ cat /etc/SuSE-release
openSUSE 11.1 (i586)
VERSION = 11.1

Sunday, July 26, 2009

msft drivers for Linux

This has been all over the tech news last week: Microsoft contributing code to the kernel (under the GPL for the first time). The drivers help Linux run better on a Windows host.

http://news.cnet.com/8301-13860_3-10290818-56.html

I like the note at the bottom of the article...
"For those that want to hear Microsoft's take on the news, here's a video of Hanrahan discussing the move with Sam Ramji, the company's senior director of platform strategy. (Note: Silverlight is required.) "

Irony anyone?

Thursday, July 16, 2009

crowdsourcing to find answers



I have been using stackoverflow every so often to get answers and it's generally to questions others have asked.

Figured it's finally time to get a little more proactive in looking for answers by asking the questions myself.
http://stackoverflow.com/questions/1134931/reading-understanding-third-party-code

use gcc to help you deprecate methods/functions in c/c++

Every so often you need to deprecate/remove/rename classes and functions in a very large project. Rather than grepping your way through the large source tree and compiling the project by trial and error, a good way to do this is to use the __attribute__ mechanism.

You assign the "deprecated" attribute to the variables, functions, methods that you plan to phase out.

GCC will then emit a warning at every place they are used.

example:
class Foo {
public:
    Foo() __attribute__((deprecated)) {}
};

The thing to check is that you are not using -Wno-deprecated-declarations when building.

Since I recently switched to NetBeans, I can do a "Find Usages" across the entire project, but very often your project only covers the part of the code base you personally have been working on.

Wednesday, July 15, 2009

MontaVista 1-second boot

It is impressive when you can boot within a second, even in an embedded environment. MontaVista is marketing the right feature in this video.



Obviously this is more difficult to achieve on architectures that have storage devices and need to load drivers.

Sunday, July 5, 2009

openproj

I recently tried out OpenProj and it worked well for some minimal stuff. It seems to be feature rich and useful to open MSProject files as well.

Check it out at http://openproj.org/ or on sourceforge.

OpenProj by Serena Software is a desktop replacement of Microsoft Project. OpenProj has equivalent functionality, a familiar user interface and even opens existing MSProject files. OpenProj is interoperable with Project, with a Gantt Chart and PERT chart.

Saturday, July 4, 2009

wchar and unicode

Instead of using the standard ASCII characters, there is sometimes a need to support an alphabet that has more than 256 (2^8) characters. That is where you use Unicode, where (in the encoding discussed here) every character is 16 bits, giving you 65536 (2^16) characters.

In C, for example, a wchar_t holds a Unicode character. The regular C string functions won't work on Unicode strings, so instead you use the C runtime library functions available for wide strings, prefixed with wcs.
Example: wcslen, wcscpy and such.

Of course the C compiler and the runtime library must have support for Unicode if you plan to use it.

You tell the C compiler that you plan to use Unicode through a macro:

#define _UNICODE // Tell C we're using Unicode, notice the _
#include <wchar.h> // Include Unicode support functions

Then define a Unicode string:

wchar_t string[] = L"Ubuntu rocks";

The L prefix tells the compiler that this is a wide-character literal (an array of wchar_t) rather than a plain char string.

Here is the wchar.h header file.


Wednesday, June 10, 2009

now reading


It's surprising how much fun it is getting free stuff!

I received my free copy of "Best Kept Secrets of Peer Code Review" and it has been fun reading so far. I would recommend getting yourself a free copy.

On a related note, I was recently discussing how computer science, and specifically programming courses, are taught in schools/colleges/universities. Too little emphasis is placed on reading code and too much on writing it. Realistically, I think it should be the other way around. In my years of software engineering, I have probably spent more time reading, understanding, reusing, modifying or removing code than writing it. And that is a skill I unfortunately had to pick up once I started working.

Tuesday, June 9, 2009

useful link for crossplatform c++ development

Found this link in one of the replies on a StackOverflow question.

http://predef.sourceforge.net/index.php

It has a lot of useful information and links if you need to write code that runs on multiple platforms.

Monday, June 8, 2009

mcelog

At work, I have to decode kernel panics on 64-bit systems occasionally.

mcelog, it seems, could be a useful tool.

mcelog decodes machine check events (hardware errors) on x86-64 machines running a 64-bit Linux kernel. It should be run regularly as a cron job on any x86-64 Linux system (if it is not in the default packages on your x86-64 distribution, please complain to your distributor). It can also decode machine check panic messages from console logs.
I don't have a good example of its usage, but on one of my systems I noticed this in /var/log/mcelog (the cron script is set up to write to /var/log/mcelog in Fedora distributions).


MCE 0
HARDWARE ERROR. This is *NOT* a software problem!
Please contact your hardware vendor
CPU 2 BANK 3 TSC c82ff2586f6b0
ADDR 219540
STATUS 942000470001010a MCGSTATUS 0

Tuesday, June 2, 2009

ss - clear TCP and UDP socket information

The ss command gives very detailed TCP and UDP socket information and can be useful for breaking down the information that netstat provides.

Most useful options are:
-m, --memory
Show socket memory usage.
-p, --processes
Show process using socket.
-i, --info
Show internal TCP information.

Tuesday, May 26, 2009

neat link trick

I wonder why more web pages that have instructions on how to install/configure/set up stuff in Ubuntu don't use this trick. Apparently, you can use Firefox to install applications.

For example this: apt:compizconfig-settings-manager

I might be wrong, but it seems all you need is to set up the hyperlink with "apt:". Neat.

c++filt

Every now and then, when debugging some C++ source code, I come across mangled function names.

You can use c++filt to demangle the junk you see into recognizable user-level function names.

Function names get mangled to support overloading. From the man page: "All C++ and Java function names are encoded into a low-level assembly label (this process is known as mangling). The c++filt program does the inverse mapping: it decodes (demangles) low-level names into user-level names so that the linker can keep the overloaded functions from clashing."
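You can try it straight from the shell; these manglings follow the Itanium C++ ABI that g++ uses:

```shell
echo _Z3foov | c++filt          # prints: foo()
echo _ZN3Foo3barEi | c++filt    # prints: Foo::bar(int)
```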

mounting a cifs server that you can access

It's troubling that I had to do a lot of searching to be able to do something as simple as this:
a) On my Ubuntu desktop, mount a CIFS server where I have an account as user "xyz".
b) Access the mount with my Ubuntu desktop user account "abc".

I suppose I don't clearly understand the problem. The issue as I see it is that you can only mount using sudo, and once mounted, the local Ubuntu account cannot access it without the right permissions on the mount point.

I had to do the following:
a) Give the local mount point/directory the permissions for "abc".
b) sudo mount -t cifs //cifs_server/xyz_account /mnt/local_mount_point -o rw,nosuid,nodev,noatime,uid=1000,umask=077

Friday, April 24, 2009

Let's get Jaunty



I am currently upgrading one of my desktops to Jaunty. It is a system I use at work and for the longest time it was running 8.04 which I was happy with.

Then I read about GNOME Do in 8.10, and was tempted to upgrade so that I stop using katapult. However, my experience with 8.10 wasn't very satisfactory. There were a couple of X Windows crashes which I did not have time to look into.

Anyways, I decided to give 9.04 a shot.

On a related note, I came across this article on how Jaunty Jackalope (9.04) is "as slick as Windows 7 and Mac OS X". Not sure if that is the experience that I am looking for.

Tuesday, April 14, 2009

Free Ubuntu eBooks

This link has a list of free (as in speech) Ubuntu eBooks.

I have never read or even flipped through a Linux distribution book, so I am not sure how useful these are. I suppose one of these days I will have to stop relying on google search to give me answers and actually pick up/print a book.

Monday, April 13, 2009

stacktrace


One of the quickest ways to debug code is to be able to tell how you got to a certain point in your code. So it is useful to be able to dump a backtrace/stacktrace at any point in your code.

I think you can find really good resources elsewhere to understand what a program stack is (for example this or this), so I won't get into it. As a side note, those two links are some of the best ones I have found for understanding program memory and the difference between the program heap and stack.

In C/C++:

This article gives really good information.
In summary, glibc provides backtrace() and backtrace_symbols() to do this quickly.

In JAVA:

Use the very useful Throwable class with your logging (Log here is Android's android.util.Log).

Example:
Throwable t = new Throwable();
String trace = Log.getStackTraceString(t); // pass this to your logger

ld-linux.so.2

When running your application, shared libraries are searched for wherever the LD_LIBRARY_PATH environment variable points.

However, sometimes when running an application that you received along with a specific library, you can use another way to indicate just that. ld-linux.so.2 is the Linux ELF program loader. It has a few options, but for our use we need the --library-path option.

--list list all dependencies and how they are resolved
--verify verify that given object really is a dynamically linked
object we can handle
--library-path PATH use given PATH instead of content of the environment
variable LD_LIBRARY_PATH
--inhibit-rpath LIST ignore RUNPATH and RPATH information in object names
in LIST

Example:
# /lib/ld-linux.so.2 --library-path my/shared/libs my_app

Friday, March 6, 2009

write in C

Nerdy and funny.....he is right, 'Write in C'

Tuesday, March 3, 2009

wordpress - disable/enable comments on all posts

Since I finally got a chance to enable CAPTCHA on my personal blog, which uses WordPress, I needed a quick way to turn comments back on, having disabled them for my posts.

This tutorial was good, but I do not have phpMyAdmin.

So, my steps were:

mysql -u -p
mysql> show tables;
+--------------------+
tables in ....
+--------------------+

....
mysql> UPDATE wp_posts p SET comment_status = 'open', ping_status = 'open' WHERE comment_status = 'closed';

Of course, to reverse this:

mysql> UPDATE wp_posts SET comment_status = 'closed', ping_status = 'closed' WHERE comment_status = 'open';

worked like a charm.

--- Addition

Enabling comments was fine, but the SpamBots are now abusing the trackbacks option. So, I need to disable trackbacks too.

mysql> UPDATE wp_options SET option_value = 'closed' WHERE option_id = 20 AND blog_id = 0;

(The option_id of 20 is specific to my install; check your wp_options table for the row holding the default_ping_status setting before running this.)

Tuesday, February 24, 2009

Ubuntu and Eclipse and svn

In the last two months, I have been spending some time developing phone applications in Java, using Eclipse as the IDE. Never having written Java before, I cannot imagine how long the whole development process would have taken were it not for the IDE.

So far in my development life, all my C/C++ code writing has required:
- a Linux system (i.e. 'grep')
- ssh access to the build server (with the right kernel, glibc, and gcc versions)
- vim + ctags for source code
- valgrind for finding memory leaks
- gdb for debugging

So the whole Eclipse+Java experience got me wondering whether I could get any benefit from using Eclipse for a large C++ project that I am involved with at work. Maybe it will at least help prevent carpal tunnel from having to grep the life out of the project.

It seems the default Eclipse install for Ubuntu 8.04 is version 3.2 which is fairly old. So, it is best to download the newest version rather than use apt-get.

The next challenge was getting the Subversion plugin.

Apparently, trying to add the svn plugin kept giving me an error:
"
An error occurred during provisioning.
Cannot connect to keystore.
JKS
"

Fortunately, some Google searching pointed me to the fix. The plugin install needs Java 1.6 or newer. Also, Java 1.6 has to be the default JRE, which it most likely won't be even after doing:

sudo apt-get install sun-java6-bin sun-java6-jdk

The Ubuntu Java Installation documentation explains how to select the default Java version on your system.

You could list the installed versions with
update-java-alternatives -l
or manually pick java-6-sun using
sudo update-alternatives --config java

Oh well, I now finally have the C++ project pulled in from svn under Eclipse; at the very least, I can browse the code.

Also, if anybody is looking for suggestions on which tools other developers prefer for C++ coding, check out this stackoverflow link.

Friday, February 20, 2009

testing software systems with a GUI

I am not a QA engineer, nor do I play one in my professional life, but I read/heard this somewhere recently and it makes sense.

If you are testing software or a system with a GUI, don't test the underlying functionality through the GUI. Develop alternate methods for that, and devote a separate effort to testing the GUI itself, since the GUI contains enough logic and code of its own that needs to be tested.

I believe the "Unix way of software development" is to build a command line interface as a first step toward any future GUI enhancements. That gives you a clean separation between functionality and GUI, and a good avenue for testing.

Also part of that same note was this nice line where the author said that the Unix way of development has its downside: it does not necessarily lead to a great app, because the GUI tends to just follow the command line. The proposed solution: if you disabled command line access for all Unix programmers in the world, you would see a substantial improvement in the GUI quality of their applications.

Thursday, February 19, 2009

open source venture funding

Since I am a sucker for all things open source, I found a recent post on Mark Cuban's blog very interesting: he challenges anybody with an idea to submit a business plan and be funded instantly.

As he puts it.....
You must post your business plan here on my blog where I expect other people can and will comment on it. I also expect that other people will steal the idea and use it elsewhere. That is the idea. Call this an open source funding environment.

Pretty cool challenge. I expect a lot of phone applications that will meet the 13 criteria listed.

grub error 17


I finally managed to free up an old system at work and put it in the lab so that others on my team can start getting comfortable using a Linux desktop.

I installed 8.04 since I am still a little wary of 8.10.

A very simple install with just a single root partition and a swap partition. Things seemed fine for a few days, after which the system just locked up. After forcing a power cycle, grub complained with "Error 17".
A quick Google search will tell you that this has something to do with the hard disk: probably a failed disk, or at the very least some errors.

Fortunately it was just a case of errors on the disk, which I was able to fix by booting the live-cd and doing the following:

sudo -s
fdisk -l # to figure out the root partition

fsck.ext3 -y /dev/sda2 # to find and fix disk errors

fsck found and fixed a few errors. The subsequent reboot was fine.

Searching for the error on Google showed a good discussion on the Ubuntu forums about another case where you might run into this error after a fresh install, probably when you have multiple disks and/or more partitions. Check it out:


Monday, February 16, 2009

wubi and booting error

I have been running 8.04 at work using wubi, and it has been working like a charm for the last 6 months.

However, I just ran into an issue where after a reboot, instead of booting, it dropped me into a busybox prompt. The problem was that grub could not find the initrd, probably due to a few _corrupted_ disk sectors where wubi had my Ubuntu image installed.

Simple fix: boot into Windows (hopefully that still works) and run 'chkdsk' on the drive.

Oh well, the price you pay for not giving yourself a dedicated partition.

Monday, February 9, 2009

formatting a drive for fat32

To quickly format a drive to fat32 (for Windows) use mkfs.msdos:

Example: sudo mkfs.msdos -I -F 32 /dev/sdc

Other mkfs variants for other filesystems:

mkfs.bfs
mkfs.ext2
mkfs.minix
mkfs.reiserfs
mkfs.cramfs
mkfs.ext3
mkfs.msdos
mkfs.vfat

Monday, February 2, 2009

Ubuntu Pocket Guide


One of the strongest reasons for Ubuntu's rise in popularity as the Linux desktop distribution of choice is that you can pretty much search for answers to any of your questions. In case the entire Internet is not enough to give you the information you are looking for, feel free to download this Ubuntu Pocket Guide and Reference.

Free (as in open source) download!

Thursday, January 8, 2009

Windows 7 eerily similar to Fedora

versus


Seems like this will be the year for a Windows 7 release.

I haven't used Vista at all, and the last time I booted into XP was a while ago. But, the first thing that strikes me when I look at the screen shot is how similar it is to the Fedora look. Then again, it has been a while since I used Fedora either.

If Windows was trying to pick up design inspiration, they should have just googled 'Ubuntu screenshots'.

Wednesday, January 7, 2009

static variables in a C++ class

Mature software engineering practice recommends avoiding global variables, so sometimes you end up using static data members in C++ classes instead.

I recently learned this about using static variables in C++ classes: you need to define the static data member outside the class declaration. Failing to do so results in a linker error, because without the definition the compiler doesn't know which translation unit (and hence object file) the member should go in.

Sample code:
foo.h

class bar {
public:
    int getBar1();
private:
    static int bar1;
};

foo.cpp

#include "foo.h"

int bar::bar1 = 0; // the definition that must live in exactly one .cpp file

int bar::getBar1() {
    return bar1;
}

One more reason why I don't consider myself a C++ programmer.

Flying Linux

As I am getting ready for a flight this weekend on Virgin America, I remembered something I read a few months back.

The in-flight entertainment system on Virgin America, RED, runs on Fedora/Red Hat Linux. Very cool.

Here is a good interview with Charles Ogilvie, the Director of In-flight Entertainment and designer of RED. I am certainly looking forward to checking it out. At least there are still one or two good reasons to fly.