Sunday, August 30, 2009

you have to be kidding me with Vista


I am not one to rant, and I certainly appreciate the complications of building an operating system.

But you have got to be kidding me with this Vista crap. I have been using it on a work laptop (it was provided to me) for about a week, and I have had the laptop actually CRASH (blue screen) after plugging in and unplugging a USB mouse, after plugging in my USB cell phone, and after trying to wake up from hibernation.

Oh, and unless I enable "sending of null packets to keep session active" and TCP keepalives in my PuTTY sessions, I get "Network error: Software caused connection abort" every few minutes.

People pay to buy this crap?

Okay, rant over.

Monday, August 24, 2009

final sign on the "road to 64-bit"


I wrote about switching to 64-bit Ubuntu a few days ago.

Then I added an update a few days later with more information.

The major conclusion there was that if you have more than 3 GB of RAM, you will not be able to use all of it unless you run a customized kernel or 64-bit Ubuntu.

Anyway, not that I needed more convincing, but tuxradar has a damn good write-up with performance numbers comparing the 32-bit and 64-bit versions of Ubuntu 9.04:

Ubuntu 9.04 32-bit v/s 64-bit performance numbers

I think this conclusion summarizes it nicely -
"Putting aside the issue of Flash for a moment, moving from 32-bit to 64-bit is pretty much painless. In fact, you can't tell the difference without running uname -a in a terminal - all the programs you're used to are likely to run identically, and ultimately it's only a matter of time before x86-64 becomes the standard."

There is no turning back; I am convinced 64-bit is the way to go.

Monday, August 17, 2009

update on "the road to 64-bit"

A couple of days ago, I wrote about moving to 64-bit; it seems like it's going to be a simple decision for me.

A 32-bit operating system is limited to 4 GB of memory (it can only reference 2^32 bytes = 4 GB). However, 32-bit Ubuntu is limited to 3 GB. If you are deciding between 32-bit and 64-bit Ubuntu and you have more than 3 GB of memory, that should be a simple decision.

There is a workaround, however, if you are still not convinced 64-bit is the way to go: install the server kernel, which has support for up to 4 GB. But think about the future, when your 4 GB will not be enough and you add more memory.

There is a bug (Bug 74179) that covers releasing a 32-bit option for high-memory (> 3 GB) systems.

kerneltrap.org has a good page on kernel and high memory with a good explanation of physical memory versus virtual memory.

memory leak with inheritance and destructors

I use inheritance and virtual functions fairly frequently, but I am wondering how common this is:

A little bit of context -

When using inheritance, destructors are called in the reverse order of construction. If a base class pointer points to a derived class object and we later use the delete operator on that pointer, the derived class destructor is not called unless the base class destructor is virtual. Refer to the code that follows:



class base
{
public:
    ~base()        // non-virtual destructor
    {
    }
};

class derived : public base
{
public:
    ~derived()     // never runs when deleting through a base pointer
    {
    }
};

int main()
{
    base *ptr = new derived();
    // some code
    delete ptr;    // only ~base() is called
    return 0;
}

The result: the derived destructor never runs, so any resources it manages are leaked (formally, deleting through a base pointer without a virtual destructor is undefined behavior).

Solution to avoid this -

Make the destructor virtual in the base class. The delete will then dispatch through the vtable and run the derived destructor first.



class base
{
public:
    virtual ~base()    // virtual destructor fixes the problem
    {
    }
};

class derived : public base
{
public:
    ~derived()         // implicitly virtual as well
    {
    }
};

int main()
{
    base *ptr = new derived();
    // some code
    delete ptr;        // calls ~derived(), then ~base()
    return 0;
}


Does this mean that when using virtual functions, it is always a good idea to have the destructor virtual as well?

Sunday, August 16, 2009

what is ld-linux.so.2

Back in April, I wrote about a particular use of ld-linux.so.2. It is worth understanding the purpose of ld-linux.so.2 in a little more detail, however.

ld-linux.so is the locator and loader of dynamic (shared) libraries on your system. Most applications these days use shared libraries (instead of statically linked ones). When a program is loaded, Linux passes control to ld-linux.so.2 instead of the normal entry point of the application. ld-linux.so.2 then searches for and loads the unresolved libraries, and passes control to the application's starting point.

To understand how a program loads, it's useful to understand ELF. The ELF (Executable and Linkable Format) specification defines how an object file is composed and organized. With this information, the kernel and the binary loader (ld-linux.so.2 in our case) know how to load the file, where to look for the code, where to look for the initialized data, which shared libraries need to be loaded, and so on.

ld-linux.so.2 is the runtime component of the linker (ld); it locates and loads into memory the dynamic libraries used by the application.

A little more about ELF (from wikipedia).

Each ELF file is made up of one ELF header, followed by file data. The file data can include:

  • Program header table, describing zero or more segments
  • Section header table, describing zero or more sections
  • Data referred to by entries in the program header table, or the section header table
The segments contain information that is necessary for runtime execution of the file, while sections contain important data for linking and relocation.

There are a few useful tools to read ELF files:
  • ldd prints the shared library dependencies.
  • readelf is a Unix binary utility that displays information about one or more ELF files.
  • objdump provides a wide range of information about ELF files and other object formats.
example:

rutul@ubuntu:~/test_progs$ gcc hello_world.c
rutul@ubuntu:~/test_progs$ ldd a.out
linux-gate.so.1 => (0xb7ef2000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7d80000)
/lib/ld-linux.so.2 (0xb7ef3000)
rutul@ubuntu:~/test_progs$
rutul@ubuntu:~/test_progs$ readelf -l a.out

Elf file type is EXEC (Executable file)
Entry point 0x8048310
There are 8 program headers, starting at offset 52

Program Headers:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
PHDR 0x000034 0x08048034 0x08048034 0x00100 0x00100 R E 0x4
INTERP 0x000134 0x08048134 0x08048134 0x00013 0x00013 R 0x1
[Requesting program interpreter: /lib/ld-linux.so.2]
LOAD 0x000000 0x08048000 0x08048000 0x004c4 0x004c4 R E 0x1000
LOAD 0x000f0c 0x08049f0c 0x08049f0c 0x00108 0x00110 RW 0x1000
DYNAMIC 0x000f20 0x08049f20 0x08049f20 0x000d0 0x000d0 RW 0x4
NOTE 0x000148 0x08048148 0x08048148 0x00020 0x00020 R 0x4
GNU_STACK 0x000000 0x00000000 0x00000000 0x00000 0x00000 RW 0x4
GNU_RELRO 0x000f0c 0x08049f0c 0x08049f0c 0x000f4 0x000f4 R 0x1

Section to Segment mapping:
Segment Sections...
00
01 .interp
02 .interp .note.ABI-tag .hash .gnu.hash .dynsym .dynstr .gnu.version .gn
u.version_r .rel.dyn .rel.plt .init .plt .text .fini .rodata .eh_frame
03 .ctors .dtors .jcr .dynamic .got .got.plt .data .bss
04 .dynamic
05 .note.ABI-tag
06
07 .ctors .dtors .jcr .dynamic .got
rutul@ubuntu:~/test_progs$

Friday, August 14, 2009

the road to 64-bit?


I am trying out a spanking-new laptop with an Intel T4300 (Dual Core) CPU which came pre-installed with 64-bit Windows.

Since it's time to get Ubuntu set up on here, the question I am researching is whether I should make the leap to 64-bit Ubuntu.

From what I have read so far on the discussion boards, there is not much missing on 64-bit. The two questions:
a) Is there any benefit of running 64-bit?
b) Is "everything" supported on 64-bit?

a) Maybe I will not notice a performance improvement in applications like Firefox or Eclipse. If I do any image or video processing, maybe that shows an improvement. I suppose I might even see an improvement in mp3 decoding/encoding.
The major benefit of using 64-bit, though, seems to be expanding the user base. The key being: the more people use it, the more issues get reported and resolved.

b) If there are any missing packages on 64-bit, I should have an idea of those before I get set up. That is an exercise in itself.
One of the issues I read about with versions before 8.04 was that the kernel did not have tickless support, which meant a drain on the battery. That's not going to be an issue since I am planning to install 9.04.
There were a few complaints about the Flash and Java plug-ins in Firefox.

I suppose the best option is to try it out and, when you run into issues, turn to the trusty Ubuntu forums for help.

Ok, ready drive...

Joe's Chinese setup tips in Ubuntu

It is always good to meet fellow Ubuntu enthusiasts who are passionate about sharing information, experiences and knowledge.

Joe has written a few useful posts on setting up your Ubuntu system for Chinese, with good information on using Chinese input methods and fonts in OpenOffice.

Be sure to check it out if interested: Pinyin Joe's Ubuntu Linux Chinese setup

Monday, August 10, 2009

virtual memory and The Thing King

Knowing how virtual memory works is very useful in programming, and Jeff Berryman's 1972 explanation of the basics is just too good a starting point not to share.

The Thing King and the Paging Game

Note: This note is a formal non-working paper of the Project MAC Computer Systems Research Division. It should be reproduced and distributed wherever levity is lacking, and may be referenced at your own risk in other publications.

Rules
1. Each player gets several million things.
2. Things are kept in crates that hold 4096 things each. Things in the same crate are called crate-mates.
3. Crates are stored either in the workshop or the warehouses. The workshop is almost always too small to hold all the crates.
4. There is only one workshop but there may be several warehouses. Everybody shares them.
5. Each thing has its own thing number.
6. What you do with a thing is to zark it. Everybody takes turns zarking.
7. You can only zark your things, not anybody else’s.
8. Things can only be zarked when they are in the workshop.
9. Only the Thing King knows whether a thing is in the workshop or in a warehouse.
10. The longer a thing goes without being zarked, the grubbier it is said to become.
11. The way you get things is to ask the Thing King. He only gives out things by the crateful. This is to keep the royal overhead down.
12. The way you zark a thing is to give its thing number. If you give the number of a thing that happens to be in a workshop it gets zarked right away. If it is in a warehouse, the Thing King packs the crate containing your thing back into the workshop. If there is no room in the workshop, he first finds the grubbiest crate in the workshop, whether it be yours or somebody else’s, and packs it off with all its crate-mates to a warehouse. In its place he puts the crate containing your thing. Your thing then gets zarked and you never know that it wasn’t in the workshop all along.
13. Each player’s stock of things have the same numbers as everybody else’s. The Thing King always knows who owns what thing and whose turn it is, so you can’t ever accidentally zark somebody else’s thing even if it has the same thing number as one of yours.
Notes
1. Traditionally, the Thing King sits at a large, segmented table and is attended to by pages (the so-called “table pages”) whose job it is to help the king remember where all the things are and who they belong to.
2. One consequence of Rule 13 is that everybody’s thing numbers will be similar from game to game, regardless of the number of players.
3. The Thing King has a few things of his own, some of which move back and forth between workshop and warehouse just like anybody else’s, but some of which are just too heavy to move out of the workshop.
4. With the given set of rules, oft-zarked things tend to get kept mostly in the workshop while little-zarked things stay mostly in a warehouse. This is efficient stock control.

Thursday, August 6, 2009

const pointer and pointer to a const

Here is a good question: what's the difference between:
a) const char* c;
b) char* const c;

Answer:
A pointer is itself a variable which holds a memory address of another variable - it can be used as a "handle" to the variable whose address it holds.

a) This is a changeable handle/pointer to a const variable.
b) This is a const handle/pointer to a changeable variable.

This example might explain better:
#include <stdio.h>

int main()
{
    int for_a = 100;
    int for_b = 200;
    int for_test = 300;

    const int* a = &for_a;   /* pointer to const: *a is read-only */
    int* const b = &for_b;   /* const pointer: b itself is read-only */

    a++;                     /* allowed: the pointer itself can change */
    a--;
    /* *a = for_test; */     /* not allowed: "assignment of read-only location" */

    /* b++; */               /* not allowed: "increment of read-only variable" */
    *b = for_test;           /* allowed: the pointed-to value can change */

    printf("Value of *a = %d, value of *b = %d \n", *a, *b);

    return 0;
}


Incidentally, 'const char* const c;' would then mean a non-changeable pointer to a non-changeable variable.

Wednesday, August 5, 2009

what does segmentation fault mean?


Probably the best explanation of program memory I have seen is this article:
Anatomy of a Program Memory


Once you understand the program memory, you get a better idea of what causes a "segmentation fault".

In summary, the major reasons for a segmentation fault are:

a) Trying to read from or write to addresses in the kernel space of your program's memory.
b) Trying to push more data onto the stack than it can hold (stack overflow).
c) Trying to write to the text segment of the process memory (the text segment is where the binary image of the process is stored).
d) Trying to access unallocated memory.

When you ask the OS for memory, the kernel creates an entry for you in a VMA (Virtual Memory Area). If you try to access an address for which no suitable VMA exists, your program gets a segmentation fault.

extern/static in function declaration

I have never really paid attention to the 'extern' keyword in a function declaration, so here is my understanding of this.

In C:

In the simplest form, the 'extern' keyword marks a function as having external linkage, so resolving it is deferred to the linker; it is assumed that the function is defined somewhere else.

The 'static' keyword on a function declaration, by contrast, makes the function local to that file (internal linkage).

If you use neither keyword when declaring a function, it defaults to extern.

Static C variables are, of course, a different topic from non-static variables.

In C++:

'static' functions in C++ have a very different usage. A static member function of a class can be called without instantiating an object of the class. The same applies to the static member variables of the class.

The static functions do have their limitations:

  • A static member function can access only static member data, other static member functions, and data and functions outside the class.
  • A static member function cannot be declared virtual, whereas non-static member functions can be.
  • A static member function cannot have access to the 'this' pointer of the class.

wubi and update to new release

I have been recommending Wubi to a lot of folks as a simple way to switch to using Ubuntu without having to worry about partitioning, etc. Wikipedia has a pretty good write-up on Wubi.

The problem, as I had expected, is that upgrading to a new release when available comes with a host of problems.

This discussion thread talks about the problem. It is fairly recent, so I don't think there is a solution for this yet. Does anybody have different information?

Tuesday, August 4, 2009

which version?

I keep having to look up which version of the distribution I am running, and like everything in Linux, every distribution has decided to use its own way.

For Ubuntu:
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=8.04
DISTRIB_CODENAME=hardy
DISTRIB_DESCRIPTION="Ubuntu 8.04.3 LTS"


For RedHat/Fedora:

#cat /etc/redhat-release


For SUSE:
$ cat /etc/SuSE-release
openSUSE 11.1 (i586)
VERSION = 11.1