Category: World

  • Blogging using Windows Live Writer.

    After a quick Google search I found a blogging tool for Windows called Windows Live Writer. It is a Microsoft product and it is good. I haven’t tested it much, but I am finding it useful so far. After downloading it and some tweaking of my proxy settings, I was able to configure it to work on my office’s XP machine.

  • Converting delimited text to Excel


    Non-technical people need to be able to work with data. They usually end up reaching for Excel or Access because we live in a malevolent Universe.

    Fortunately for the Perl kids there are a couple of excellent modules already written for you by our friends John McNamara (Spreadsheet::WriteExcel) and Kawai Takanori (Spreadsheet::ParseExcel). Here is an example of how you can turn Excel into delimited plain text: converting Excel to text.

    Below is a very useful and fairly generic subroutine that can take all kinds of delimited files and turn them into straightforward Excel files.

    Code
    sub text_to_excel {
        # %args should look something like...
        # ( delimiter => "\t",
        #   recordsep => "\n",
        #   file      => "/path/to/file.txt",
        #   name      => "Sheet Title" )
        # the only required args are delimiter and file

        # we require instead of use to save the load if we never end up
        # using it in a larger script or CGI, but use statements at the
        # top of the script are clearer for other programmers to follow.
        require Spreadsheet::WriteExcel;
        require IO::Scalar;

        my %args = @_;
        my ( $delimiter, $recordsep, $file, $name ) =
            @args{qw( delimiter recordsep file name )};

        $delimiter and $file or
            die "Must provide at least delimiter and file as args to " .
                "text_to_excel().";

        -e $file or
            die "There is no file: $file\n";

        open F, "<", $file or die "Can't open $file: $!";
        local $/ = $recordsep || "\n";  # also lets chomp strip the recordsep
        my @data = <F>;
        close F;

        my $xls_str;
        tie *XLS, 'IO::Scalar', \$xls_str;

        my $workbook  = Spreadsheet::WriteExcel->new(\*XLS);
        my $worksheet = $workbook->add_worksheet($name || 'Page 1');

        for ( my $row = 0; $row < @data; $row++ ) {

            chomp( my @line = split /$delimiter/, $data[$row] );

            for ( my $col = 0; $col < @line; $col++ ) {
                $worksheet->write_string($row, $col,
                    defined $line[$col] ? $line[$col] : "");
            }
        }
        $workbook->close();
        return $xls_str;
    }
    Usage
    use MIME::Lite;  # we want to mail our excel sheet

    my $file = '/data/profit_forcast';
    my $name = '2006 Profit Forcast';
    my $xls_data = text_to_excel( file      => $file,
                                  delimiter => "\t",
                                  name      => $name );

    # we've done all the work. $xls_data IS the excel file in a raw
    # format. we could do anything with it now, including writing it to a
    # file, but let's send it via email.

    my $msg = MIME::Lite->new( From    => 'traitor@sedition.com',
                               To      => 'tuna@fish.net',
                               Cc      => 'traitor@sedition.com',
                               Subject => $name,
                               Type    => 'multipart/mixed' )
        or die "PROBLEM opening MIME object: $!";

    $msg->attach( Type        => 'application/vnd.ms-excel',
                  Disposition => 'attachment',
                  Data        => $xls_data,
                  Filename    => $name . '.xls' )
        or die "PROBLEM attaching Excel file: $!";

    $msg->send() or die "PROBLEM sending MIME mail: $!";

    print "Sent $name!\n";

    Discussion

    Anyone who’s dealt with delimited files before knows that this approach has no way to escape delimiters. E.g.: if your field delimiter is a tab and your record delimiter is a newline, and one of the text fields has a tab or a return character in it, it will wreck the results.

    To work around this, I often use the NUL character ("\0") as a field delimiter and a doubled NUL ("\0\0") as a record delimiter. It will never appear in regular text files, so you don’t have to resort to Text::Balanced or something to ensure your data integrity.

    If you will ever have empty fields that cause the field delimiter to double up, you’ll have to get crafty and do something like "\0" . '_RS_' . "\0" for the record separator.

    $xls_data = text_to_excel( file      => '/path/to/file.txt',
                               delimiter => "\0",
                               recordsep => "\0\0",
                               name      => 'NULL delimited file' );
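
    The NUL trick is easy to verify from the shell. A minimal sketch (the field values here are made up): build a NUL-delimited record with printf, then use tr to map the NUL bytes back to tabs for display:

    ```shell
    # hypothetical two-field records, NUL ("\0") as the field delimiter;
    # tr turns the invisible NUL bytes back into tabs so we can see them
    printf 'alice\0001200\000bob\0003400' | tr '\000' '\t'
    ```

    Because NUL can never occur inside a text field, no quoting or balancing logic is needed when splitting.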
  • Commands.txt

    solaris commands


    1. /usr/bin/uname – display the current OS name, version and architecture.

    2. /usr/bin/uptime – Display how long the system has been up

    3. /usr/bin/prtconf – Displays detailed hardware info.

    4. /usr/bin/prstat – Display active process statistics, with the processes taking the most resources at the top.

    5. /usr/platform/sun4u/sbin/prtdiag – Displays very detailed hardware info such as CPU speed, CPU cache, and which slots the memory chips are installed in.

    6. /usr/bin/showrev – displays machine and software version info.

    7. /usr/bin/w – display info on currently logged on users.

    8. Adding users –
    #useradd -d /export/home/username -m -s /bin/ksh username
    the -m option tells the useradd command to automatically create the home directory.
    Note: do not store user directories in /home, as this directory is used by the Solaris automounter. The automounter lets a user log in to many machines and automatically have their home directory mounted in that machine’s /home area.

    9. To delete users – /usr/bin/userdel
    e.g. userdel -r username – the -r option deletes the user’s home directory as well.

    10. psrinfo -v – processor info.

    11. netstat -rn – show the routing table.

    12. ifconfig -a – show the network iface info.

    13. explorer output
    /opt/SUNWexplo/bin/explorer – it is an executable file used to generate the explorer output
    /opt/SUNWexplo/etc/ – directory contains the explorer tar files.

    14. passwd -sa — check the password status of all system users (run as root).

    Network Configuration in Solaris.
    1. To set the machine’s name, edit /etc/nodename

    2. To use DNS, edit /etc/nsswitch.conf – look for the line that starts with “hosts:”
    and add “dns” to the end of the line.
    You can instead add the “dns” entry at the very beginning of the line, which changes the order in which Solaris does its name lookups. For example, if you have “nis” before “dns” it will check the NIS database first and try to resolve the name from there, and if you have “files” before “dns” it will look in the /etc/hosts file before it looks in DNS.
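
    For reference, a sketch of two alternative “hosts:” orderings described above (only one such line would appear in /etc/nsswitch.conf at a time):

    ```
    hosts:      files dns    # look in /etc/hosts first, then DNS
    hosts:      dns files    # ask DNS first, fall back to /etc/hosts
    ```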

    3. adding entries in /etc/resolv.conf
    file: /etc/resolv.conf
    search domainname.com
    domain domainname.com
    nameserver ns1
    nameserver ns2

    4. Adding machine info to the /etc/hosts file.
    file: /etc/hosts
    ipaddr hostname

    5. edit the following files.
    /etc/net/ticlts/hosts
    /etc/net/ticots/hosts
    /etc/net/ticotsord/hosts

    6. Editing the interface name files.
    Sun systems can have multiple network cards, and each card can answer to a different hostname, so you may also have to edit a file to assign a hostname to each interface; you may want a single server to respond to many hostnames. The main network interface is usually “hme0”.
    To set an interface’s hostname, edit /etc/hostname.interface (e.g. /etc/hostname.hme0).

    7. to edit netmask.
    /etc/inet/netmasks
    —————————————————————————————————————————-
    ###Exporting Display
    ##Logged in to server A using VNC (display :1), to show server B’s programs on A’s screen:
    1. on server A: xhost +serverB
    2. ssh serverB
    3. on server B: export DISPLAY=serverA:1.0

    ##for automatic color schemes.
    ls --color=auto

    ##for time styling
    ls --time-style=+"%d-%m-%y %H%M"

    ##adding an alias in .profile
    alias name='command'
    here 'command' refers to any command (with options) that the alias should run.

    ##to show all the hidden files in one directory.
    ls -d .*

    ##to remove empty lines using sed.
    sed '/^$/d'

    ##password aging in linux if chage alone is not enough.
    chage -l username – the most appropriate option.
    else, login as root and loop over the users in /etc/passwd:
    #cut -d: -f1 /etc/passwd | while read user
    #do
    #  chage -l "$user" | grep "Password expires"
    #done

    ###Configuring Network.
    ##adding net up on command line.
    #ifconfig eth0 <ip-address> netmask <netmask> broadcast <broadcast-address> up

    ##adding the default gateway.
    #route add default gw <gateway-ip>

    ##add the nameserver entries.
    file: /etc/resolv.conf

    #nmblookup -A -d1 <ip-address>
    #smbclient -L <host> -I <ip-address> -U knoppix% -W <workgroup name> -d3

    AIX commands.
    #lscfg -vp | grep -p Cabinet — to check the cabinet no. on IBM/AIX

    #lsdev -Cc Tape — to list the tape devices.

    #rmdev -dl /dev/rmt0 — to delete rmt0 device.

    #cfgmgr -v — reread the system hardware components; if it finds anything new, it will configure it accordingly.

    #lsdev -Cc Tape — list the tape devices again to verify the drive was configured.

    #cfgmgr — same as above

    #cat /etc/exclude.rootvg — filesystems to exclude while taking complete system backup.

    #lsvg -l rootvg — list the logical volumes in the volume group rootvg

    #smit mksysb — the smit interface to take the system backup

    #tail smit.log — tail the log file to check that smit is working fine.

    #tctl -f /dev/rmt0 rewoffl — rewind the tape and take the drive offline, ejecting the tape.

    #restore -tvf /dev/rmt0 — to list the contents of the tape device

    #find ./log ./out -print | backup -ivf /dev/rmt0 | tee /tmp/log — to take backup of some files from ./log and ./out directory on tape device rmt0 while logging and printing the output on the screen.

    #restore -xqdvf /dev/rmt0 — restore the complete backup onto the hard disk. The command must be run from the directory you want to restore into, to avoid confusion about where the files land.
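
    The find | backup | tee pipeline above is an instance of a general pattern: stream a file list into an archiver while tee keeps a log. A rough sketch with portable commands (tar standing in for AIX’s backup, and throwaway paths in place of ./log and ./out):

    ```shell
    # throwaway stand-ins for the ./log and ./out directories
    work=$(mktemp -d)
    mkdir -p "$work/log" "$work/out"
    echo one > "$work/log/a.log"
    echo two > "$work/out/b.out"
    cd "$work"

    # archive everything under ./log and ./out; tee writes the file
    # list to backup.log while also printing it to the screen
    find ./log ./out -type f -print |
      tee backup.log |
      tar -cf backup.tar -T -

    tar -tf backup.tar    # list the archive contents (cf. restore -tvf)
    ```

    tee sends the file list both to backup.log and on down the pipe, so you keep a record of exactly what went into the archive.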

    ########Grub.conf — How it works
    ####Manually loading through the bootloader.

    ###This will boot the windows partition.
    rootnoverify (hd0,0)
    makeactive
    chainloader +1
    boot

    ###booting linux from the /dev/hda3 device
    root (hd0,2)
    kernel /boot/vmlinuz root=/dev/hda3 -s
    initrd /boot/initrd
    boot

    ####SHUTTING DOWN ORACLE 9i

    1. ps -aef | grep pmon -> check which oracle instances are running.
    2. sqlplus / as sysdba
    3. shutdown immediate
    4. exit
    5. ps -aef | grep ora
    6. ps -aef | grep tnslsnr
    7. kill -9 the leftover ora9ibrn processes, if any.

    ### copies a single 1024-byte block from /dev/zero (a continuous stream of zero bytes) to the file new_file.
    dd if=/dev/zero of=new_file bs=1024 count=1

    iostat -En will show the devices like c0t0d0.
    the Product line gives the size of the disk
    mount -F hsfs /dev/dsk/c0t0d0s0 /mnt

    To see all of the slices on all of the disks the easiest thing is:
    prtvtoc /dev/rdsk/*s2
    To see all disks do this:
    format < /dev/null
    appending > /dev/null 2>&1 to a crontab entry redirects the cron job’s output to /dev/null
    hwclock --systohc — set the hardware clock from the system date

    df -g |awk ‘{print $1}’
    df -g |awk ‘{print $7}’
    df -g |awk ‘{print $4}’

    To change the username, group and home directory of a user
    groupmod -n sysadmin santosh
    usermod -d /home/sysadmin -m -g sysadmin -l sysadmin santosh

    vncserver -kill :1

    psrinfo will give the number of cpus in Sun Solaris

    OGL Backup
    cd /oraapps/oracle/prodcomn/admin

    # find ./out ./log -print | backup -ivf /dev/rmtn

    pscp.exe -pw ‘password’ “local machine path” user@host:/path/to/home/

    df -g refresh loop
    while :
    do
      df -g /kcf1dr /kcfdrvg
      sleep 2
      clear
    done

    stopping one SPD device
    setsp -T -l3

    where 3 is the SPD number.

    TIP
    tip -9600 /dev/ttya
    tip -9600 /dev/ttyb

    resetting a user’s unsuccessful login count using sudo
    sudo chsec -f /etc/security/lastlog -s username -a unsuccessful_login_count=0

    mount -t ext3 -o acl <device> <mountpoint> — mount an ext3 filesystem with ACL support

    give rwx privileges on the file test to a user (prod) who does not belong to the group
    setfacl -m u:prod:rwx test
    check the privileges using

    getfacl -a test

    openssl rand -base64 6
    —————————————————————————-
    Restoration of backup
    # restore -xdvgf /dev/rmtn
    n-> no. of the tape drive attached.

    To rewind and eject the tape
    # tctl -f /dev/rmtn rewoffl

    To list the contents of the tape drive
    # restore -Tl -vf /dev/rmt0

    To check user account status (locked or unlocked, when the password expires, etc.) use:

    AIX:
    lsuser username

    Solaris:
    passwd -s username

    Linux:
    chage -l username

  • Building DVD Images Of Ubuntu Repositories

    1 Preliminary Note

    This tutorial was inspired by an article I read at http://cargol.net/~ramon/ubuntu-dvd-en. So many thanks to Ramon Acedo, who wrote the original HowTo.

    The pages have not been reachable for some weeks now. I saved the page to read it off-line. So…

    I found it useful. I hope it will be the same for you.

    2 Introduction

    This howto offers a simple way of creating DVD images of Debian or Ubuntu http/ftp repositories.

    Ubuntu doesn’t offer DVDs ready to download with its main, universe, multiverse and/or restricted repositories. With the contents of this howto you can do it yourself.

    Having the Ubuntu or Debian repositories on DVD can be useful for those users who don’t have access to the Internet where they have their Ubuntu installed but have access somewhere else to download the repository and build and burn the DVDs.

    3 Building a local mirror

    We have to install debmirror:

    sudo apt-get install debmirror

    Now we get the Ubuntu repositories in a local directory. In the example below we get main, universe and multiverse sections of the repository in the i386 architecture.

    debmirror --nosource -m --passive --host=archive.ubuntulinux.org --root=ubuntu/ --method=ftp --progress --dist=dapper --section=main,multiverse,universe --arch=i386 ubuntu/ --ignore-release-gpg

    You could change the options below as you prefer:

    • --host – the URL of the repository.
    • --dist – the distro of your OS (dapper, edgy, sarge, …).
    • --section – the sections you want to mirror locally.
    • --arch – the architecture of your box.

    4 Separating the archive into DVD-sized directories

    The repository we fetched is too big (about 30 GB) to burn to a single DVD, so we have to split it into volumes.

    The tool debpartial will do it for us.

    sudo apt-get install debpartial

    We make the directory where the volumes will reside.

    mkdir ubuntu-dvd

    and we run debpartial to construct the package descriptors for every volume.

    debpartial --nosource --dirprefix=ubuntu --section=main,universe,multiverse --dist=dapper --size=DVD ubuntu/ ubuntu-dvd/

    Now we have to put the packages into the directories debpartial has just created. The script debcopy which also comes with the debpartial package will do it. The script needs ruby.

    sudo apt-get install ruby

    If everything is ok…

    ruby debcopy ubuntu/ ubuntu-dvd/ubuntu0
    ruby debcopy ubuntu/ ubuntu-dvd/ubuntu1
    ruby debcopy ubuntu/ ubuntu-dvd/ubuntu2

    Where ubuntu/ is the directory with the complete repository created with debmirror and ubuntu-dvd/* are the directories ready to host the new DVD-ready repository.
    If we want to make soft links from the complete repository instead of copying the packages we can call debcopy with the option -l:

    ruby debcopy -l ubuntu/ ubuntu-dvd/ubuntu0
    ruby debcopy -l ubuntu/ ubuntu-dvd/ubuntu1
    ruby debcopy -l ubuntu/ ubuntu-dvd/ubuntu2

    Now every directory (ubuntu0, ubuntu1 and ubuntu2) fits on one DVD.

    5 Making iso images

    To get the directories ubuntu0, ubuntu1, ubuntu2 into an iso image ready to burn we can use mkisofs:

    mkisofs -f -J -r -o ubuntu-dvd-0.iso ubuntu-dvd/ubuntu0
    mkisofs -f -J -r -o ubuntu-dvd-1.iso ubuntu-dvd/ubuntu1
    mkisofs -f -J -r -o ubuntu-dvd-2.iso ubuntu-dvd/ubuntu2

    Now you can burn the iso images or mount them, and add them to /etc/apt/sources.list with the command:

    sudo apt-cdrom add

    Now we can verify the new repositories…

    sudo apt-get update
    sudo apt-get upgrade

    … and, if I have explained it correctly, you should have your box upgraded.

    6 About the script ‘debcopy’

    I have heard that some people cannot find the debcopy script described above.
    In that case, create a new file called debcopy wherever you want:

    gedit /your_path_to/debcopy

    and copy the lines below inside it:

    #!/usr/bin/ruby
    #
    # debcopy - Debian Packages/Sources partial copy tool
    #
    # Usage: debcopy [-l] <source> <dest>
    #
    #  where <source> is a top directory of a debian archive,
    #  and <dest> is a top directory of a new debian partial archive.
    #
    #  debcopy searches all Packages.gz and Sources.gz under <dest>/dists
    #  and copies all files listed in the Packages.gz and Sources.gz
    #  files into <dest> from <source>. -l creates symbolic links
    #  instead of copying files.
    #
    # Copyright (C) 2002  Masato Taruishi 
    #
    #  This program is free software; you can redistribute it and/or modify
    #  it under the terms of the GNU General Public License as published by
    #  the Free Software Foundation; either version 2 of the License, or
    #  (at your option) any later version.
    #
    #  This program is distributed in the hope that it will be useful,
    #  but WITHOUT ANY WARRANTY; without even the implied warranty of
    #  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    #  GNU General Public License for more details.
    #
    #  You should have received a copy of the GNU General Public License with
    #  the Debian GNU/Linux distribution in file /usr/share/common-licenses/GPL;
    #  if not, write to the Free Software Foundation, Inc., 59 Temple Place,
    #  Suite 330, Boston, MA  02111-1307  USA
    #
    require 'getoptlong'
    require 'zlib'
    require 'ftools'
    $link = false
    def usage
      $stderr.puts "Usage: #{__FILE__} [-l] <source> <dest>"
      exit 1
    end
    def each (file, &block)
      fin = Zlib::GzipReader.open(file)
      fin.each do |line|
        yield line
      end
      fin.close
    end
    def each_file (file, &block)
      each(file) do |line|
        if /Filename: (.*)/ =~ line
          yield $1
        end
      end
    end
    def each_sourcefile (file, &block)
      dir = nil
      each(file) do |line|
        case line
        when /^Directory: (.*)$/
          dir = $1
        when /^ \S+ \d+ (\S+)$/
          yield dir + "/" + $1
        end
      end
    end
    def calc_relpath (source, dest)
      pwd = Dir::pwd
      Dir::chdir source
      source = Dir::pwd
      Dir::chdir pwd
      Dir::chdir dest
      dest = Dir::pwd
      Dir::chdir pwd
      src_ary = source.split("/")
      src_ary.shift
      dest_ary = dest.split("/")
      dest_ary.shift
      return dest if src_ary[0] != dest_ary[0]
      src_ary.clone.each_index do |i|
        break if src_ary[0] != dest_ary[0]
        src_ary.shift
        dest_ary.shift
      end
      src_ary.size.times do |i|
        dest_ary.unshift("..")
      end
      dest_ary.join("/")
    end
    def do_copy(path)
      if $link
        pwd=calc_relpath(File.dirname($dest_dir + "/" + path), $source_dir)
        File.symlink(pwd + "/" + path, $dest_dir + "/" + path)
      else
        File.copy($source_dir + "/" + path, $dest_dir + "/" + path)
      end
    end
    def copy(path)
      s=$source_dir + "/" + path
      d=$dest_dir + "/" + path
      if FileTest.exist?(d)
        $stats["ignore"] += 1
        return
      end
      if FileTest.exist?(s)
        File.mkpath(File.dirname(d))
        do_copy(path)
        $stats["copy"] += 1
      else
        $stats["notfound"] += 1
        $stderr.puts s + " not found."
      end
    end
    opts = GetoptLong.new(["--symlink", "-l", GetoptLong::NO_ARGUMENT],
    		      ["--help", "-h", GetoptLong::NO_ARGUMENT])
    opts.each do |opt,arg|
      case opt
      when "--symlink"
        $link = true
      when "--help"
        usage
      end
    end
    usage if ARGV.size != 2
    $source_dir = ARGV.shift
    $dest_dir = ARGV.shift
    if $link
      $source_dir = Dir::pwd + "/" + $source_dir unless $source_dir =~ /\A\//
      $dest_dir = Dir::pwd + "/" + $dest_dir unless $dest_dir =~ /\A\//
    end
    $stats = {}
    $stats["ignore"] = 0
    $stats["copy"] = 0
    $stats["notfound"] = 0
    open("|find #{$dest_dir}/dists -name Packages.gz") do |o|
      o.each_line do |file|
        file.chomp!
        print "Processing #{file}... "
        $stdout.flush
        each_file(file) do |path|
          copy(path)
        end
        puts "done"
      end
    end
    open("|find #{$dest_dir}/dists -name Sources.gz") do |o|
      o.each_line do |file|
        file.chomp!
        print "Processing #{file}... "
        $stdout.flush
        each_sourcefile(file.chomp) do |path|
          copy(path)
        end
        puts "done"
      end
    end
    puts "Number of Copied Files: " + $stats["copy"].to_s
    puts "Number of Ignored Files: " + $stats["ignore"].to_s
    puts "Number of Non-existence File: " + $stats["notfound"].to_s
    
  • How to turn on rsh and rlogin on RedHat Enterprise Linux (RHEL 2.1/ 3.0)

    Enable them:

    Turn on these three using chkconfig on both the nodes: rexec, rsh and rlogin.

    # chkconfig rexec on
    # chkconfig rsh on
    # chkconfig rlogin on

    xinetd

    Restart xinetd to be sure.

    # service xinetd restart

    .rhosts

    On hostA’s root home directory (usually /root), create a .rhosts file, which has hostB in it.

    # cat .rhosts
    hostB

    Similarly, create a .rhosts on hostB’s root home directory which has hostA in it.

    # cat .rhosts
    hostA

    hosts.allow

    Now, edit /etc/hosts.allow on hostA:

    #
    # hosts.allow This file describes the names of the hosts which are
    # allowed to use the local INET services, as decided
    # by the ‘/usr/sbin/tcpd’ server.
    #
    ALL : hostB

    Edit /etc/hosts.allow on hostB:

    #
    # hosts.allow This file describes the names of the hosts which are
    # allowed to use the local INET services, as decided
    # by the ‘/usr/sbin/tcpd’ server.
    #
    ALL : hostA

    hosts.equiv

    Edit /etc/hosts.equiv on hostA to have

    # cat /etc/hosts.equiv
    hostB

    Edit /etc/hosts.equiv on hostB to have

    # cat /etc/hosts.equiv
    hostA

    /etc/securetty

    And finally, knock off /etc/securetty (rename it or worse, purge it) on both hostA and hostB

    Now you are good to go.

    Disclaimer: Use at your own risk. Don’t flame me. It sure worked for me. Actual results may vary. Use ssh in place of rlogin/rsh/telnet and the like, as ssh is more secure.

  • Fail Login Configuration

    1. Open the /etc/pam.d/system-auth file for editing.
    Ensure you have a backup of the file before editing it.

    2. Add the following lines:

    auth required pam_tally.so no_magic_root
    account required pam_tally.so deny=2 no_magic_root

    here the value of deny sets how many failed login attempts are allowed before the account is locked.

    3. Save the file and exit.
    4. Test the configuration by attempting to login as a normal user, but using a wrong password.
    5. Verify the failed count increments by running the command:

    faillog -u username
    6. To disable faillog for one particular user: faillog -m -1 -u username

  • ssh using keys.

    Here I will try to demonstrate how to use ssh keys to log in to machines without a password.
    Since I have not yet got it working with PuTTY, I will do it with two unix machines and will soon continue this post with the PuTTY configuration.

    1. Check whether the ssh server is installed on your machine. If not, download the openssh-clients and openssh-server packages from the respective download site.

    2. create public and private keys using ssh-keygen

    user@home$ ssh-keygen -t dsa ## this will create the public and private keys.

    3. scp the public key to the remote host you want to access without a password.
    user@home$ scp .ssh/id_dsa.pub user@machineB:~/.ssh/authorized_keys ## from machine A to machine B (note: this overwrites any existing authorized_keys on machine B).

    4. Now log in from machine A to machine B and check: it will work without a password.

    Points:
    1. You must log in from the account where you keep the private key; if you try to log in from a different account your private key won’t be there, and you will be thrown back to a password prompt.
    2. Check that the permissions of the .ssh directory are 700 and that the authorized_keys file is 600, or else it won’t work.
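
    The permission fix can be scripted. A sketch that uses a throwaway directory (so it is safe to run anywhere) instead of the real ~/.ssh:

    ```shell
    # stand-in for ~/.ssh; apply the permissions sshd insists on
    demo=$(mktemp -d)/dot-ssh
    mkdir -p "$demo"
    touch "$demo/authorized_keys"
    chmod 700 "$demo"                  # directory: rwx for the owner only
    chmod 600 "$demo/authorized_keys"  # file: rw for the owner only
    stat -c '%a %n' "$demo" "$demo/authorized_keys"
    ```

    sshd’s StrictModes setting (on by default) refuses key authentication when these files are group- or world-accessible, which is why a wrong mode silently drops you back to the password prompt.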

  • Do we know the world.

    Indeed a very good article from the Sunday Times about how our perceptions change with time – how much we know, and how much we still have to learn. Basically, we have to see a lot more to understand.

    Written by : Shobhan Saxena
    [ 11 Feb, 2007 0046 hrs IST, TIMES NEWS NETWORK ]

     Reality is a question of perspective. It depends on your location on the GPS. Earlier, people with yellow hair and blue eyes believed that all Indians had a tiger in their backyard and filthy men made venomous cobras dance. We hated this kind of Orientalism.

    We always believed that we had too much culture here and we didn't need to learn anything from anyone, at least not from the "ignorant" West which saw us as a nation of medieval freaks. Now, with the changing times, the perceptions about us have changed.

    Now the world probably thinks we all live in slums, smell of curry, speak in funny accents, work in call centres and leak customer data for money. We don't like this. We feel others do not understand us. But we seem to be more ignorant of the world than the world is of us.

    That's why when two Indian hacks go to Kabul to make a film, they get into trouble with the quintessential side-kick Arshad Warsi cracking some jokes about Afghan men liking other men and the Hazaras being ruthless barbarians who kill people by "stroking long, rusted nails into their heads".

    Funny, isn't it? Not for the Afghans who banned Kabul Express. Imagine going to Afghanistan, standing in a fallow land which has turned red due to an eternal war and indulging in some gay-bashing.
     Our angle is so skewed that we miss the complete picture: This land has been a crucible of global wars from the Great Game between the Tsars and the British, the Cold War, the bloody battles between the Russians and the Mujahideen and the ideological clashes between the leftists and the religious zealots.

    We know nothing about their music, poetry and food. We know nothing about their customs and language. The only thing we know about them is that they like to kill each other and they love to play Buzkashi, a game where wild horsemen fight over a dead goat.

    We know that much because we saw Mr Stallone playing the game in Rambo III. We understand our next-door neighbours through Hollywood.

    We cry till hoarse about the world stereotyping us as "the Indians", but the fact is that we don't understand the world as it exists. Forget Paraguay and Morocco, our understanding of China is quite warped.

    Ask an average Indian about China and he would probably say: chow mein. We see China, the world's biggest nation, as the land of noodles, fried cockroaches and snake soup.
     The middle classes may associate China with new age mumbo-jumbo like Feng Shui, Tai-chi and the Laughing Buddha, and a booming economy that shines in the Shanghai skyscrapers. But that's it. We dismiss Japan, the world's second biggest economy, in a few words: judo-karate, Su-Doku, haiku, sushi, saké, kamikazes and harakiri.

    Of course, we know about their cars and electronic watches. That's it. For us, Brazil, the biggest Latin country, that's three times the size of India, is a nation of semi-nude, samba dancers and crazy footballers. In our imagination, Argentina means Maradona. That's it.

    A nation is an imagined community. The world lives in our imagination. The "others" are imagined people. But, so limited is our imagination about the others that we don't think beyond certain stereotypes.

    We associate the Australians with kangaroos, the Russians with vodka, the French with romance, the Italians with fashion, the Latinos with sex and the Africans with HIV. And the Middle East is all about oil and beauties behind the veil. You cannot have an imagination worse than this.

    We don't know what we are missing. China's rich culture rivals ours: thoughts from Confucius to Mao Zedong, writers from Zhuang Zi to Nobel laureate Gao Xingjian, short poetry, long operas, Mandarin guitar and classical music. It's quite sickening to reduce Brazil to a carnival of hot babes on its beaches.
     It's a melting pot of cultures: from Europe, Africa, Asia and Amazon jungles. The beach is the most democratic place in Rio, where the rich and poor, homeless and intellectual, musicians and writers all meet and mix with each other.

    The country has great traditions of music and arts. And politics: one entire generation grew up fighting the military dictatorship. But we don't care to know and understand all this.

    In the age of globalisation, such a little understanding of the world is dangerous. Not for us, but for others: a white man straying into an Indian village is beaten to death for no reason; two Africans carrying meat in their bags are attacked for having "beef with them".

    It's a dangerous way of looking at other people. At one level, people are the same everywhere. They are all trapped in their human condition: living, liking and helping each other; loving, hating and destroying each other. But if we do not know the details of their life, they don't look real. They look like freaks.


    Anyone who has the power to make you believe absurdities has the power to make you commit injustices.
    – Voltaire
    http://om-prakash.blogspot.com

  • The 10 Commands we never use.

    It takes years, maybe decades, to master the commands available to you at the Linux shell prompt. Here are 10 that you may never have heard of or used. They are in no particular order. My favorite is mkfifo.

    1. pgrep, instead of:
      # ps -ef | egrep '^root ' | awk '{print $2}'
      1
      2
      3
      4
      5
      20
      21
      38
      39
      ...

      You can do this:

      # pgrep -u root
      1
      2
      3
      4
      5
      20
      21
      38
      39
      ...
    2. pstree, list the processes in a tree format. This can be VERY useful when working with WebSphere or other heavy duty applications.
      # pstree
      init-+-acpid
      |-atd
      |-crond
      |-cups-config-dae
      |-cupsd
      |-dbus-daemon-1
      |-dhclient
      |-events/0-+-aio/0
      | |-kacpid
      | |-kauditd

      | |-kblockd/0
      | |-khelper
      | |-kmirrord
      | `-2*[pdflush]
      |-gpm
      |-hald
      |-khubd
      |-2*[kjournald]
      |-klogd
      |-kseriod

      |-ksoftirqd/0
      |-kswapd0
      |-login---bash
      |-5*[mingetty]
      |-portmap
      |-rpc.idmapd
      |-rpc.statd
      |-2*[sendmail]
      |-smartd
      |-sshd---sshd---bash---pstree

      |-syslogd
      |-udevd
      |-vsftpd
      |-xfs
      `-xinetd
    3. bc is an arbitrary precision calculator language, which is great. I found it useful because it can perform square root operations in shell scripts; expr does not support square roots.
      # ./sqrt
      Usage: sqrt number
      # ./sqrt 64
      8
      # ./sqrt 132112
      363
      # ./sqrt 1321121321
      36347

      Here is the script:

      # cat sqrt
      #!/bin/bash
      if [ $# -ne 1 ]
      then
        echo 'Usage: sqrt number'
        exit 1
      else
        echo -e "sqrt($1)\nquit\n" | bc -q -i
      fi
    4. split, have a large file that you need to split into smaller chunks? A mysqldump maybe? split is your command. Below I split a 250MB file into 2-megabyte chunks, all starting with the prefix LF_.
      # ls -lh largefile
      -rw-r--r-- 1 root root 251M Feb 19 10:27 largefile
      # split -b 2m largefile LF_
      # ls -lh LF_* | head -n 5
      -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_aa
      -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_ab
      -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_ac
      -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_ad
      -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_ae
      # ls -lh LF_* | wc -l
      126
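      Because split names the pieces in lexical order (LF_aa, LF_ab, ...), a plain shell glob reassembles them; a round trip on a throwaway file shows it:

```shell
# Split a 5 MB file into 2 MB pieces, rejoin them, and verify the result.
dd if=/dev/zero of=largefile bs=1M count=5 2>/dev/null
split -b 2m largefile LF_
cat LF_* > rejoined              # glob order restores the original byte order
cmp largefile rejoined && echo "round trip OK"
rm -f largefile rejoined LF_*
```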
    5. nl numbers lines. I had a script doing this for me for years until I found out about nl.
      # head wireless.h
      /*
      * This file define a set of standard wireless extensions
      *
      * Version : 20 17.2.06
      *
      * Authors : Jean Tourrilhes - HPL
      * Copyright (c) 1997-2006 Jean Tourrilhes, All Rights Reserved.
      */

      #ifndef _LINUX_WIRELESS_H
      # nl wireless.h | head
      1 /*
      2 * This file define a set of standard wireless extensions
      3 *
      4 * Version : 20 17.2.06
      5 *
      6 * Authors : Jean Tourrilhes - HPL
      7 * Copyright (c) 1997-2006 Jean Tourrilhes, All Rights Reserved.
      8 */

      9 #ifndef _LINUX_WIRELESS_H
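      By default nl numbers only non-empty lines (as the gap above shows); -ba numbers every line:

```shell
printf 'alpha\n\nbeta\n' | nl       # the blank line is left unnumbered
printf 'alpha\n\nbeta\n' | nl -ba   # the blank line is counted too
```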
    6. mkfifo is the coolest one. Sure you know how to create a pipeline piping the output of grep to less or maybe even perl. But do you know how to make two commands communicate through a named pipe?

      First let me create the pipe and start writing to it:

      mkfifo pipe; tail file > pipe

      Then read from it:

      cat pipe
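      Put together, with the writer backgrounded so the two ends can meet (the path is arbitrary):

```shell
mkfifo /tmp/demo_pipe
echo "hello via fifo" > /tmp/demo_pipe &   # writer blocks until a reader opens
cat /tmp/demo_pipe                         # prints: hello via fifo
rm /tmp/demo_pipe
```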

    7. ldd, want to know which Linux thread library java is linked to?
      # ldd /usr/java/jre1.5.0_11/bin/java
      libpthread.so.0 => /lib/tls/libpthread.so.0 (0x00bd4000)
      libdl.so.2 => /lib/libdl.so.2 (0x00b87000)
      libc.so.6 => /lib/tls/libc.so.6 (0x00a5a000)

      /lib/ld-linux.so.2 (0x00a3c000)
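      ldd works on any dynamically linked binary, not just java; /bin/ls makes a handy test subject on a typical glibc system:

```shell
ldd /bin/ls | grep libc      # shows the C library line, e.g. libc.so.6 => /lib/...
```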
    8. col, want to save man pages as plain text?
      # PAGER=cat
      # man less | col -b > less.txt
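      What col -b actually does is resolve backspace overstrikes (man renders bold as char-backspace-char), which you can see by feeding it one by hand:

```shell
printf 'b\bbo\bol\bld\bd\n' | col -b    # prints: bold
```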
    9. xmlwf, need to know if an XML document is well formed? (A configuration file maybe..)
      # curl -s 'http://bashcurescancer.com' > bcc.html
      # xmlwf bcc.html
      # perl -i -pe 's@<br/>@<br>@g' bcc.html
      # xmlwf bcc.html
      bcc.html
      :104:2: mismatched tag
    10. lsof lists open files. You can do all kinds of cool things with this. Like find which ports are open:
      # lsof | grep TCP
      portmap 2587 rpc 4u IPv4 5544 TCP *:sunrpc (LISTEN)
      rpc.statd 2606 root 6u IPv4 5585 TCP *:668 (LISTEN)
      sshd 2788 root 3u IPv6 5991 TCP *:ssh (LISTEN)
      sendmail 2843 root 4u IPv4 6160 TCP badhd:smtp (LISTEN)
      vsftpd 9337 root 3u IPv4 34949 TCP *:ftp (LISTEN)
      cupsd 16459 root 0u IPv4 41061 TCP badhd:ipp (LISTEN)
      sshd 16892 root 3u IPv6 61003 TCP badhd.mshome.net:ssh->kontiki.mshome.net:4661 (ESTABLISHED)

      Or find the number of open files a user has. Very important for running big applications like Oracle, DB2, or WebSphere:

      # lsof | grep ' root ' | awk '{print $NF}' | sort | uniq | wc -l
      179
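      If lsof happens not to be installed, the per-process piece of that count can be read straight from /proc; here, the descriptors the current shell has open ($$ is the shell's own PID):

```shell
ls /proc/$$/fd | wc -l     # number of open file descriptors for this shell
```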

  • A Myth Called the Indian Software Programmer.

    This article has been taken from the Sunday Times – Mumbai edition, dated 18/02/2006.

    I am posting it here because it says a lot about the Indian software industry and the boom we have seen because of it.

    They are the poster boys of matrimonial classifieds. They are paid handsomely, perceived to be intelligent and travel abroad frequently. Single-handedly, they brought purpose to the otherwise sleepy city of Bangalore.

    Indian software engineers are today the face of a third-world rebellion. But what exactly do they do? That’s a disturbing question. Last week, during the annual fair of the software industry’s apex body Nasscom, no one uttered a word about India’s programmers.

    The event, which brought together software professionals from around the world, used up all its 29 sessions to discuss prospects to improve the performance of software companies. Panels chose to debate extensively on subjects like managing innovation, business growth and multiple geographies.

    But there was nothing on programmers, who you would imagine are the driving force behind the success of the Indian software companies. Perhaps you imagined wrong. “It is an explosive truth that local software companies won’t accept.

    Most software professionals in India are not programmers, they are mere coders,” says a senior executive from a global consultancy firm, who has helped Nasscom in researching its industry reports.

    In industry parlance, coders are akin to smart assembly line workers as opposed to programmers who are plant engineers. Programmers are the brains, the glorious visionaries who create things. Large software programmes that often run into billions of lines are designed and developed by a handful of programmers.

    Coders follow instructions to write, evaluate and test small components of the large program. As a computer science student at IIT Mumbai puts it, if programming requires a postgraduate-level knowledge of complex algorithms and programming methods, coding requires only a high-school knowledge of the subject.

    Coding is also the grunt job. It is repetitive and monotonous. Coders know that. They feel stuck in their jobs. They have fallen into the trap of the software hype and now realise that though their status is glorified in society, intellectually they are stranded.
    Companies do not offer them stock options anymore and their salaries are not growing at the spectacular rates at which they did a few years ago.

    “There is nothing new to learn from the job I am doing in Pune. I could have done it with some training even after passing high school,” says a 25-year-old who joined Infosys after finishing his engineering course in Nagpur.

    A Microsoft analyst says, “Like our manufacturing industry, the Indian software industry is largely a process driven one. That should speak for the fact that we still don’t have a domestic software product like Yahoo or Google to use in our daily lives.”

    IIT graduates have consciously shunned India’s best known companies like Infosys and TCS, though they offered very attractive salaries. Last year, from IIT Powai, the top three Indian IT companies got just 10 students out of the 574 who passed out.

    The best computer science students prefer to join companies like Google and Trilogy. Krishna Prasad from the College of Engineering, Guindy, Chennai, who did not bite Infosys’ offer, says, “The entrance test to join TCS is a joke compared to the one in Trilogy. That speaks of what the Indian firms are looking for.”

    A senior TCS executive, who requested anonymity, admitted that the perception of coders is changing even within the company. It is a gloomy outlook. He believes it has a lot to do with business dynamics.

    The executive, a programmer for two decades, says that in the late ’70s and early ’80s, software drew a motley set of professionals from all kinds of fields.

    In the mid-’90s, as onsite projects increased dramatically, software companies started picking all the engineers they could as the US authorities granted visas only to graduates who had four years of education after high school.
    “After Y2K, as American companies discovered India’s cheap software professionals, the demand for engineers shot up,” the executive says. Most of these engineers were coders. They were almost identical workers who sat long hours to write line after line of code, or test a fraction of a programme.

    They did not complain because their pay and perks were good. Now, the demand for coding has diminished, and there is a churning.

    Over the years, due to the improved communication networks and increased reliability of Indian firms, projects that required a worker to be at a client’s site, say in America, are dwindling in number. And with it the need for engineers who have four years of education after high school.

    Graduates from non-professional courses, companies know, can do the engineer’s job equally well. Also, over the years, as Indian companies have already coded for many common applications like banking, insurance and accounting, they have created libraries of code which they reuse.

    Top software companies have now started recruiting science graduates who will be trained alongside engineers and deployed in the same projects. The CEO of India’s largest software company TCS, S Ramadorai, had earlier explained, “The core programming still requires technical skills.

    But, there are other jobs we found that can be done by graduates.” NIIT’s Arvind Thakur says, “We have always maintained that it is the aptitude and not qualifications that is vital for programming. In fact, there are cases where graduate programmers have done better than the ones from the engineering stream.”

    Software engineers are increasingly getting dejected. Sachin Rao, one of the coders stuck in the routine of a job that no longer excites him, has been toying with the idea of moving out of Infosys but cannot find a different kind of “break”, given his coding experience.

    He sums up his plight by vaguely recollecting a story in which thousands of caterpillars keep climbing a wall, the height of which they don’t know. They clamber over each other, fall, start again, but keep climbing. They don’t know that they can eventually fly.

    Rao cannot remember how the story ends but feels the coders of India today are like the caterpillars who plod their way through while there are more spectacular ways of reaching the various destinations of life.

  • Remote Logins – Telnet

    An answer found from Linux Gazette for the question on Remote Logins and su.

    Q. I am running Red Hat Linux 6.1 and am encountering some problems. I can log in as root from the console but not from anywhere else; I have to log in as webmaster on all other machines on the network. From nowhere, including the console, can I su once logged in as webmaster. Any help would be appreciated.

    Ans.:
    Any of these should allow you to access your system through cryptographically secured authentication and session protocols that protect you from a variety of sniffing, spoofing, TCP hijacking and other vulnerabilities that are common with other forms of remote shell access (such as telnet, and the infamous rsh and rlogin packages).

    If you really insist on eliminating these policies from your system you can edit files under /etc/pam.d that are used to configure the options and restrictions of the programs that are compiled against the PAM (pluggable authentication modules) model and libraries. Here’s an example of one of them (/etc/pam.d/login which is used by the in.telnetd service):

    #
    # The PAM configuration file for the Shadow `login' service
    #
    # NOTE: If you use a session module (such as kerberos or NIS+)
    # that retains persistent credentials (like key caches, etc), you
    # need to enable the `CLOSE_SESSIONS' option in /etc/login.defs
    # in order for login to stay around until after logout to call
    # pam_close_session() and cleanup.
    #

    # Outputs an issue file prior to each login prompt (Replaces the
    # ISSUE_FILE option from login.defs). Uncomment for use
    # auth required pam_issue.so issue=/etc/issue

    # Disallows root logins except on tty's listed in /etc/securetty
    # (Replaces the `CONSOLE' setting from login.defs)
    auth requisite pam_securetty.so

    # Disallows other than root logins when /etc/nologin exists
    # (Replaces the `NOLOGINS_FILE' option from login.defs)
    auth required pam_nologin.so

    # This module parses /etc/environment (the standard for setting
    # environ vars) and also allows you to use an extended config
    # file /etc/security/pam_env.conf.
    # (Replaces the `ENVIRON_FILE' setting from login.defs)
    auth required pam_env.so

    # Standard Un*x authentication. The "nullok" line allows passwordless
    # accounts.
    auth required pam_unix.so nullok

    # This allows certain extra groups to be granted to a user
    # based on things like time of day, tty, service, and user.
    # Please uncomment and edit /etc/security/group.conf if you
    # wish to use this.
    # (Replaces the `CONSOLE_GROUPS' option in login.defs)
    # auth optional pam_group.so

    # Uncomment and edit /etc/security/time.conf if you need to set
    # time restrainst on logins.
    # (Replaces the `PORTTIME_CHECKS_ENAB' option from login.defs
    # as well as /etc/porttime)
    # account requisite pam_time.so

    # Uncomment and edit /etc/security/access.conf if you need to
    # set access limits.
    # (Replaces /etc/login.access file)
    # account required pam_access.so

    # Standard Un*x account and session
    account required pam_unix.so
    session required pam_unix.so

    # Sets up user limits, please uncomment and read /etc/security/limits.conf
    # to enable this functionality.
    # (Replaces the use of /etc/limits in old login)
    # session required pam_limits.so

    # Prints the last login info upon succesful login
    # (Replaces the `LASTLOG_ENAB' option from login.defs)
    session optional pam_lastlog.so

    # Prints the motd upon succesful login
    # (Replaces the `MOTD_FILE' option in login.defs)
    session optional pam_motd.so

    # Prints the status of the user's mailbox upon succesful login
    # (Replaces the `MAIL_CHECK_ENAB' option from login.defs). You
    # can also enable a MAIL environment variable from here, but it
    # is better handled by /etc/login.defs, since userdel also uses
    # it to make sure that removing a user, also removes their mail
    # spool file.
    session optional pam_mail.so standard noenv

    # The standard Unix authentication modules, used with NIS (man nsswitch) as
    # well as normal /etc/passwd and /etc/shadow entries. For the login service,
    # this is only used when the password expires and must be changed, so make
    # sure this one and the one in /etc/pam.d/passwd are the same. The "nullok"
    # option allows users to change an empty password, else empty passwords are
    # treated as locked accounts.
    #
    # (Add `md5' after the module name to enable MD5 passwords the same way that
    # `MD5_CRYPT_ENAB' would do under login.defs).
    #
    # The "obscure" option replaces the old `OBSCURE_CHECKS_ENAB' option in
    # login.defs. Also the "min" and "max" options enforce the length of the
    # new password.

    password required pam_unix.so nullok obscure min=4 max=8

    # Alternate strength checking for password. Note that this
    # requires the libpam-cracklib package to be installed.
    # You will need to comment out the password line above and
    # uncomment the next two in order to use this.
    # (Replaces the `OBSCURE_CHECKS_ENAB', `CRACKLIB_DICTPATH')
    #
    # password required pam_cracklib.so retry=3 minlen=6 difok=3
    # password required pam_unix.so use_authtok nullok md5

    This is from a Debian machine (mars.starshine.org) and thus has far more comments (all those lines starting with “#” hash marks) than those that Red Hat installs. It’s good that Debian comments these files so verbosely, since that’s practically the only source of documentation for PAM files and modules.

    In this case the entry that you really care about is the one for ‘securetty.so’. This module checks the file /etc/securetty, which is classically a list of those terminals on which your system will allow direct root logins.

    You could comment out this line in /etc/pam.d/login to disable this check for those services which call the /bin/login command. You can look for similar lines in the various other /etc/pam.d files to see which other services are enforcing this policy.

    This leads us to the question of why your version of ‘su’ is not working. Red Hat’s version of ‘su’ is probably also “PAMified” (almost certainly, in fact). So there should be a /etc/pam.d/su file that controls the list of policies that your copy of ‘su’ is checking. You should look through that to see why ‘su’ isn’t allowing your ‘webmaster’ account to become ‘root’.

    It seems quite likely that your version of Red Hat contains a line something like:

    # Uncomment this to force users to be a member of group root
    # before they can use `su'. You can also add "group=foo"
    # to the end of this line if you want to use a group other
    # than the default "root".
    # (Replaces the `SU_WHEEL_ONLY' option from login.defs)
    auth required pam_wheel.so

    Classically the ‘su’ commands on most versions of UNIX required that a user be in the “wheel” group in order to attain ‘root’. The traditional GNU implementation did not enforce this restriction (since rms found it distasteful).

    On my system this line was commented out (which is presumably the Debian default policy, since I never fussed with that file on my laptop). I’ve uncommented it here for this example.

    Note that one of the features of PAM is that it allows you to specify any group using a command line option. It defaults to “wheel” because that is an historical convention. You can also use the pam_wheel.so module on any of the PAMified services --- so you could have programs like ‘ftpd’ or ‘xdm’ enforce a policy that restricted their use to members of arbitrary groups.

    Finally, note that most recent versions of SSH have PAM support enabled when they are compiled for Linux systems. Thus you may find, after you install any version of SSH, that you have an /etc/pam.d/ssh file. You may have to edit that to set some of your preferred SSH policies. There is also an sshd_config file (mine’s in /etc/ssh/sshd_config) that will allow you to control other ssh options.

    In general the process of using ssh works something like this:

    1. Install the sshd (daemon) package on your servers (the systems that you want to access)
    2. Install the ssh client package on your clients (the systems from which you’d like to initiate your connections).
    3. Generate Host keys on all of these systems (normally done for you by the installation).

    …. you could stop at this point, and just start using the ssh and slogin commands to access your remote accounts using their passwords. However, for more effective and convenient use you’d also:

    1. Generate personal key pairs for your accounts.
    2. Copy/append the identity.pub (public) keys from each of your client accounts into the ~/.ssh/authorized_keys files on each of the servers.

    This allows you to access those remote accounts without using your passwords on them. (Actually sshd can be configured to require the passwords AND/OR the identity keys, but the default is to allow access without a password if the keys work).
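    In practice the two key steps might look like this (user and server are placeholders; the empty passphrase is for brevity only, and modern OpenSSH also offers ssh-copy-id for step 2):

```shell
# 1. Generate a personal key pair.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
# 2. Append the public key to the server's authorized_keys file.
cat ~/.ssh/id_rsa.pub | ssh user@server 'cat >> ~/.ssh/authorized_keys'
```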

    Another element you should be aware of is the “passphrases” and the ssh-agent. Basically it is normal to protect your private key with a passphrase. This is sort of like a password --- but it is used to decrypt or “unlock” your private key. Obviously there isn’t much added convenience if you protect your private key with a passphrase so that you have to type that every time you use an ssh/slogin or scp (secure remote copy) command.

    ssh-agent allows you to start a shell or other program, unlock your identity key (or keys), and have all of the ssh commands you run from any of the descendants of that shell or program automatically use any of those unlocked keys. (The advantage of this is that the agent automatically dies when you exit the shell or program that you started. That automatically “locks” the identity --- sort of.)
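    A minimal agent session looks something like this (the key path is illustrative):

```shell
eval "$(ssh-agent -s)"    # start the agent; exports SSH_AUTH_SOCK and SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa     # unlock the key once; prompts for its passphrase
ssh-add -l                # list the identities the agent is holding
ssh-agent -k              # kill the agent when you are done
```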

    There are a lot of other aspects to ssh. It can be used to create tunnels, through which one can run all sorts of traffic. People have created PPP/TCP/IP tunnels that run through ssh tunnels to support custom VPNs (virtual private networks). When run under X, ssh automatically performs “X11 forwarding” through one of these tunnels. This is particularly handy for running X clients on remote systems beyond a NAT (IP Masquerading) router or through a proxying firewall.

    In other words ssh is a very useful package quite apart from its support for cryptographic authentication and encryption.

    In fairness I should point out that there are a number of alternatives to ssh. Kerberos is a complex and mature suite of protocols for performing authentication and encryption. STEL is a simple daemon/client package which functions just like telnetd/telnet --- but with support for encrypted sessions. And there are SSL-enabled versions of the telnet and ftp daemons and clients.

  • How do I lock out a user after a set number of login attempts?

    The PAM (Pluggable Authentication Module) module pam_tally keeps track of unsuccessful login attempts then disables user accounts when a preset limit is reached. This is often referred to as account lockout.

    To lock out a user after 4 attempts, two entries need to be added to the /etc/pam.d/system-auth file:

    auth        required        /lib/security/$ISA/pam_tally.so onerr=fail no_magic_root
    account     required        /lib/security/$ISA/pam_tally.so deny=3 no_magic_root reset

    The options used above are described below:

    • onerr=fail
      If something strange happens, such as being unable to open the counter file, this option determines how the module should react.
    • no_magic_root
      This is used to indicate that if the module is invoked by a user with uid=0, then the counter is incremented. The sys-admin should use this for daemon-launched services, like telnet/rsh/login.
    • deny=3
      The deny=3 option is used to deny access if the tally for this user exceeds 3.
    • reset
      The reset option instructs the module to reset count to 0 on successful entry.

    See below for a complete example of implementing this type of policy:

    auth        required      /lib/security/$ISA/pam_env.so
    auth        required      /lib/security/$ISA/pam_tally.so onerr=fail no_magic_root
    auth        sufficient    /lib/security/$ISA/pam_unix.so likeauth nullok
    auth        required      /lib/security/$ISA/pam_deny.so
    account     required      /lib/security/$ISA/pam_unix.so
    account     required      /lib/security/$ISA/pam_tally.so deny=5 no_magic_root reset
    password    requisite     /lib/security/$ISA/pam_cracklib.so retry=3
    password    sufficient    /lib/security/$ISA/pam_unix.so nullok use_authtok md5 shadow
    password    required      /lib/security/$ISA/pam_deny.so
    session     required      /lib/security/$ISA/pam_limits.so
    session     required      /lib/security/$ISA/pam_unix.so

    For more detailed information on the PAM system please see the documentation contained under /usr/share/doc/pam-

    For information on how to unlock a user that has expired their deny tally see additional Knowledgebase articles regarding unlocking a user account and seeing failed logins with the faillog command.

    contributed by David Robinson