Friday, July 22, 2011

http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch07_:_The_Linux_Boot_Process


RPM usage

# rpm -ivh foo-2.0-4.i386.rpm

# rpm -i ftp://ftp.redhat.com/pub/redhat/RPMS/foo-1.0-1.i386.rpm

# rpm -i http://oss.oracle.com/projects/firewire/dist/files/kernel-2.4.20-18.10.1.i686.rpm

Used to install an RPM package. Note that RPM package files follow a naming convention like foo-2.0-4.i386.rpm, which encodes the package name (foo), version (2.0), release (4), and architecture (i386). Also notice that rpm understands the FTP and HTTP protocols, so it can install and query remote RPM files directly.

# rpm -e foo

To uninstall an RPM package. Note that we use the package name foo, not the name of the original package file foo-2.0-4.i386.rpm above.
# rpm -Uvh foo-1.0-2.i386.rpm

# rpm -Uvh ftp://ftp.redhat.com/pub/redhat/RPMS/foo-1.0-1.i386.rpm

# rpm -Uvh http://oss.oracle.com/projects/firewire/dist/files/kernel-2.4.20-18.10.1.i686.rpm

To upgrade an RPM package. With this command, rpm automatically uninstalls the old version of the foo package and installs the new one. It is safe to always use rpm -Uvh to install and upgrade packages, since it works even when no previous version of the package is installed! Again, rpm understands the FTP and HTTP protocols, so it can upgrade from remote RPM files.
# rpm -qa

To query all installed packages. This command prints the names of all packages installed on your Linux system.
# rpm -q foo

To query an RPM package. This command prints the package name, version, and release number of the package foo, but only if it is installed. Use it to verify whether a package is installed on your Linux system.
# rpm -qi foo

To display package information. This command displays package information including the name, version, and description of the installed program. Use it to get detailed information about an installed package.
# rpm -ql foo

To list the files in an installed package. This command lists all of the files in an installed RPM package. It works only when the package is already installed on your Linux system.
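The name-version-release.arch.rpm convention described above can be taken apart with plain POSIX shell string manipulation, with no rpm binary needed. Below is a small sketch; the split_rpm_name helper and its output format are illustrative, not part of the rpm tool itself:

```shell
#!/bin/sh
# Split an RPM file name of the form name-version-release.arch.rpm
# into its four parts, using only POSIX parameter expansion.
split_rpm_name() {
    f=${1%.rpm}          # drop the .rpm suffix
    arch=${f##*.}        # text after the last dot: architecture
    f=${f%.*}            # strip the architecture
    release=${f##*-}     # text after the last dash: release
    f=${f%-*}            # strip the release
    version=${f##*-}     # next dash-delimited field: version
    name=${f%-*}         # everything before it: package name
    echo "$name $version $release $arch"
}

split_rpm_name foo-2.0-4.i386.rpm    # prints: foo 2.0 4 i386
```

Because the split works backwards from the suffix, it also copes with dots inside the version and release fields, as in kernel-2.4.20-18.10.1.i686.rpm.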

Thursday, July 7, 2011

Source Code Browsing In Linux-CTAGS usage

How to use the ctags command in Linux, explained in simple steps.
ctags is a good utility for browsing source code in Linux through the vi editor. To use ctags, enter the following in the source code directory:

# ctags -uR *

This will create a tags file in the source code directory.

Open a file from that source tree in vi and point the editor at the tags file with "ESC + :set tags=tags" (this is the path of the tags file; if it is at a different level, give the relative path, such as ../tags or ../../tags).

Once the tags file is set, you can jump to function definitions, variable declarations, and so on by placing the cursor on a function call or variable and pressing "CTRL + ]". To come back to the same place, press the "CTRL + t" key sequence.

Follow the ctags documentation for more insight.
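The tags file that ctags writes is plain text: one tab-separated line per symbol, holding the tag name, the file it lives in, and the search command vi uses to jump there. A quick way to look up where a symbol is defined without opening vi is to query the file with awk. A small sketch, using a faked one-line tags file (the symbol name and path are illustrative):

```shell
#!/bin/sh
# A tags file line looks like: name<TAB>file<TAB>ex-command
# Fake a one-line tags file and look a symbol up in it.
printf 'main\tsrc/main.c\t/^int main(void)$/\n' > tags

lookup_tag() {
    awk -F'\t' -v sym="$1" '$1 == sym { print $2 }' tags
}

lookup_tag main    # prints: src/main.c
```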

Steps For Generating Document On Doxygen

Generating Document using doxygen

In simple steps:
1) In every .c or .h file, put this comment after the includes:
---------------------------------------------------------------------
/** \file filename.ext
* \brief some notes about this file.
*
* A more extensive description of this file.
*/

Example:
/** \file function.h
* \brief This file contains prototypes for functions defined in function.c.
*
* This file contains prototypes for functions defined in function.c.
*
*/

NOTE: A period '.' is necessary after the \brief sentence.

2) Before every function put this comment:
---------------------------------------------------
/** \brief A brief description of my_function().
*
* A more extensive description of my_function().
*
* \param aParameter A brief description of aParameter.
* \param bParameter B brief description of bParameter.
* \return A brief description of what my_function() returns.
*/

Example:
/** \brief get position of field in configfile.
*
* This function returns the position of the
* field in the config file ...
*
*
* \param i index of field.
*
* \return returns position of the field in configfile.
*
*/

3) Just before each variable, put these comments:
------------------------------------------------------------
Example:
/** \brief A brief description of myVariable.
*
* A more extensive description of myVariable.
*/

int myVariable;

Note: use it only for important variables.

4) Just before each enumeration, put these comments:
----------------------------------------------------------------
/**
* \enum some description about this enum.
*/

Beside each field write some description inside /**< description */
Example:
enum Fields {
Factory_State, /**< Factory State Flag */
Login_Password, /**< Password required to login */
Model_Name, /**< Model Name of the SerialServer */
MAC_Address, /**< MAC Address of the SerialServer */
. . . .
};

5) Just before each structure, put these comments:
------------------------------------------------------------
/**
* \struct some description about this struct.
*/
Beside each field, write some description inside /**< description */
Example:
struct nw2serial_s {
struct mcs7840 * mcs7840_dev; /**< serial device structure */
spinlock_t lock; /**< spinlock for list operations */
unsigned char number; /**< unknown */
. . . . .
};
Note: for enum and typedef just change the \struct tag to \enum or \typedef.

Project description for the main page:
--------------------------------------------
In the main file, keep this description at the beginning of the file.
/**
* \mainpage
* write description here.
*
*
*
*/

For TODO put this comment:
-----------------------------------
/**
* \todo keep todo description here.
*
*/

For more tags, refer to the doxygen manual.

Creating documents:
--------------------------------------------

cd to the directory containing the source files and type:

$doxygen -g

example:

$doxygen -g ssdoc

This will create a configuration file called ssdoc.

Open ssdoc and modify it.

Set the PROJECT_NAME and PROJECT_NUMBER tags.

Example:

PROJECT_NAME = MCS8140-SS-16S

PROJECT_NUMBER = 1.0.0.2

If you want custom header, footer, and CSS files, then run

$doxygen -w html header.html footer.html stylesheet.css

This will create the header.html, footer.html, and stylesheet.css files.
Now set the HTML_HEADER, HTML_FOOTER, and HTML_STYLESHEET tags in the configuration file (ssdoc).
Example:

HTML_HEADER = header.html
HTML_FOOTER = footer.html
HTML_STYLESHEET = stylesheet.css

Finally, run doxygen:

$doxygen ssdoc

This will create two folders ./html and ./latex

For html document open ./html/index.html file.

To get an image above the document, modify the header.html file and run doxygen again.
Modify the CSS to get different colors.

Creating a PDF document:
--------------------------------------
To create a PDF document, just cd to the ./latex folder
and type

$make

This will create refman.pdf in the ./latex folder.

Wednesday, July 6, 2011

How to change MAC address in Linux

First, find the physical MAC address of your machine by running the following command:

$ ifconfig -a | grep HWaddr
eth0 Link encap:Ethernet HWaddr 00:80:48:BA:d1:20


The hexadecimal numbers after HWaddr are my machine's MAC address; yours will be different.

Next, log in as root and enter the following commands:

# ifconfig eth0 down
# ifconfig eth0 hw ether 00:80:48:BA:d1:30
# ifconfig eth0 up
# ifconfig eth0 |grep HWaddr


Note above that I have changed the MAC address to a different value: 00:80:48:BA:d1:30 is the new MAC address I have given my Linux machine. You can choose almost any 48-bit hexadecimal address as your MAC address, as long as it is a valid unicast address (the least significant bit of the first octet must be 0).
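If you just want a throwaway address rather than a hand-picked one, a valid MAC can be generated in plain shell. A small sketch (the gen_mac function is illustrative): the first octet is forced to 02, which sets the locally-administered bit and keeps the multicast bit clear, so the result is always a usable unicast address.

```shell
#!/bin/sh
# Generate a random MAC with the locally-administered bit set
# and the multicast bit clear (first octet is 02).
gen_mac() {
    awk 'BEGIN {
        srand();                       # seed from the current time
        printf "02";                   # fixed, valid first octet
        for (i = 0; i < 5; i++)
            printf ":%02X", int(rand() * 256);
        printf "\n";
    }'
}

mac=$(gen_mac)
echo "$mac"
# Then apply it:  ifconfig eth0 hw ether "$mac"
```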

Why you should change MAC address of your Linux machine

Here are some reasons to change the MAC address of your machine:

* For privacy - For instance when you are connecting to a Wi-Fi hotspot.
* To ensure interoperability. Some internet service providers bind their service to a specific MAC address; if the user then changes their network card or intends to install a router, the service won't work anymore. Changing the MAC address of the new interface will solve the problem.

Saturday, July 2, 2011

ping

One of the most recognized utilities is the ping command. The ping command
can be used in your IP network to assist in determining whether or not an IP
addressed node is reachable.
ping sends an echo request within an Internet Control Message Protocol
(ICMP) packet. Once the echo request has been sent, the device that sent the
ping will monitor for a reply to the echo request. Once the reply is received,
the results are measured and the following statistics are recorded and printed
on the screen:
Packet loss (if any)
The time it takes for the data to make a round trip (to and from the
destination or target node)
Statistics gathered during the ping session
Here is an example of a typical successful ping session:
C:\>ping 64.233.167.99
Pinging 64.233.167.99 with 32 bytes of data:
Reply from 64.233.167.99: bytes=32 time=44ms TTL=235
Reply from 64.233.167.99: bytes=32 time=38ms TTL=235
Reply from 64.233.167.99: bytes=32 time=37ms TTL=236
Reply from 64.233.167.99: bytes=32 time=37ms TTL=235
Ping statistics for 64.233.167.99:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 37ms, Maximum = 44ms, Average = 39ms
In the example, the host issues an echo request to the target IP and
received the reply. The reply was 32 bytes in size. There were a total of four
echo request packets sent with 100 percent success. The average round trip
was 39 ms.
Unfortunately, because of our friends the ''Ker'' brothers (see Chapter 14),
many network administrators are now setting filters to not accept ICMP
echo request packets. This choice is mainly because of the growing concern
about Internet worms that use ping to locate nodes that they can attack.
By not accepting the echo requests, the node is less vulnerable to attacks
than if it did accept them. This makes the ping utility useless when trying
to troubleshoot issues with the filtered interface and may therefore lead
to a misleading diagnosis of problems in the network. Also keep in mind that
filtering these packets is only an annoyance for the Kers . . . they can still get to
the interface if they really want to.
The format of the ICMP echo request and reply packets are shown in
Figure 16-1.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+---------------+---------------+-------------------------------+
|     Type      |     Code      |           Checksum            |
+---------------+---------------+-------------------------------+
|          Identifier           |        Sequence Number        |
+-------------------------------+-------------------------------+
|                           Data ...                            |
+---------------------------------------------------------------+

Figure 16-1 The ICMP echo reply/request datagram format
As mentioned already, the ICMP echo reply is returned for any ICMP echo
requests that are sent to the target node. The target node must respond to
echo requests when it can, and the reply will contain the data that was sent
to it from the originating node.
The echo request datagram type will be set to 8.
The echo reply will have a datagram type set to 0.
The code field will be set to 0 for both the request and reply.
The Identifier and the Sequence Number fields are used to
ensure that the proper reply is matched to the proper request.
The data field in the request and reply must contain the same data.
The ping command will also give you an idea of what the problem may be
when you are not able to get a valid response. The two error messages
that you may receive when you cannot reach your target are:
Request timed out: There was no reply from the host.
Destination host unreachable: There is no route to the destination.
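Statistics lines like the ones in the session above are easy to post-process in scripts. Below is a sketch that saves the sample statistics to a file and pulls out the packet-loss figure and the average round-trip time with awk; the transcript is the one shown earlier, and the field positions assume that Windows-style output format:

```shell
#!/bin/sh
# Extract the loss percentage and average RTT from saved ping output.
cat > ping.log <<'EOF'
Ping statistics for 64.233.167.99:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 37ms, Maximum = 44ms, Average = 39ms
EOF

# Split the Lost line on '(' and '%' to isolate the loss figure.
loss=$(awk -F'[(%]' '/Lost/ { print $2 }' ping.log)
# Everything after "Average = " is the average RTT.
avg=$(awk -F'Average = ' '/Average/ { print $2 }' ping.log)
echo "loss=$loss avg=$avg"    # prints: loss=0 avg=39ms
```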

Does the clone() system call ultimately rely on fork functionality?

The critical difference is that fork creates a new address space, while clone optionally shares the address space between the parent and child, as well as file handles and so forth.



Actually, at the conceptual level, the Linux kernel doesn't know anything about processes or threads, it only knows about "tasks".

A Linux task can be a process, a thread or something in between. (Incidentally, this means that the strange children that vfork() creates fit perfectly well into the Linux "task" paradigm).

Now, tasks can share several things, see all the CLONE_* flags in the manpage for clone(2). (Not all these flags can be described as sharing, some specify more complex behaviours).

Or new tasks can choose to have their own copies of the respective resources. And since 2.6.16, they can do so after having been started, see unshare(2).

For instance, the only difference between a vfork() and a fork() call, is that vfork() has CLONE_VM and CLONE_VFORK set. CLONE_VM makes it share its parent's memory (the same way threads share memory), while CLONE_VFORK makes the parent block until the child releases its memory mappings (by calling execve() or _exit()).

Note that Linux is not the only OS to generalize processes and threads in this manner. Plan 9 has rfork().


fork()-->C_lib-->sys_fork()-->do_fork()

vfork()-->C_lib-->sys_vfork()-->do_fork()

clone()-->C_lib-->sys_clone()-->do_fork()

Monday, June 27, 2011

Adding a Linux device driver for Target Board

Adding a Linux device driver

On Linux systems, device drivers are typically distributed in one of three forms:

* A patch against a specific kernel version
* A loadable module
* An installation script that applies appropriate patches

The most common of all these is the patch against a specific kernel version. These patches can in most cases be applied with the following procedure:

# cd /usr/src/linux ; patch -p1 < patch_file

Diffs made against a different minor version of the kernel may fail, but the driver should still work.

Here, we cover how to manually add a network "snarf" driver to the kernel. It's a very complicated and tedious process, especially when compared to other operating systems we've seen.

By convention, Linux kernel source resides in /usr/src/linux. Within the drivers subdirectory, you'll need to find the subdirectory that corresponds to the type of device you have. A directory listing of drivers looks like this:

% ls -F /usr/src/linux/drivers
Makefile  cdrom/  i2o/        nubus/    sbus/   telephony/
acorn/    char/   isdn/       parport/  scsi/   usb/
ap1000/   dio/    macintosh/  pci/      sgi/    video/
atm/      fc4/    misc/       pcmcia/   sound/  zorro/
block/    i2c/    net/        pnp/      tc/

The most common directories to which drivers are added are block, char, net, usb, sound, and scsi. These directories contain drivers for block devices (such as IDE disk drives), character devices (such as serial ports), network devices, USB devices, sound cards, and SCSI cards, respectively. Some of the other directories contain drivers for the buses themselves (e.g., pci, nubus, and zorro); it's unlikely that you will need to add drivers to these directories. Some directories contain platform-specific drivers, such as macintosh, acorn, and ap1000. Some directories contain specialty devices such as atm, isdn, and telephony.

Since our example device is a network-related device, we will add the driver to the directory drivers/net. We'll need to modify the following files:

* drivers/net/Makefile, so that our driver will be compiled
* drivers/net/Config.in, so that our device will appear in the config options
* drivers/net/Space.c, so that the device will be probed on startup

After putting the .c and .h files for the driver in drivers/net, we'll add the driver to drivers/net/Makefile. The lines we'd add (near the end of the file) follow:

ifeq ($(CONFIG_SNARF),y)
L_OBJS += snarf.o
else
  ifeq ($(CONFIG_SNARF),m)
  M_OBJS += snarf.o
  endif
endif

This configuration adds the snarf driver so that it can be either configured as a module or built into the kernel.

After adding the device to the Makefile, we have to make sure we can configure the device when we configure the kernel. All network devices need to be listed in the file drivers/net/Config.in. To add the device so that it can be built either as a module or as part of the kernel (consistent with what we claimed in the Makefile), we add the following line:

tristate 'Snarf device support' CONFIG_SNARF

The tristate keyword means you can build the device as a module. If the device cannot be built as a module, use the keyword bool instead of tristate. The next token is the string to display on the configuration screen. It can be any arbitrary text, but it should identify the device that is being configured. The final token is the configuration macro. This token needs to be the same as that tested for with the ifeq clause in the Makefile.

The last file we need to edit to add our device to the system is drivers/net/Space.c. Space.c contains references to the probe routines for the device driver, and it also controls the device probe order. Here, we'll have to edit the file in two different places. First we'll add a reference to the probe function, then we'll add the device to the list of devices to probe for.

At the top of the Space.c file are a bunch of references to other probe functions. We'll add the following line to that list:

extern int snarf_probe(struct device *);

Next, to add the device to the actual probe list, we need to determine which list to add it to. A separate probe list is kept for each type of bus (PCI, EISA, SBus, MCA, ISA, parallel port, etc.). The snarf device is a PCI device, so we'll add it to the list called pci_probes. The line that says

struct devprobe pci_probes[] __initdata = {

is followed by an ordered list of devices.
The devices higher up in the list are probed first. Probe order does not usually matter for PCI devices, but some devices are sensitive. Just to be sure the snarf device is detected, we'll add it to the top of the list:

struct devprobe pci_probes[] __initdata = {
#ifdef CONFIG_SNARF
    {snarf_probe, 0},
#endif

The device has now been added to the Linux kernel. When we next configure the kernel, the device should appear as a configuration option under "network devices."

Adding new kernel module to linux source tree

Posted on October 24, 2010 by Ravi Teja G

Kernel modules in Linux are important programs that can be loaded and unloaded at will without having to compile them into the Linux kernel image itself. All the device drivers are written using these loadable modules. Let us add a very basic sample kernel module. Add this file to the drivers/misc directory in the Linux kernel source:

drivers/misc/hello_world.c

#include <linux/module.h>
#include <linux/init.h>

static int __init hello_world_module_init(void)
{
    printk("Hello World, sample module is installed!\n");
    return 0;
}

static void __exit hello_world_module_cleanup(void)
{
    printk("Good-bye, sample module was removed!\n");
}

module_init(hello_world_module_init);
module_exit(hello_world_module_cleanup);
MODULE_LICENSE("GPL");

Next we have to add configuration settings so that our module can be enabled or disabled. Add these lines to the drivers/misc/Kconfig file:
config HELLO_WORLD_MODULE
	tristate "hello world module"
	depends on ARM
	default m if ARM
	help
	  hello world module.

The "depends on ARM" line states that this option can only be enabled if CONFIG_ARM is enabled, and the "default m if ARM" line states that the option defaults to being built as a module when CONFIG_ARM is enabled. Next we have to tell the kernel build system to compile hello_world.c when the HELLO_WORLD_MODULE configuration is enabled. Add this line to drivers/misc/Makefile:
obj-$(CONFIG_HELLO_WORLD_MODULE) += hello_world.o

We have successfully added a new module to the Linux kernel. Now let us compile and test our new module. We have to start from "make defconfig" so that our changes to the configuration files take effect.

$ export CROSS_COMPILE=arm-none-linux-gnueabi-
$ export ARCH=arm
$ make clean
$ make mini2440_defconfig
$ make menuconfig

Enable our newly added module under Device Drivers ---> Misc devices ---> hello world module. Now start compiling the modules:

$ make modules
$ make modules_install INSTALL_MOD_PATH=$ROOTFS

Here $ROOTFS is the target file system. Now to test our new module,

$ modprobe hello_world
Hello World, sample module is installed!
$ rmmod hello_world
Good-bye, sample module was removed!

U-boot_config_src

Das U-Boot
==> Unpacking the Source Code:
If you used GIT to get a copy of the U-Boot sources, then you can skip this next step since you already have
an unpacked directory tree. If you downloaded a compressed tarball from the DENX FTP server, you can
unpack it as follows:
$ cd /opt/eldk/usr/src
$ wget ftp://ftp.denx.de/pub/u-boot/u-boot-1.3.2.tar.bz2
$ rm -f u-boot
$ bunzip2 < u-boot-1.3.2.tar.bz2 | tar xf -
$ ln -s u-boot-1.3.2 u-boot
$ cd u-boot

==> Configuration:
After changing to the directory with the U-Boot source code you should make sure that there are no build
results from any previous configurations left:
$ make distclean
The following (model) command configures U-Boot for the canyonlands board:
$ make canyonlands_config
And finally we can compile the tools and U-Boot itself:
$ make all
By default the build is performed locally and the objects are saved in the source directory. One of two
methods can be used to change this behaviour and build U-Boot in some external directory:
1. Add O= to the make command line invocations:
make O=/tmp/build distclean
make O=/tmp/build canyonlands_config
make O=/tmp/build all
Note that if the 'O=output/dir' option is used then it must be used for all invocations of make.
2. Set environment variable BUILD_DIR to point to the desired location:
export BUILD_DIR=/tmp/build
make distclean
make canyonlands_config
make all
Note that the command line "O=" setting overrides the BUILD_DIR environment variable.
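The precedence rule above can be sketched in plain shell. The effective_build_dir helper below is illustrative, not part of U-Boot; it simply mirrors the documented behaviour: an O= value on the make command line wins, otherwise BUILD_DIR is used, otherwise the build stays in the source tree.

```shell
#!/bin/sh
# Mirror U-Boot's choice of output directory:
# a command-line O= setting overrides the BUILD_DIR environment variable.
effective_build_dir() {
    o_opt=$1        # value passed as O= (may be empty)
    env_dir=$2      # value of $BUILD_DIR (may be empty)
    if [ -n "$o_opt" ]; then
        echo "$o_opt"
    elif [ -n "$env_dir" ]; then
        echo "$env_dir"
    else
        echo "."     # default: build in the source tree
    fi
}

effective_build_dir "/tmp/build" "/home/user/out"   # prints: /tmp/build
```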

Kernel Configuration and Compilation

Embedded Linux Configuration
===>> Download and Unpack the Linux Kernel Sources


>> To be sure that no intermediate results of previous builds are left in your Linux kernel source tree, you can
clean it up as follows:

-- bash# make mrproper

>> The following command selects a standard configuration for the canyonlands board that has been extensively
tested. It is recommended to use this as a starting point for other, customized configurations:

-- bash# make ARCH=powerpc CROSS_COMPILE=ppc_4xx- canyonlands_defconfig

Note: The name of this default configuration file is arch/powerpc/configs/XXX_defconfig . By
listing the contents of the arch/powerpc/configs/ directory you can easily find out which other default
configurations are available.

>> If you don't want to change the default configuration you can now continue to use it to build a kernel image:

-- bash# make ARCH=powerpc CROSS_COMPILE=ppc_4xx- uImage
-- bash# cp arch/powerpc/boot/uImage /tftpboot

>> Otherwise you can modify the kernel configuration as follows:
-- bash$ make ARCH=powerpc CROSS_COMPILE=ppc_4xx- config
OR
-- bash$ make ARCH=powerpc CROSS_COMPILE=ppc_4xx- menuconfig
Note: Because of problems (especially with some older Linux kernel versions) the use of "make xconfig"
is not recommended.

-- bash$ make ARCH=powerpc CROSS_COMPILE=ppc_4xx- uImage

The make target uImage uses the tool mkimage (from the U-Boot package) to create a Linux kernel image in
arch/powerpc/boot/uImage


which is immediately usable for download and booting with U-Boot.

In case you configured modules you will also need to compile the modules:

-- make ARCH=powerpc CROSS_COMPILE=ppc_4xx- modules

and install the modules (make sure to pass the correct root path for module installation):

-- bash$ make ARCH=powerpc CROSS_COMPILE=ppc_4xx- INSTALL_MOD_PATH=/opt/eldk-4.2/ppc_4xx modules_install

Friday, June 24, 2011

How to remove duplicate entries in a file without sorting

awk is a programming language designed for processing text-based data, either in files or data streams, and was created in the 1970s at Bell Labs; GNU awk (gawk) is the GNU project's implementation of it.

To remove duplicate entries without sorting them, enter:
gawk '!x[$0]++' filename
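To see why this one-liner works: x[$0]++ counts how many times each whole line has been seen, and the leading ! makes the expression true (so the line is printed) only on the first occurrence. A quick check using a throwaway file (the file name dupes.txt is just for illustration; plain POSIX awk behaves the same as gawk here):

```shell
#!/bin/sh
# Build a file with duplicate lines, then filter it.
cat > dupes.txt <<'EOF'
apple
banana
apple
cherry
banana
EOF

# Prints each distinct line once, preserving first-seen order:
# apple, banana, cherry.
awk '!x[$0]++' dupes.txt
```

Unlike sort -u, this keeps the original order of the lines, which is the whole point of the trick.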

Key board shortcuts for linux

In this article I will show you some keyboard shortcuts and other command line tricks to make entering commands easier and faster. Learning them can make your life a lot easier!

Here are some keyboard shortcuts you can use within terminal:

Alt-r Undo all changes to the line.
Alt-Ctrl-e Expand command line.
Alt-p Non-incremental reverse search of history.
Alt-] x Moves the cursor forward to the next occurrence of x.
Alt-Ctrl-] x Moves the cursor backwards to the previous occurrence of x.
Ctrl-a Move to the start of the line.
Ctrl-e Move to the end of the line.
Ctrl-u Delete from the cursor to the beginning of the line.
Ctrl-k Delete from the cursor to the end of the line.
Ctrl-w Delete from the cursor to the start of the word.
Ctrl-y Paste (yank) the most recently cut text back at the cursor.
Ctrl-l Clear the screen leaving the current line at the top of the screen.
Ctrl-x Ctrl-u or Ctrl-_ Undo the last change.
Ctrl-r Incremental reverse search of history.
!! Execute last command in history
!abc Execute last command in history beginning with abc
!n Execute nth command in history
^abc^xyz Replace first occurrence of abc with xyz in last command and execute it

Recovery of the root password in BSD

The password cannot be recovered, but the following procedure will allow you to change the root password.

Do the following:

- when the following boot message appears

Hit [ENTER] to boot immediately, or any other key for command prompt.
Booting [kernel] in 10 seconds...

hit a key, any one EXCEPT the ENTER key. You'll get a prompt like:

disk1s1a:>

- type the following commands:

disk1s1a:>unload all
disk1s1a:>load kernel
disk1s1a:>boot -s

The boot process will now start; just wait until it asks you for a shell.
Hit ENTER and 'sh' will be used as the shell.

If you type 'mount' you will see that only your root partition ( / ) is mounted; you will have to mount the /usr partition as well.

#mount /dev/ad0s1f /usr

Now you have to remount the root partition read-write. Use the following command:

#mount -u /

The root partition should now be mounted read-write. Now you can use the 'passwd' program to
change the root password.

#passwd

That's all, reboot the system and login with the new password.

vi editor short cuts

vi is a family of screen-oriented text editors which share certain characteristics, such as methods of invocation from the operating system command interpreter, and characteristic user interface features. The portable subset of the behavior of vi programs, and the ex editor language supported within these programs, is described by the Single Unix Specification and POSIX.

vi operates in either insert mode (where typed text becomes part of the document) or normal mode (where keystrokes are interpreted as commands). Typing "i" while in normal mode switches the editor to insert mode. Typing "i" again at this point places an "i" in the document. From insert mode, pressing the escape key switches the editor back to normal mode.

vi basic commands

:set ic
ignore case differences when searching.
:set ai
set automatic indent.
:set sm
show matching ( or { with ) or } in insert mode.
:set nu
show line numbers.

down-arrow up-arrow
move down/up 1 line.
right-arrow left-arrow
move right/left 1 character column.
0 $
go to 1st/last column of current line.
return
go down to 1st printable character of next line.
nw nb
move right/left n words (1 word if n omitted).
nG
go to line n (end of file if n omitted).
ctrl-f ctrl-b
page forward/backward 1 screen.
ctrl-d ctrl-u
page forward/backward half a screen.
[[ ]]
go to beginning of current/next c function.

/expressionreturn
search forwards for expression.
?expressionreturn
search backwards for expression.
n N
repeat last / or ? command in same/reverse direction.

ytarget
copy (yank) text up to target to buffer.
yy
copy current line to buffer.

itextesc
insert text before cursor.
otextesc
open new line below cursor and insert text.
r
replace character under cursor with next typed.
Rtextesc
replace text.

backspace
in insert mode, delete character before cursor.
x X
delete character under/before cursor.
nx
delete n characters under and to right of cursor.
nX
delete n characters before cursor.
dd
delete current line.
ndd
delete n lines.
D
delete from cursor to end of line.

p P
put back yanked or deleted text below/above current line.
J
join current and next lines.
:m,n s/old/new/gc
global replace (g=every occurrence on line, c=prompt);

m=.
means from current position, n=$ means to eof.
u
undo last change.

:q
quit, provided no changes were made.
:q!
quit without saving.
:w
save (write) changes.
:m,n w file
save lines m through n (default=all) to file.
:x
save changes and quit.
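The :m,n s/old/new/g substitution syntax shown above is not unique to vi; sed uses essentially the same substitution language, so you can try a replacement non-interactively before running it inside the editor. A small sketch (the sample file and pattern are illustrative):

```shell
#!/bin/sh
# vi's  :1,$ s/old/new/g  is the same substitution applied here with sed.
cat > sample.txt <<'EOF'
old line one
old line two
EOF

# Global replace on every line, like g in vi's :s command.
sed 's/old/new/g' sample.txt
```

Note that sed prints the transformed text without touching the file, which makes it a safe dry run for a substitution you plan to do in vi.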

How To change Ethernet network card speed and duplex settings in Linux

This tutorial explains how to change network card speed and duplex settings in Linux. It works with any Linux distribution, such as Fedora, CentOS, Debian, and Ubuntu.

ethtool is a Linux command for modifying NIC parameters. It can be used to query and change settings such as speed, negotiation, and checksum offload on many network devices, especially Ethernet devices.

1. Install ethtool

Install ethtool in Fedora and CentOS:

# yum install ethtool

Install ethtool in Debian:

# apt-get install ethtool

Install ethtool in Ubuntu:

# sudo apt-get install ethtool

2. Using ethtool

You can check the current Ethernet network card speed and duplex settings using the following command:

# ethtool eth0

or

# sudo ethtool eth0

if you are using Ubuntu.

where eth0 is the Ethernet network card interface.

Output:
Settings for eth0:
Supported ports: [ TP MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: Full
Port: MII
PHYAD: 1
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: g
Wake-on: g
Current message level: 0x00000007 (7)
Link detected: yes
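Output like the above is easy to scrape in scripts. The sketch below saves a shortened sample of the report to a file and extracts the Speed and Duplex fields with awk; the saved sample mirrors the output shown above, whereas a real script would read from "ethtool eth0" instead:

```shell
#!/bin/sh
# Parse Speed and Duplex out of (saved) ethtool output.
cat > ethtool.out <<'EOF'
Settings for eth0:
  Speed: 100Mb/s
  Duplex: Full
  Auto-negotiation: off
  Link detected: yes
EOF

get_field() {
    # Match the requested key in the first column, print the value.
    awk -F': ' -v key="$1" '$1 ~ key { print $2 }' ethtool.out
}

echo "speed=$(get_field Speed) duplex=$(get_field Duplex)"
# prints: speed=100Mb/s duplex=Full
```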

Turn off the auto-negotiation feature using the following command:

# ethtool -s eth0 autoneg off

or

# sudo ethtool -s eth0 autoneg off

if you are using Ubuntu.

3. ethtool Syntax

# ethtool -s eth0 speed SPEED duplex DUPLEX

Examples:

This example shows how to set your NIC to speed 10 and half duplex:
# ethtool -s eth0 speed 10 duplex half

This example shows how to set your NIC to speed 100 and full duplex:
# ethtool -s eth0 speed 100 duplex full

Monitoring Network traffic by process

NetHogs is a small network monitoring tool. Instead of breaking the traffic down per protocol or per subnet, like most tools do, it groups bandwidth by process. NetHogs does not rely on a special kernel module being loaded. If there's suddenly a lot of network traffic, you can fire up NetHogs and immediately see which process is causing it. This makes it easy to identify programs that have gone wild and are suddenly taking up your bandwidth.

To install NetHogs under CentOS, Fedora, RHEL, enter:
# yum install nethogs

To install NetHogs under Debian and Ubuntu, enter:
# apt-get install nethogs

The default network interface to monitor is eth0. If you wish to monitor another device, simply pass it as an argument to nethogs. Open the terminal and run the following command:
# nethogs eth0

usage: nethogs [-V] [-b] [-d seconds] [-t] [-p] [device [device [device ...]]]
-V : prints version.
-d : delay for update refresh rate in seconds. default is 1.
-t : tracemode.
-b : bughunt mode - implies tracemode.
-p : sniff in promiscious mode (not recommended).
device : device(s) to monitor. default is eth0

When nethogs is running, press:
q: quit
m: switch between total and kb/s mode

How to change MAC Address in Linux & BSD

Media Access Control address (MAC address) is a unique identifier assigned to most network adapters or network interface cards (NICs) by the manufacturer for identification, and used in the Media Access Control protocol sub-layer. If assigned by the manufacturer, a MAC address usually encodes the manufacturer's registered identification number. It may also be known as an Ethernet Hardware Address (EHA), hardware address, adapter address, or physical address.

1. Change MAC Address in Linux ( CentOS, Debian, Fedora, RHEL, Slackware, SuSE, Ubuntu )

# ifconfig [interface name] down
# ifconfig [interface name] hw ether [new MAC address]
# ifconfig [interface name] up

Example:

# ifconfig eth0 down
# ifconfig eth0 hw ether 1A:2B:3C:4D:5E:6F
# ifconfig eth0 up

2. Change MAC Address in FreeBSD

# ifconfig [interface name] down
# ifconfig [interface name] ether [new MAC address]
# ifconfig [interface name] up

Example:

# ifconfig xl0 down
# ifconfig xl0 ether 1A:2B:3C:4D:5E:6F
# ifconfig xl0 up

3. Change MAC address in HP-UX

Under HP-UX, you can change the MAC address in SAM by selecting Networking and Communications, then selecting the interface, then Action, Modify, Advanced Options. HP-UX refers to the MAC address as the "station address".

4. Change MAC address in IBM AIX

Set the alternate MAC address for the NIC (-P makes the change persistent):
# chdev -l ent0 -a alt_addr=[new MAC address] -P

Tell the adapter to use the alternate address:
# chdev -l ent0 -a use_alt_addr=yes -P

# reboot

5. Change MAC address in Mac OS X

From Mac OS X 10.4.x (Darwin 8.x) onwards, the MAC address of a wired Ethernet interface can be altered in Apple Mac OS X in a fashion similar to the FreeBSD/OpenBSD method.

# sudo ifconfig [interface name] down
# sudo ifconfig [interface name] ether 1A:2B:3C:4D:5E:6F
# sudo ifconfig [interface name] up

or

# sudo ifconfig [interface name] lladdr aa:bb:cc:dd:ee:ff (for Mac OS X 10.5 Leopard)

Example:

# sudo ifconfig en0 down
# sudo ifconfig en0 ether 1A:2B:3C:4D:5E:6F
# sudo ifconfig en0 up

or

# sudo ifconfig en0 down
# sudo ifconfig en0 lladdr 1A:2B:3C:4D:5E:6F
# sudo ifconfig en0 up

6. Change MAC address in OpenBSD

# ifconfig [interface name] down
# ifconfig [interface name] lladdr [new MAC address]
# ifconfig [interface name] up

Example:

# ifconfig bge1 down
# ifconfig bge1 lladdr 1A:2B:3C:4D:5E:6F
# ifconfig bge1 up

7. Change MAC address in Solaris

# ifconfig [interface name] down
# ifconfig [interface name] ether [new MAC address]
# ifconfig [interface name] up

Example:

# ifconfig hme0 down
# ifconfig hme0 ether 1A:2B:3C:4D:5E:6F
# ifconfig hme0 up

Monday, June 6, 2011

Converting fork() and exec() Usage to spawn()

The spawn() function provides a fast, low-overhead mechanism for creating a new POSIX process to run a new program. This is the typical usage of the POSIX.1 fork() function. OpenExtensions includes the POSIX.1d spawn() definition, which was included in the standard to handle the following operations in one function:

1. Create a new process.
2. Perform operations typically done in the new process to prepare to run a new program. This includes file descriptor mapping, changing process group membership, job control, and altering the signal handling environment.
3. Invoke the new program through exec().

To convert an application from using fork() and exec() to using spawn(), follow these steps:

1. Replace the call to fork() with a call to spawn(), using the program name and program parameters from the exec() call.
2. Delete the call to exec().
3. Determine the other parameters to spawn() by examining the calls made between the fork() and the subsequent exec() to change the environment for the new program:
* Calls to dup2() should be replaced by entries in the file descriptor array.
* The mask value in any sigmask() calls should be used in the signal mask member of the inheritance structure.
* Signals whose actions are defaulted through sigaction() calls should be included in the sigdefault member of the inheritance structure.
* A call to setpgid() should be replaced by an entry in the process group member of the inheritance structure.
* A call to tcsetpgrp() should be replaced by an entry in the inheritance structure.

Saturday, June 4, 2011

mmap()

Let’s consider a simple example program that uses mmap() to print a file chosen by the user to standard out:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main (int argc, char *argv[])
{
	struct stat sb;
	off_t len;
	char *p;
	int fd;

	if (argc < 2) {
		fprintf (stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	fd = open (argv[1], O_RDONLY);
	if (fd == -1) {
		perror ("open");
		return 1;
	}

	if (fstat (fd, &sb) == -1) {
		perror ("fstat");
		return 1;
	}

	if (!S_ISREG (sb.st_mode)) {
		fprintf (stderr, "%s is not a file\n", argv[1]);
		return 1;
	}

	p = mmap (0, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror ("mmap");
		return 1;
	}

	if (close (fd) == -1) {
		perror ("close");
		return 1;
	}

	for (len = 0; len < sb.st_size; len++)
		putchar (p[len]);

	if (munmap (p, sb.st_size) == -1) {
		perror ("munmap");
		return 1;
	}

	return 0;
}

The only unfamiliar system call in this example should be fstat(), which we will cover in Chapter 7. All you need to know at this point is that fstat() returns information about a given file. The S_ISREG() macro can check some of this information, so that we can ensure that the given file is a regular file (as opposed to a device file or a directory) before we map it. The behavior of nonregular files when mapped depends on the backing device. Some device files are mmap-able; other nonregular files are not mmap-able, and will set errno to EACCES.

The rest of the example should be straightforward. The program is passed a filename as an argument. It opens the file, ensures it is a regular file, maps it, closes it, prints the file byte-by-byte to standard out, and then unmaps the file from memory.

Advantages of mmap()

Manipulating files via mmap() has a handful of advantages over the standard read() and write() system calls. Among them are:

1. Reading from and writing to a memory-mapped file avoids the extraneous copy that occurs when using the read() or write() system calls, where the data must be copied to and from a user-space buffer.
2. Aside from any potential page faults, reading from and writing to a memory-mapped file does not incur any system call or context switch overhead. It is as simple as accessing memory.
3. When multiple processes map the same object into memory, the data is shared among all the processes. Read-only and shared writable mappings are shared in their entirety; private writable mappings have their not-yet-COW (copy-on-write) pages shared.
4. Seeking around the mapping involves trivial pointer manipulations. There is no need for the lseek() system call.
For these reasons, mmap() is a smart choice for many applications.

Disadvantages of mmap()

There are a few points to keep in mind when using mmap():

1. Memory mappings are always an integer number of pages in size. Thus, the difference between the size of the backing file and an integer number of pages is “wasted” as slack space. For small files, a significant percentage of the mapping may be wasted. For example, with 4 KB pages, a 7 byte mapping wastes 4,089 bytes.
2. The memory mappings must fit into the process’ address space. With a 32-bit address space, a very large number of various-sized mappings can result in fragmentation of the address space, making it hard to find large free contiguous regions. This problem, of course, is much less apparent with a 64-bit address space.
3. There is overhead in creating and maintaining the memory mappings and associated data structures inside the kernel. This overhead is generally obviated by the elimination of the double copy mentioned in the previous section, particularly for larger and frequently accessed files.

For these reasons, the benefits of mmap() are most greatly realized when the mapped file is large (and thus any wasted space is a small percentage of the total mapping), or when the total size of the mapped file is evenly divisible by the page size (and thus there is no wasted space).

Monday, March 7, 2011

Packing structures / enums

There are various ways in which this can be done.

1. using #pragma pack()

#pragma pack(2)
typedef struct
{
char c;
int i;
} DataType;
#pragma pack()


This would pack the structure on a 2-byte boundary. If tight packing is required, use #pragma pack(1) instead. Compile normally with gcc.

2. Using -fpack-struct
Instead of using the #pragma, we can pass compiler flags directly:

$gcc -Wall -fpack-struct -fshort-enums test.c -o test

This packs all structs to a 1-byte boundary and makes enums use the smallest integer type that fits their values instead of int.

3. Using __attribute__ ((__packed__))


typedef struct
{
char c;
int i;
} __attribute__ ((__packed__)) DataType;

Compile the code normally.

We can also do it this way


typedef struct
{
char c __attribute__ ((__packed__));
int i1 __attribute__ ((__packed__));
int i2;
} DataType;

Write your own system call in 5 easy steps

You *might* want to write your own system call for various reasons.

Assuming the path to your kernel source is "L", create a new folder L/mysyscall. Inside it, create the source file mysyscall.c and a Makefile.

Step 1. Changing the System Table
L/arch/x86/kernel/syscall_table_32.S

Add your system call at the end of the file.

.long sys_new_system_call

Step 2. Changing the unistd.h
L/include/asm-x86/unistd_32.h

Add your system call at the end of the existing list, assigning it the next number:

#define __NR_new_system_call XXX

Where XXX is the existing system call number plus 1. Also update the total system calls (as you just added another)
#define __NR_syscalls XXY
Where XXY is XXX+1

Step 3: Changing syscalls.h
L/include/linux/syscalls.h

Add the declaration of your system call at the end.
asmlinkage long new_system_call (whatever params you want to pass)

Step 4: Changing the kernel Makefile
Add the new folder to the kernel compile

core-y += kernel/ blah/ blah/ blah/ mysyscall/

Step 5: Write your system call

Write whatever crap you want to write inside the mysyscall.c file

asmlinkage long new_system_call (whatever params you want to pass)
{
// whatever you want to do
}

Change the makefile as well and add the following line

obj-y := mysyscall.o

Compile your kernel and test the system call from a user level program. You can create a header file that the user space program can use.

/* header.h */
#include <linux/unistd.h>
#define __NR_new_system_call XXX

/* if your system call returns int and takes no parameter
* use this macro
*/
_syscall0(int,new_system_call)

/* Otherwise, depending on the number of parameters
* being passed use the _syscallN macro, N being the no
* of params, like
_syscall1(int, new_system_call, int)
*/

Last thing to do is to test the code:

/* test client */
#include "header.h"

int main (void)
{
printf ("System call returned %d \n", new_system_call());
return 0;
}

NOTE
Starting around kernel 2.6.18, the _syscallXX macros were removed from the header files supplied to user space. Instead, we need to use the syscall() function.

printf ("System call returned %d \n", syscall (__NR_new_system_call, params_if_any));

or, make the following changes in the header.h

/* header.h */
#include <linux/unistd.h>
#include <sys/syscall.h>
#define __NR_new_system_call XXX

long new_system_call (params_if_any)
{
return syscall (__NR_new_system_call, params_if_any);
}

Shared libraries

gcc -shared -Wl,-soname,your_soname -o library_name file_list library_list

Which means, if you have a.c and b.c:

gcc -fPIC -g -c -Wall a.c
gcc -fPIC -g -c -Wall b.c
gcc -shared -Wl,-soname,libmystuff.so.1 \
-o libmystuff.so.1.0.1 a.o b.o -lc

Then create the symbolic links libmystuff.so and libmystuff.so.1 pointing to libmystuff.so.1.0.1. Don't forget to either install the library in a standard path such as /usr/local/lib or add its directory to LD_LIBRARY_PATH before executing.

Thursday, March 3, 2011

POSIX Semaphores

Semaphores

POSIX 1003.1b semaphores provide an efficient form of interprocess communication. Cooperating processes can use semaphores to synchronize access to resources, most commonly, shared memory. Semaphores can also protect the following resources available to multiple processes from uncontrolled access:

* Global variables, such as file variables, pointers, counters, and data structures. Protecting these variables prevents simultaneous access by more than one process, such as reading information as it is being written by another process.
* Hardware resources, such as disk and tape drives. Hardware resources require controlled access because simultaneous access can result in corrupted data.

This chapter includes the following sections:

* Overview of Semaphores
* The Semaphore Interface
* Semaphore Example

Overview of Semaphores

Semaphores are used to control access to shared resources by processes. Counting semaphores have a positive integral value representing the number of processes that can concurrently lock the semaphore.

There are named and unnamed semaphores. Named semaphores provide access to a resource between multiple processes. Unnamed semaphores provide multiple accesses to a resource within a single process or between related processes. Some semaphore functions are specifically designed to perform operations on named or unnamed semaphores.

The semaphore lock operation checks to see if the resource is available or is locked by another process. If the semaphore’s value is a positive number, the lock is made, the semaphore value is decremented, and the process continues execution. If the semaphore’s value is zero or a negative number, the process requesting the lock waits (is blocked) until another process unlocks the resource. Several processes may be blocked waiting for a resource to become available.

The semaphore unlock operation increments the semaphore value to indicate that the resource is not locked. A waiting process, if there is one, is unblocked and it accesses the resource. Each semaphore keeps count of the number of processes waiting for access to the resource.

Semaphores are global entities and are not associated with any particular process. In this sense, semaphores have no owners, making it impossible to track semaphore ownership for any purpose, for example, error recovery.

Semaphore protection works only if all the processes using the shared resource cooperate by waiting for the semaphore when it is unavailable and incrementing the semaphore value when relinquishing the resource. Since semaphores lack owners, there is no way to determine whether one of the cooperating processes has become uncooperative. Applications using semaphores must carefully detail cooperative tasks. All of the processes that share a resource must agree on which semaphore controls the resource.

POSIX 1003.1b semaphores are persistent. The value of the individual semaphore is preserved after the semaphore is no longer open. For example, a semaphore may have a value of 3 when the last process using the semaphore closes it. The next time a process opens that semaphore, it will find the semaphore has a value of 3. For this reason, cleanup operations are advised when using semaphores.

Note that because semaphores are persistent, you should call the sem_unlink function after a system reboot. After calling sem_unlink, you should call the sem_open function to establish new semaphores.

The semaphore descriptor is inherited across a fork. A parent process can create a semaphore, open it, and fork. The child process does not need to open the semaphore and can close the semaphore if the application is finished with it.

The Semaphore Interface

The following functions allow you to create and control P1003.1b semaphores:
Function      Description
sem_close     Deallocates the specified named semaphore
sem_destroy   Destroys an unnamed semaphore
sem_getvalue  Gets the value of a specified semaphore
sem_init      Initializes an unnamed semaphore
sem_open      Opens/creates a named semaphore for use by a process
sem_post      Unlocks a locked semaphore
sem_trywait   Locks a semaphore only if it can do so without waiting for another process to unlock it
sem_unlink    Removes a specified named semaphore
sem_wait      Performs a semaphore lock on a semaphore

You create an unnamed semaphore with a call to the sem_init function, which initializes a counting semaphore with a specific value. To create a named semaphore, call sem_open with the O_CREAT flag specified. The sem_open function establishes a connection between the named semaphore and a process.

Semaphore locking and unlocking operations are accomplished with calls to the sem_wait, sem_trywait, and sem_post functions. You use these functions for named and unnamed semaphores. To retrieve the value of a counting semaphore, use the sem_getvalue function.

When the application is finished with an unnamed semaphore, the semaphore is destroyed with a call to sem_destroy. To deallocate a named semaphore, call the sem_close function. The sem_unlink function removes a named semaphore. The semaphore is removed only when all processes using the semaphore have deallocated it using the sem_close function.

Creating and Opening a Semaphore

A call to the sem_init function creates an unnamed counting semaphore with a specific value. If you specify a non-zero value for the pshared argument, the semaphore can be shared between processes. If you specify the value zero, the semaphore can be shared among threads of the same process.

The sem_open function establishes a connection between a named semaphore and the calling process. Two flags control whether the semaphore is created or only accessed by the call. Set the O_CREAT flag to create a semaphore if it does not already exist. Set the O_EXCL flag along with the O_CREAT flag to indicate that the call to sem_open should fail if the semaphore already exists.

Subsequent to creating a semaphore with either sem_init or sem_open, the calling process can reference the semaphore by using the semaphore descriptor address returned from the call. The semaphore is available in subsequent calls to the sem_wait, sem_trywait, and sem_post functions, which control access to the shared resource. You can also retrieve the semaphore value by calls to sem_getvalue.

If your application consists of multiple processes that will use semaphores to synchronize access to a shared resource, each of these processes must first open the semaphore by a call to the sem_open function. After the initial call to the sem_init or sem_open function to establish the semaphore, each cooperating process must also call the sem_open function. If all cooperating processes are in the same working directory, just the name is sufficient. If the processes are contained in different working directories, the full pathname must be used. It is strongly recommended that the full pathname be used, such as /tmp/mysem1. The directory must exist for the call to succeed.

API :

An API is a functional interface supplied by the operating system or a separately orderable licensed program that allows an application program written in a high-level language to use specific data or functions of the operating system or the licensed program.

Some APIs provide the same functions as control language (CL) commands and output file support. Some APIs provide functions that CL commands do not. Most APIs work more quickly and use less system overhead than the CL commands.

API use has the following advantages:

* APIs provide better performance when getting system information or when using system functions that are provided by CL commands or output file support.
* APIs provide system information and functions that are not available through CL commands.
* You can use calls from high-level languages to APIs.
* You can access system functions at a lower level than what was initially provided on the system.
* Data is often easier to work with when returned by an API.

Friday, February 18, 2011

Zenith Infotech Tel Round :
=========================
1. Write a program to set a particular bit in a given number.
2. Program to convert little endian to big endian
3. Difference b/w MACRO and INLINE
4. Disadvantage of INLINE functions.
5. Difference b/w function and INLINE function.
6. What is function pointer
7. Write the syntax for an array of 10 function pointers that take an integer, a char, and a float as arguments and return an integer.
8. Diff b/w Semaphores and Spin locks, when to use semaphore and when to use spin locks, what is interrupt.
9. what is the diff b/w soft IRQ and work queue.
10. What is a major number, and how do we get a major number.
11. How does the kernel detect devices.
12. How to debug the USB driver.
13. How to find and debug the memory leaks in the kernel.
14. I want to disable a particular interrupt; which API do I have to use.
15. What is logical address and linear address.
16. What is the purpose of nmap.
17. Communication b/w client and server in TCP sockets
18. Socket system call parameters.
19. Is it possible to use Spin locks for uni processor systems.