Grep with context

Use the -Cx switch where “x” is the number of lines of surrounding context to show. For example, to show four lines of context:

grep -C4 "virtual" /usr/local/bin/httpd
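To see the effect on a throwaway file (the file name and contents below are made up for illustration), -C1 prints one line of context above and below each match:

```shell
# Create a small test file, then grep with one line of context.
printf 'one\ntwo\nthree\nfour\nfive\n' > /tmp/grep-demo.txt
grep -C1 "three" /tmp/grep-demo.txt
# prints:
#   two
#   three
#   four
```

The related -Bx and -Ax switches show only the lines before or only the lines after each match, respectively.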

Processor optimizations before "./configure"

Boost the speed of your compiled source code by prefixing CPU architecture flags to the ./configure command (when building from source). Let's say you have an Opteron-based server:

CFLAGS="-march=opteron -O3 -m64 -pipe -mtune=opteron -fomit-frame-pointer" ./configure
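This works because a variable assignment prefixed to a command is exported only for that one command, so your shell environment stays clean. You can demonstrate the mechanism with printenv standing in for ./configure:

```shell
# The assignment applies only to this single command invocation.
CFLAGS="-march=opteron -O3 -pipe" printenv CFLAGS
# → -march=opteron -O3 -pipe

# The variable is not set in the surrounding shell afterward:
printenv CFLAGS
# → (no output)
```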

Disable VNC's built-in webserver

I never use this feature, and just figured out how to turn it off:

1) Open the VNC server Perl script.
vi /usr/bin/vncserver

2) Scroll down to approximately line 140. Find either of the following lines and comment it out:
# $cmd .= " -httpd $vncJavaFiles" if ($vncJavaFiles);
# $cmd .= " -httpd $vncClasses";

3) Restart the VNC service.
service vncserver restart

4) Verify that port 5800 is no longer listening with the command "netstat -pantl | grep LISTEN".

Jabber2 and Active Directory

The Jabber documentation tells you to connect to Active Directory (AD) on the common LDAP port of 389. While this setup is fine for minimal AD implementations, when you have a complicated AD forest with multiple trees, you need to point Jabber authentication at the Global Catalog instead.

More specifically, in the c2s.xml file, you should have an entry like this:
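As a sketch, a jabberd2 authreg section pointed at the Global Catalog might look like the following. Port 3268 is the standard Global Catalog LDAP port; the hostname, DNs, and password here are placeholders for your environment:

```xml
<authreg>
  <module>ldap</module>
  <ldap>
    <!-- A Global Catalog server, not an ordinary domain controller -->
    <host>gc.example.com</host>
    <port>3268</port>
    <v3/>
    <binddn>cn=jabber,cn=Users,dc=example,dc=com</binddn>
    <bindpw>placeholder-password</bindpw>
    <!-- Match users on their AD logon name -->
    <uidattr>sAMAccountName</uidattr>
    <basedn>dc=example,dc=com</basedn>
  </ldap>
</authreg>
```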

Linux NIC teaming w/o additional software

After unsuccessfully trying several third-party drivers to provide load balancing and failover for my dual-NIC Linux servers, I stumbled upon the fact that Linux has an integrated bonding driver called, appropriately, "bonding." Here's how to get it working (assuming you have at least two interfaces):

1) Add the following lines in /etc/modules.conf:

alias bond0 bonding
options bond0 primary=eth0 mode=1 miimon=200
alias eth0 bcm5700
alias eth1 bcm5700

2) Create a file named /etc/sysconfig/network-scripts/ifcfg-bond0 containing the following (change the network info for your particular environment):
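A minimal static-IP example (the addresses, and the bcm5700-style hardware assumed above, are placeholders; adjust for your environment):

```
DEVICE=bond0
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
```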


3) Create a file named /etc/sysconfig/network-scripts/ifcfg-eth0 containing the following:
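A typical slave configuration; the slave carries no IP addressing of its own and simply enrolls in bond0:

```
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
```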


4) Create a file named /etc/sysconfig/network-scripts/ifcfg-eth1 containing the following:
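The second slave is identical except for the device name:

```
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
```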


5) Reboot the server.

6) Verify the team configuration by viewing /proc/net/bond0/info or by using the "ifconfig" command. The bond's MAC address will be taken from its first slave device.

You can set up your bond interface according to your needs. Changing one parameter (mode=X) in /etc/modprobe.conf selects the behavior you want. Here are the choices:

mode=0: Round-robin load balancing
mode=1: Active backup. Only one slave is active. A different slave becomes active if the active slave fails.
mode=2: Load balancing and fault tolerance.
mode=3: Broadcast. Transmits everything on all slave interfaces. Fault tolerant.
mode=4: Dynamic link aggregation (802.3ad). Bonding for combining bandwidth into a single connection; requires switch support.
mode=5: Adaptive transmit load balancing.
mode=6: Adaptive transmit and receive load balancing.
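For example, to switch the bond configured above from active-backup (mode=1) to 802.3ad link aggregation (mode=4), the options line would become the following (this assumes the switch ports are configured for LACP; the primary= parameter is dropped because it only applies to active-backup):

```
options bond0 mode=4 miimon=200
```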

Automatic IRC login in GAIM

1) Set up your IRC account.
2) Add NickServ to your buddy list.
3) Place a buddy pounce on NickServ:
a) In Gaim, go to Tools > Buddy Pounce > New Buddy Pounce
b) Add NickServ as the Buddy Name
c) Select your IRC account
d) Pounce When: Sign on
e) Send a message (the message is simply “identify password [password]”)
f) Check "Save this pounce after activation"
g) Click “Save”

mod_disk_cache vs mod_mem_cache

Apache is faster when using disk cache than memory cache. This seems counter-intuitive, as I expected that delivering content from RAM would be significantly faster than reading a file off a slower disk drive. However, after a conversation with "chipig" in the Apache IRC channel, I have seen the light.

If you use mod_mem_cache, when Apache receives a request from a browser for a file, that file's contents must first be read into memory and then sent out to the client via a communications endpoint. Copying the data into RAM, and then into a kernel buffer to send it, is wasteful and time-consuming. The server doesn't really want the contents of the file; it merely wants to send those contents to the browser.

If you use the mod_disk_cache module, Linux can use the sendfile() API. sendfile() eliminates the need for the server to read a file before sending it: the server passes the file to send and the communications endpoint to sendfile(), and the OS reads and sends the file itself. Thus the server doesn't have to issue a read call or dedicate memory to the file contents, and the OS can use its file-system cache to efficiently cache files that clients request.

And because the copying is done in the kernel, these disk accesses are buffered by the kernel, which pushes cache performance even higher.

Creating logical drives on HP servers at the CLI

Here's how you create new logical drives on an HP server from the Linux command line. The hpacucli tool is installed as part of the HP PSP.
1) Show all controllers on the server:
$ hpacucli controller all show

Smart Array 6i in Slot 0

2) Show the status of the controller:
$ hpacucli controller all show status

Smart Array 6i in Slot 0
Controller Status: OK
Cache Status: OK

3) Show all physical drives:
$ hpacucli ctrl slot=0 pd all show

Smart Array 6i in Slot 0

array A
physicaldrive 2:0 (port 2:id 0 , Parallel SCSI, 36.4 GB, OK)
physicaldrive 2:1 (port 2:id 1 , Parallel SCSI, 36.4 GB, OK)

physicaldrive 2:2 (port 2:id 2 , Parallel SCSI, 300 GB, OK)
physicaldrive 2:3 (port 2:id 3 , Parallel SCSI, 300 GB, OK)

4) Create a new RAID 1+0 logical drive on the two unassigned physical drives:
$ hpacucli ctrl slot=0 create type=ld drives=2:2,2:3 raid=1+0

[Note: All usable drive space will be used by default.]

5) Show the status of all logical drives:
$ hpacucli ctrl slot=0 logicaldrive all show status

logicaldrive 1 (33.9 GB, 1+0): OK
logicaldrive 2 (279 GB, 1+0): OK

6) Show all defined arrays:
$ hpacucli ctrl slot=0 logicaldrive all show

Smart Array 6i in Slot 0

array A
logicaldrive 1 (33.9 GB, RAID 1+0, OK)

array B
logicaldrive 2 (279 GB, RAID 1+0, OK)