Monday, December 20, 2010

NCAR-2010, Hyderabad by FSMI

FOSS has been the mantra.


While a lot of emphasis was laid on the inclusion of FOSS in academia and research, an evident lag did surface in the audience present at the National Convention for Academics and Research in FOSS, Hyderabad-2010. The important aspect intended by the Convention was not to empower the audience in FOSS, which is unlikely to be accomplished within a couple of days, but to instill in them the idea of enabling themselves to pursue and propagate FOSS.
With this as the primary objective of the Convention, numerous talks and discussions were organized by Free Software Movement-India, hosted by Swecha.


After having missed the first day of the Convention, whose highlight was the inaugural speech by the former President of India, the charismatic Dr. APJ Abdul Kalam, I had to rely on everybody else's reactions to learn of that day's proceedings.


When I inquired about Dr. Kalam's views, I realized that his talk had created a wave of euphoria amongst the Free and Open Source enthusiasts, as he had endorsed the idea of going Open Source while insisting on the adoption of FOSS in India. But his stand on Intellectual Property Rights and patents seemed to contradict the views he had recently presented in an appearance at the private company which employs me. It did come as a surprise that his stand has been this volatile. Nevertheless, his stature and the support he rendered to FOSS definitely add value to the Free Software Movement.

The usual dilemma of parallel sessions at conventions haunted me yet again. As a result, I was able to attend only half the number of sessions that were arranged.

Talking about the sessions, the first one I could participate in was about GNU's statistical package, simply called 'R', and its rich feature and application set. I might be able to include it in some of the prospective work I will be doing, and the session did give me some idea of how to use it. Another tool I got acquainted with was GRDSS, which is based on GRASS and now maintained at IISc. This is another tool I have been looking for. Now that the maintainers are at IISc, I am hoping for some breakthroughs in my work related to image processing.


One of the best sessions organized was on "FOSS in Teaching", which emphasized the disconnect between academia and industry. Four speakers tackled the topic from different perspectives, ultimately emphasizing the role and responsibility of FOSS in teaching, and of FOSS teaching in itself.

Later, the end of the second day witnessed a concluding panel discussion, which brainstormed intensely on the ways and methods through which FOSS can be spread, converging, as it happened, on the points I had made at the MiniDebConf: the deficits of propagation, percolation and perpetuation of Free and Open Source Software. A correlation to Indian culture, which has long emphasized the principle of 'Sharing and Growing', was accentuated, relating it to the driving motto of the Free Software Movement.

On the third and final day of the Convention, I took part in three excellent sessions.
The first session was about contributing to FOSS. The speakers outlined the role of the consumer-producer model in FOSS. Tools like 'git' and other basics were presented; 'git' gave the audience an idea of the version control and contribution mechanisms behind packages.


The second session was the crucial one: Open Standards Policy and e-Governance. This session had speakers from the National Informatics Centre, along with other members of the body drafting the Open Standards Policy. The policy itself was discussed, debated and analyzed. The positive impacts, as well as the uncertainties in the policy, were openly debated. Yet again, the united struggle of FOSS entities across India has succeeded in getting this landmark policy made. Now, it is also our responsibility to see that it is well implemented.



Of the technical sessions, the best for me personally was "Scientific Visualization tools", where the developers of Matplotlib and Mayavi2 demonstrated these amazing tools. This is one more reason I am convinced that Python is the language I am definitely going to learn. These tools have excellent abilities to facilitate research, and it is no surprise that NASA and the Indian Meteorological Department use them.


In the end, when all three days are cumulatively analyzed, there were a few of the usual flaws that are inevitable in any major event. But ultimately, I am left with increased awe and an amplified urge to pursue and propagate more of Free and Open Source Software.

Saturday, October 30, 2010

Installing and configuring Cacti

Cacti is a comprehensive monitoring tool for resource utilization of network equipment. We can view utilization graphs of incoming and outgoing traffic on the ports of any networking equipment which supports SNMP (any of its three versions).

In this post, I shall mention the steps involved in installing Cacti and getting it running on a Debian-based machine.


1. Prerequisite packages: Apache, MySQL, PHP. Basically, if the LAMP stack is installed, it would suffice. Installing LAMP is very straightforward using tasksel. Run the command and select the LAMP server option as shown:


root@fossphosis:~# tasksel 





2. Package dependencies to install Cacti: cacti, cacti-cactid (later renamed spine) and rrdtool.
The dependencies are taken care of by APT, so just type in the apt command as shown below.
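A typical invocation would look something like this (an assumption on the exact package names, which vary across Debian/Ubuntu releases; APT pulls in rrdtool and the other dependencies automatically):

root@fossphosis:~# apt-get install cacti cacti-spine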

3. While the packages are being installed, certain essential configurations have to be performed.
 i) Configuring the MySQL database entry for Cacti. This can be performed using dbconfig-common; select that option when prompted.

 ii) Enter the administrative password for MySQL, and also a password for the user 'cacti'.


 iii) Select the web server for running Cacti. Choose Apache2.

4. If things have gone fine, without any errors, check the following URL to reach the Cacti setup screen:

http://localhost/cacti
Congrats :)

5. Now that we have installed Cacti, some simple basic configurations have to be performed.




Finally, click on 'graphs' to see if Cacti is working fine.
After about 30 minutes, the resource utilization of the local host can be observed.
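If the graphs stay empty well beyond that, one thing worth verifying (an assumption based on how the Debian package usually sets things up; the path may differ on your release) is that the poller cron job is present and firing every five minutes:

root@fossphosis:~# cat /etc/cron.d/cacti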


Tuesday, October 19, 2010

Resetting mysql root password

As I had mentioned in my previous post, I was composing a post about installing and configuring Cacti. It so happened that I had forgotten my mysql root password on my system. And, as mysql is an essential back-end application for Cacti to run, I had to retrieve it. But, even after multiple attempts I didn't seem to be able to recollect it!

So, I had to reset the root password. After reading a couple of forums I was able to do it. As it seemed non-trivial, I decided to put it up here.

Follow these steps to reset the mysql root password in a GNU/Linux machine.

1. Log into the GNU/Linux machine where mysql is running.

2. Stop the mysql daemon

raghu@fossphosis:~$ sudo service mysql stop 
mysql stop/waiting

3. After stopping the mysql daemon, create a text file with the following content:

UPDATE mysql.user SET Password=PASSWORD('$new_password') WHERE User='root';
FLUSH PRIVILEGES;

The UPDATE statement resets the password for all root accounts, and the FLUSH statement tells the server to reload the grant tables into memory so that it notices the password change.

4. Save this text file, which now has the new password; call it, say, mysql-reset.
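For instance, the file can be created straight from the shell with a here-document (the quoted 'EOF' keeps $new_password literal, so substitute your actual new password before using it):

raghu@fossphosis:~$ cat > /home/raghu/mysql-reset <<'EOF'
UPDATE mysql.user SET Password=PASSWORD('$new_password') WHERE User='root';
FLUSH PRIVILEGES;
EOF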

5. Start mysql with the --init-file option:

raghu@fossphosis:~$ sudo mysqld_safe --init-file=/home/raghu/mysql-reset &

By doing this, we are getting the mysql server to execute the contents of the mysql-reset text file, whereby the root passwords are reset to the new value specified in the file, i.e., $new_password.

6. After the server starts successfully, delete the mysql-reset text file, for it has the new password in plain text.

7. You should now be able to log in to mysql as root with the new password.

raghu@fossphosis:~$ sudo mysql -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 82
Server version: 5.1.41-3ubuntu12.6 (Ubuntu)


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql> 


This is a very helpful procedure for someone like me, who keeps forgetting passwords ;-)

Saturday, October 9, 2010

SNMP based Network monitoring

This is an introductory post in the SNMP-based network management series of posts. In this post, I want to elaborate on the network monitoring tools which use SNMP (Simple Network Management Protocol) to poll network devices and map them with statistics and graphs.

Before delving into tools such as net-snmp, mrtg, cacti and nagios, a brief insight into the concepts and terminology of SNMP will help us appreciate the tools better.

Simple Network Management Protocol (SNMP) is a UDP-based network protocol. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. (wiki)

Simple Network Management Protocol is highly useful for monitoring the health and statistics, and graphing the resource utilization, of network devices like switches, routers, data multiplexers, Ethernet access devices, modems, servers and most other electronic devices that are part of a network.

In GNU/Linux machines, by default port 161 is used for SNMP queries to the agent and port 162 for SNMP traps.



An SNMP-managed network consists of three key components:
  • Managed device — the device to be monitored/managed, like a switch/router
  • Agent — software which runs on managed devices, like snmpd on Linux machines
  • Network management system (NMS) — software which runs on the manager, like nagios, cacti
To enhance the functionality of the SNMP tools and incorporate additional features like configuration of specific devices, the concept of the MIB is very helpful. MIB stands for Management Information Base, the hierarchical collection of objects which defines what can be queried or set on a device managed via SNMP.
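As a quick illustration of this manager-agent interaction, assuming the net-snmp command-line tools are installed and a device at 192.168.1.10 (an address made up for this example) runs an agent with the read community 'public', a single MIB object can be fetched like this:

raghu@fossphosis:~$ snmpget -v2c -c public 192.168.1.10 SNMPv2-MIB::sysDescr.0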

The SNMP protocol has grown through three versions so far.


SNMPv1: This is the first version of the protocol.

SNMPv2c: This is the revised protocol, which includes enhancements over SNMPv1 in the areas of protocol packet types, transport mappings and MIB structure elements, while retaining the existing SNMPv1 administration structure ("community-based", hence SNMPv2c).

SNMPv3: This defines the secure version of SNMP, adding authentication and encryption. SNMPv3 also facilitates remote configuration of SNMP entities.
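To sketch the difference, an SNMPv3 query replaces the plain community string with a user name and authentication/privacy passphrases (the user and passphrases below are made up for illustration):

raghu@fossphosis:~$ snmpget -v3 -l authPriv -u monitor -a SHA -A 'authpass123' -x AES -X 'privpass123' 192.168.1.10 SNMPv2-MIB::sysDescr.0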

In the posts to follow, we shall take a look at each of the important GNU/Linux utilities pertaining to SNMP.

Monday, October 4, 2010

Mausezahn: The Versatile Packet Crafter

What do you resort to when you want to bombard a network interface with a broadcast packet storm, in a controlled manner? Get Mausezahn!


Does it sound like the title of a monarch from medieval times? Actually, it is one of the most versatile packet crafters available today.

Mausezahn literally means 'mouse tooth': Maus (mouse) + Zahn (tooth)! Well, if you get to use the tool and have observed mice, you might understand the peculiarity of the name :)

mausezahn (mz) is one of the most versatile and robust packet generators around. It is used in various special scenarios:

  • Versatile and fully customizable packet generation
  • Penetration testing of firewalls and IDS
  • Finding weaknesses in network software or appliances
  • Creation of malformed packets to verify whether a system processes a given protocol correctly, i.e., to create the "impossible packets"!
  • Didactical demonstrations as a lab utility
  • Performing stress tests on network equipment
Packets can be crafted with absolute flexibility, using simple options to customize:
  • Type of packet
  • Source and destination ports
  • Source and Destination MAC addresses
  • Source and Destination IP addresses
  • Delay between packets
  • Number of packets
  • ASCII Payload for packets
  • Length of the payload
  • VLANs, QoS (Quality of Service) and CoS (Class of Service) of packets
  • Setting flags in the packets
  • And other advanced packets, like CDP (Cisco Discovery Protocol) packets
You can install mz on Debian-based machines (in Ubuntu, from the Universe repository) by apt'ing for the package.


raghu@fossphosis$ sudo apt-get install mz 
(It has a couple of library dependencies which will be satisfied automatically by APT)


Examples
(All instances of mz must be run as root, or with sudo for a non-root user on a Debian-based machine):

1. Broadcast storm at the maximum rate
raghu@fossphosis$ sudo mz eth0 -c 0 -b bcast
(It generates packets at a rate limited only by the system clock!)

2. Send BPDU packets for $VLAN (1-4094) every 2 seconds, announcing Root Bridge status in an STP (Spanning Tree Protocol) scenario:

raghu@fossphosis$ sudo mz eth0 -c 0 -d 2s -t bpdu vlan=$VLAN

3. Send IP multicast packets to the multicast group 230.1.1.1 using a UDP header with destination port 32000, at a rate of one frame every 10 msec:


raghu@fossphosis$ sudo mz eth0 -c 0 -d 10msec -B 230.1.1.1 -t udp dp=32000 -P "Multicast test packet"
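4. As one more illustrative combination of the options listed earlier (all values here are arbitrary), send 100 UDP packets with crafted source and destination MAC and IP addresses, a 50 msec inter-packet delay and an ASCII payload:

raghu@fossphosis$ sudo mz eth0 -c 100 -d 50msec -a 00:11:22:aa:bb:cc -b 00:aa:bb:cc:dd:ee -A 10.0.0.1 -B 10.0.0.2 -t udp dp=8080 -P "test payload"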

Many more varieties of diverse packets, resulting from the permutations and combinations of all the fields in a packet, can easily be created in the manner shown above.
(Screenshot: a malformed broadcast storm packet generated by mausezahn)
Reference:  http://www.perihel.at/sec/mz/mzguide.html 

Thursday, September 30, 2010

Essential utilities for working with Layer 3 and Layer 4 devices

Networking equipment, including switches and routers, forms the building blocks of any computer network. And if you are one lucky person who gets to meddle with these, there is nothing better than a GNU/Linux machine with a few utilities to help you work these devices inside out, traversing the entire network.

1. Serial port interface for configuration

gtkterm:

This is a cool tool, the equivalent of HyperTerminal on Windows machines. You can select any of the available serial ports and configure the baud rate of the selected port with much ease.

One drawback is that users cannot directly copy/paste into the gtkterm terminal. But we can send raw data by putting it into a text file and using the "send raw data" option in the File menu.

putty:

Needless to mention the versatile nature of this utility, which can be used to send raw data, telnet, ssh, log in remotely and also access serial ports.

I use this utility when I have to log all the configuration I do. Instead of dumping all the configuration to the default log file, we can point it to any desired file (in the terminal):

putty -l /home/USER/LOGFILE


2. Packet sniffing and capture:

wireshark:

If the need is to capture and study every packet impinging on the network interface, there is hardly another utility as useful as Wireshark (in GUI).
On a Debian machine, when not in root mode, Wireshark does not give users the permission to sniff packets on the network interfaces, and hence it must be run with sudo:

sudo wireshark in a terminal, or gksudo wireshark in the run prompt (Alt+F2)



tcpdump:

But when you do not have the privileges of a GUI (which is the case most of the time) and have remotely logged on to a server, the best tool to sniff packets is tcpdump. With its rich filtering options, it is the handiest tool for the job. The comprehensive man page of tcpdump throws more light on its usage.
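As a small taste of those filters (the interface and port here are just examples), the following captures only HTTP traffic on eth0 without resolving names:

raghu@fossphosis:~$ sudo tcpdump -i eth0 -n 'tcp port 80'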



With all these utilities and a GNU/Linux machine, you are well equipped to invade the networks :)

Addendum to original post (30/09/10)


Well, this is the marvel of the community. Through this post and interactions with peers, I discovered an efficient terminal-based packet sniffer and analyzer!

tshark (the terminal version of wireshark)
By running the next command in the terminal, you see beautifully deciphered packet data on your screen :) Pipe it through a pager and read through the entire capture :)

raghu@fossphosis:~$ sudo tshark -i eth0 -V
 
(Screenshot: tshark packet capture)

Tuesday, September 28, 2010

Configuring SSL on Apache2 with self signed SSL certification

Apache is one of the coolest things that has happened in FOSS, and now that a majority of web servers run Apache, it is an inseparable component of the Web.

Apart from its ubiquity, Apache is very handy for simulating and testing network conditions.
Recently, for instance, I had to work with two local Secure Sockets Layer (SSL) web servers, i.e., servers which can be accessed only via "https://URL". Servers using the SSL protocol differ from conventional HTTP servers in that they operate on port 443 instead of port 80 (for http), and are by design more secure.

Configuring Apache2 is straightforward, and I was already running it on my system. To get it to serve content on port 443 (SSL), I had to perform a few simple steps.

(Presumptions: Apache2 is already configured and running on a Debian-based Linux machine.)
(You should perform all these operations as root, or with sudo as shown below.)

1. Creating self-signed certificates for SSL

* Install the ssl-cert package, if it is not installed already. On Debian-based machines:

raghu@fossphosis$ sudo apt-get install ssl-cert

* Create a default self-signed certificate in the default directory /etc/ssl/certs:

raghu@fossphosis$ sudo make-ssl-cert generate-default-snakeoil --force-overwrite

2. Enabling the locations which can be accessed by clients over SSL

To use the default option, perform

raghu@fossphosis$ sudo a2ensite default-ssl

This lets users access content from /var/www over SSL.

3. Enabling the SSL module itself for Apache2:

raghu@fossphosis$ sudo a2enmod ssl

4. Finally, restart the Apache2 server for the changes to take effect:

raghu@fossphosis$ sudo /etc/init.d/apache2 restart

5. Test it by requesting an https URL in your browser:
https://localhost/
(Screenshot: SSL certificate warning in Google Chrome)
Do acknowledge the certificate verification warning (the self-signed certificate cannot be verified by a certificate authority) to browse the content:
(Screenshot: https://localhost served in Google Chrome)
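You can also inspect the handshake and the self-signed certificate from the terminal using OpenSSL's built-in client (the openssl package is assumed to be installed, which it normally is on Debian-based systems):

raghu@fossphosis$ openssl s_client -connect localhost:443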
Happy Secure Socket Layer serving :)