Saturday, December 22, 2012

Accessing the Eucalyptus 2.x Database


Hi,
I just thought this might be useful for anyone who wants to play around with the Eucalyptus 2.x database. In these versions Eucalyptus uses HSQLDB as its database.

Step 1: Go into the following folder; we need to retrieve the Eucalyptus database password

# cd <installed eucalyptus folder>/var/lib/eucalyptus/db/
# cat eucalyptus_general.script | grep "CREATE USER SA PASSWORD"

Copy only the password part from the output.

For example, if the output is

CREATE USER SA PASSWORD "81564841F531D0ED828B825158DA0C56723113C49D4E259B6C4CCC1E698934A1F3321062B5187BC76518F85C63FAA4D051A4BE072A093191759C4AF3B5E477576182C49AF8994C115963F3AEC78A706601A17AF2ABE22EC7398CBB046E705743F620ED990B0642196888A0684F49AD2EEF8D34A2F2FA1B5A0D7B231EBD07253AB98F8B9E97E22B0FD6612C37ED666A122ADFD1DCB740478F3CD46AB4F2350E956B7957DAA45EFCD30C9D04A048711A01FC1C2DA7557634C357B5AD9266A18CD4A071670E873651A7E77286E22AEE0736B892EEACE22C1A9E15AD113B9EBC43031EE0AB4856768443B4A3EC32A27CAD37627BDAFBB0C75822E7E58A7C3CD38667"

then the password is

81564841F531D0ED828B825158DA0C56723113C49D4E259B6C4CCC1E698934A1F3321062B5187BC76518F85C63FAA4D051A4BE072A093191759C4AF3B5E477576182C49AF8994C115963F3AEC78A706601A17AF2ABE22EC7398CBB046E705743F620ED990B0642196888A0684F49AD2EEF8D34A2F2FA1B5A0D7B231EBD07253AB98F8B9E97E22B0FD6612C37ED666A122ADFD1DCB740478F3CD46AB4F2350E956B7957DAA45EFCD30C9D04A048711A01FC1C2DA7557634C357B5AD9266A18CD4A071670E873651A7E77286E22AEE0736B892EEACE22C1A9E15AD113B9EBC43031EE0AB4856768443B4A3EC32A27CAD37627BDAFBB0C75822E7E58A7C3CD38667
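
If you would rather not copy the long hex string by hand, a small grep/sed one-liner (a sketch, assuming the CREATE USER line has exactly the format shown above) can extract just the password:

# grep "CREATE USER SA PASSWORD" eucalyptus_general.script | sed 's/.*PASSWORD "\([^"]*\)".*/\1/'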

Step 2: Next we can use the HSQLDB DatabaseManager to access the database

# cd <installed eucalyptus folder>/usr/share/eucalyptus
# java -cp log4j-1.2.15.jar:eucalyptus-db-hsqldb-ext-2.0.2.jar:hsqldb-1.8.0.10.jar:proxool-0.9.1.jar:ehcache-core-1.7.2.jar:commons-logging-1.1.1.jar org.hsqldb.util.DatabaseManager

Note: The above line invokes the HSQLDB DatabaseManager. Make sure all the jar files mentioned above exist in the current folder. With each version of Eucalyptus the jar file versions may change too; the line above is for Eucalyptus 2.0.2.
          Also note that the manager may still start even if some jar files are missing, but then it may not let you log into the database.


Step 3: Fill in the following details in the manager window that pops up

Type: HSQL Database Engine In-Memory
Driver: org.hsqldb.jdbcDriver
URL: jdbc:hsqldb:hsqls://localhost:9001/eucalyptus_general
user: SA
password: <copied in step 1>

In the above URL, eucalyptus_general is one of the databases used by Eucalyptus. The list of database names is given below.

1. eucalyptus_storage
2. eucalyptus_auth
3. eucalyptus_config
4. eucalyptus_general
5. eucalyptus_dns
6. eucalyptus_images
7. eucalyptus_walrus
8. eucalyptus_records

Step 4: Click "OK" to log in, and a window pops up where you can write SQL queries to view/edit the data.
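
For example, to list the user tables of the database you connected to, you can run this query in the manager (INFORMATION_SCHEMA is built into HSQLDB; the table names themselves will differ between Eucalyptus versions):

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.SYSTEM_TABLES WHERE TABLE_TYPE = 'TABLE';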

Accessing the Eucalyptus 3.1/3.2 Database

Hi Folks,
There may be a need to open up the Eucalyptus 3.1/3.2 database, either to tweak it or to add new features based on the existing schema (suggested for good practices :P). So let me tell you the step-by-step way to open it up.

Step 1: We don't have the password that Eucalyptus generated when the cloud was initialized. So first we make a small tweak to recover the secret password

# cd <installed eucalyptus directory>/etc/eucalyptus/cloud.d/scripts
# nano setup_db.groovy

Go to line no: 138, next to the line containing
LOG.debug("Postgres 9.1 command : " + args.toString( ) )

Add these lines

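// write the freshly generated DB password out to a file
// (revert this tweak and delete password.txt once you have noted the password)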
final File passFile1 = new File("<installed eucalyptus directory>/password.txt")
passFile1.write( getPassword() )

Step 2: Restart the Eucalyptus cloud

# <installed eucalyptus directory>/etc/init.d/eucalyptus-cloud restart

Step 3: Now open the file password.txt; its content is your password for database access

# cat <installed eucalyptus directory>/password.txt

Step 4: Next we can straight away try to access the database after completing the following prerequisites

# chmod 777 <installed eucalyptus directory>/var/lib/eucalyptus/db/
# chmod 777 <installed eucalyptus directory>/var/lib/eucalyptus/db/data
# chmod 777 <installed eucalyptus directory>/var/lib/eucalyptus/db/data/.s.PGSQL.8777

# su postgres
# export PGPASSWORD="<password displayed/copied in step 3>"

Step 5: Finally we can use the usual psql commands to view/use the Eucalyptus database

Example: to list the databases present

# psql -l -p 8777 -h <installed eucalyptus directory>/var/lib/eucalyptus/db/data/ -U eucalyptus

Example: to open one of the databases

# psql -p 8777 -h <installed eucalyptus directory>/var/lib/eucalyptus/db/data/ -U eucalyptus eucalyptus_general
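
For example, to list the tables of eucalyptus_general without entering the interactive shell, you can pass psql's standard \dt meta-command with -c:

# psql -p 8777 -h <installed eucalyptus directory>/var/lib/eucalyptus/db/data/ -U eucalyptus eucalyptus_general -c '\dt'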

That's it, this is how you can play around with the Eucalyptus database (version 3.1/3.2).
This should extend to the next Eucalyptus versions too.. :)

Tuesday, December 11, 2012

Reverse proxy using apache2

Hi all, once again a step-by-step guide, this time on how to make your Apache2 server work as a reverse proxy as well.

The steps here are for Debian- or Ubuntu-based distros. Substituting the equivalent system commands on other distros should give you similar results.

Step 1: Install apache2

# apt-get install apache2

Step 2: Install mod_proxy

# apt-get install libapache2-mod-proxy-html

Step 3: Edit httpd.conf

# nano /etc/apache2/httpd.conf

and paste the following contents

  LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
  LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
  Include /etc/apache2/proxy.conf

The above step loads the mod_proxy modules into Apache2 and includes the reverse-proxy mapping file
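
As an aside, on Debian/Ubuntu the same module loading can usually be done with a2enmod instead of hand-editing httpd.conf (the Include line for proxy.conf is still needed):

# a2enmod proxy proxy_http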

Step 4: Create a file proxy.conf

# nano /etc/apache2/proxy.conf

and paste the following data (editing the paths for your application)

  ProxyPass  /<name to map> http://<host ip>:<port>/<application>
  ProxyPassReverse  /<name to map> http://<host ip>:<port>/<application>
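
For instance, with purely hypothetical values, a web application served on port 8080 of host 192.168.1.10 could be mapped like this, making it reachable as http://<apache host>/myapp:

  ProxyPass  /myapp http://192.168.1.10:8080/myapp
  ProxyPassReverse  /myapp http://192.168.1.10:8080/myapp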

Step 5: Restart apache2

# /etc/init.d/apache2 restart

That's it! Your application, which was accessed via a specific port, is now accessible via Apache2's default port with the mapped name. :)

Thursday, October 25, 2012

Accessing XCP through libvirt API

Libvirt has enhanced its support for accessing XCP or XenServer via its API. In order to use this feature you must compile libvirt from source with it enabled. Below are the steps to compile libvirt and its dependencies.

Step 1: There are two modes of accessing XCP via libvirt:
            * From the same machine where XCP is installed
            * From a remote machine

            I would advise you to install libvirt on the same machine where XCP is installed, as some native code might require direct or local access.

Now,

Log in to the machine where you want to install libvirt.

Step 2: First you need to download and install the XenAPI client package (libxenserver) from the following URL

ftp://193.166.3.2/pub/NetBSD/packages/distfiles/libxenserver-5.6.100-1-src.tar.bz2

Step 3: Open a terminal and extract the source

# cd <directory where it is downloaded>
# tar jxvf libxenserver-5.6.100-1-src.tar.bz2
# cd libxenserver

Step 4: Compile libxenserver. To compile, you must have the following dependencies installed

* libxml2-dev
* libcurl3-dev
* zlib1g

On a Debian/Ubuntu-based machine you can install them like this

# apt-get install libxml2-dev
# apt-get install libcurl3-dev
# apt-get install zlib1g

There is a small change you have to make in the 'Makefile'

# vi Makefile

find the following rule (line no: 65)

$(TEST_PROGRAMS): test/%: test/%.o libxenserver.so
        $(CC) $(LDFLAGS) -o $@ $< -L . -lxenserver

and replace it with

$(TEST_PROGRAMS): test/%: test/%.o libxenserver.so
        $(CC) -o $@ $< -L . -lxenserver $(LDFLAGS)

Also find (line no: 73)


$(INSTALL_PROG) libxenserver.so.$(MAJOR).$(MINOR) $(DESTDIR)/usr/$(LIBDIR)

replace with

$(INSTALL_DATA) libxenserver.so.$(MAJOR).$(MINOR) $(DESTDIR)/usr/$(LIBDIR)

and then compile

# make LIBDIR=lib/
# ar rcs libxenserver.a
# make install LIBDIR=lib/
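
To quickly check that the linker can now find the library (ldconfig refreshes the shared-library cache; the grep should list libxenserver.so):

# ldconfig
# ldconfig -p | grep xenserver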

Step 5: Download libvirt from

http://libvirt.org/sources/libvirt-0.10.2.tar.gz

Step 6: Configure and compile libvirt

# cd <directory where it is downloaded>
# tar zxvf libvirt-0.10.2.tar.gz
# cd libvirt-0.10.2
# ./configure --with-esx --with-xenapi=<path to libxenserver that you installed earlier>  --prefix=/opt/libvirt

Here --prefix=/opt/libvirt is optional, if you don't want libvirt installed into system folders like /usr, /bin, etc.

At the end of the above step, check the configure output to see whether libxenserver (XenAPI) support was included or not.

NOTE:

1. While configuring, if you hit an error saying some package is missing, install that package and configure again.

e.g.:

configure: error: You must install device-mapper-devel/libdevmapper >= 1.0.0 to compile libvirt

Here you must install the package

libdevmapper-dev

You always need the development (-dev) version of the package, even if the error message does not say so explicitly.

2. Even though we have already installed libxenserver, sometimes the linker cannot find the libxenserver files.

You can try the following solutions:

# ldconfig

If this does not work, change the following code in the "configure" file

# vi configure

Search for the line

        LIBXENSERVER_LIBS="$LIBXENSERVER_LIBS -lxenserver"

Scroll down to find the "else" part, which looks like this

        if test "$with_xenapi" = "yes"; then
            fail=1
        fi
            with_xenapi=no

Replace it with

        with_xenapi=yes
        LIBXENSERVER_LIBS="$LIBXENSERVER_LIBS -lxenserver"

Then run the configure command again.

Next, to compile:

# make
# make install

You can also check the API capabilities with respect to XenAPI at the following URL.

http://libvirt.org/hvsupport.html
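
Once everything is installed, you can sanity-check the connection with virsh (a sketch: xenapi:// is the URI scheme used by libvirt's XenAPI driver, and virsh should prompt you for the root password):

# /opt/libvirt/bin/virsh -c xenapi://root@<XCP host IP> list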

This concludes our step-by-step installation of libvirt with XenAPI support. :)

Friday, October 19, 2012

Hadoop Installation for Beginners

Well folks,
Here I will give you a step-by-step procedure to install and configure Hadoop (version 1.1.0) on Linux (a Debian-based distro) as a single-node cluster. This guide is for beginners, and you need to be logged into your Linux machine as the root user.

Step 1: First you need to download the Hadoop release from the following URL (note that this link points to 1.1.1, while the rest of this guide uses 1.1.0; substitute the version number to match the tarball you actually downloaded)
http://apache.techartifact.com/mirror/hadoop/common/hadoop-1.1.1/hadoop-1.1.1.tar.gz

Open a terminal

# cd <to directory where you downloaded hadoop>
# mv hadoop-1.1.0.tar.gz /usr/local/
# cd /usr/local/
# tar zxvf hadoop-1.1.0.tar.gz

With the above commands, you have moved the Hadoop tarball to /usr/local and uncompressed it there.

Step 2: Hadoop is a standalone Java-based application and requires Java 1.6 as a dependency, which you need to install yourself (if not already installed).

Step 3: Next you need to add a dedicated user for Hadoop

# adduser hadoop

It prompts you to enter a password and a few other details

             Adding user `hadoop' ...
             Adding new group `hadoop' (1001) ...
             Adding new user `hadoop' (1001) with group `hadoop' ...
             Creating home directory `/home/hadoop' ...
             Copying files from `/etc/skel' ...
             Enter new UNIX password:
             Retype new UNIX password:
             passwd: password updated successfully
             Changing the user information for hadoop
             Enter the new value, or press ENTER for the default
                   Full Name []:
                   Room Number []:
                   Work Phone []:
                  Home Phone []:
                 Other []:
             Is the information correct? [Y/n] Y


Step 4: Change the configuration files

Before we configure, type the following to identify your Java home

# which java

If, for example, the output is

                /usr/bin/java
Then

your JAVA_HOME is /usr
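
Note that /usr/bin/java is often just a symlink; to be sure, you can resolve the real binary path first and derive JAVA_HOME from that:

# readlink -f $(which java)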

Now,

# cd /usr/local/hadoop-1.1.0/
# cd conf/
# vi hadoop-env.sh

Find the following line

                  # export JAVA_HOME=/usr/lib/j2sdk1.5-sun

and replace it as

                  export JAVA_HOME=/usr/


Next paste the following content into the file core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
   <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>

Next paste the following content into the file hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

</configuration>

Next paste the following content into the file mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<configuration>
    <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>

Next, check whether the following content exists as the first line of /etc/hosts; if not, add it

127.0.0.1       localhost <your host name>

Where,
          <your host name> is the hostname of your machine.

you can find the hostname by

# hostname

Step 5: Give the hadoop user ownership of the Hadoop folder

# cd /usr/local/
# chown -R hadoop hadoop-1.1.0

Step 6: Format the HDFS NameNode

# cd /usr/local/hadoop-1.1.0/bin
# su hadoop
# ./hadoop namenode -format

It provides information like

12/10/19 12:00:20 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = java.net.UnknownHostException: vignesh: vignesh
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1394289; compiled by 'hortonfo' on Thu Oct  4 22:06:49 UTC 2012
************************************************************/
12/10/19 12:00:20 INFO util.GSet: VM type       = 64-bit
12/10/19 12:00:20 INFO util.GSet: 2% max memory = 17.77875 MB
12/10/19 12:00:20 INFO util.GSet: capacity      = 2^21 = 2097152 entries
12/10/19 12:00:20 INFO util.GSet: recommended=2097152, actual=2097152
12/10/19 12:00:21 INFO namenode.FSNamesystem: fsOwner=hadoop
12/10/19 12:00:21 INFO namenode.FSNamesystem: supergroup=supergroup
12/10/19 12:00:21 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/10/19 12:00:21 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/10/19 12:00:21 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/10/19 12:00:21 INFO namenode.NameNode: Caching file names occuring more than 10 times 
12/10/19 12:00:21 INFO common.Storage: Image file of size 112 saved in 0 seconds.
12/10/19 12:00:21 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
12/10/19 12:00:21 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
12/10/19 12:00:21 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
12/10/19 12:00:21 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: vignesh: vignesh
************************************************************/

(Note: the DataNode does not need a separate format step; only the NameNode is formatted, and the DataNode initializes its own storage directory the first time it starts.)

Step 7: Set up passwordless SSH for the hadoop user

# ssh-keygen -t rsa -P ""

Press enter when it prompts

Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 

and it generates the key as

Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
f7:e3:1d:e6:2d:7d:23:2f:64:ea:1c:77:99:26:af:e0 hadoop@vignesh
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|                 |
|        S .      |
|         . . o  o|
|           o*oo* |
|          oo+B*+o|
|          .E..B++|
+-----------------+

# cat /home/hadoop/.ssh/id_rsa.pub > /home/hadoop/.ssh/authorized_keys
# ssh hadoop@localhost

type "yes" if it prompts as below

The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 7e:4a:40:b5:57:06:0d:83:34:58:80:80:c3:e7:18:20.
Are you sure you want to continue connecting (yes/no)? 

After this it logs you in as the hadoop user, and you have successfully configured passwordless SSH

Now type

# exit

Type exit only once here: it closes the SSH session you just opened, so you are still the hadoop user.

Step 8: Start Hadoop services

# ./start-all.sh

It starts five services:

 NameNode
 SecondaryNameNode
 DataNode
 JobTracker
 TaskTracker

You can check if the services are running by

# jps

You should see something like this; if not, something has gone wrong (the logs under /usr/local/hadoop-1.1.0/logs will tell you more)

26207 TaskTracker
26427 Jps
25847 DataNode
25986 SecondaryNameNode
26089 JobTracker
25738 NameNode

Browse to

       http://localhost:50030

for the Hadoop Map/Reduce administration page (optional)

Browse to

       http://localhost:50070

for browsing the HDFS file system (optional)

Step 9: Try out HDFS and MapReduce with the following commands

# ./hadoop dfsadmin -report

This command gives you a status report on your HDFS system

# ./hadoop fs -mkdir test

This command creates a directory "test" in your HDFS file system

# vi test_input

 In the text editor type

 "hi all hello all"

 save and exit the file

# ./hadoop fs -put test_input test/input

This command copies the file (test_input) that we just created into the HDFS file system (inside the test folder)

# ./hadoop fs -ls test

This command lists all the files in the "test" folder of the HDFS file system.

# ./hadoop jar ../hadoop-examples-1.1.0.jar wordcount test/input test/output

This command runs a MapReduce program (word count) on your input and generates the output under "test/output" in the HDFS file system.
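
You can also print the result straight from the command line; for our input "hi all hello all" the counts should come out like this:

# ./hadoop fs -cat test/output/part-r-00000

all     2
hello   1
hi      1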

You can check the output at the following URL

http://localhost:50070

Browse the filesystem -> user -> hadoop -> test -> output -> part-r-00000

Step 10: To stop hadoop (optional)

# ./stop-all.sh

Here ends our step-by-step guide to working with Hadoop (for beginners).