Add JournalNode to Ambari Managed Hadoop Cluster

By now, Apache Ambari needs no introduction. If you work in Big Data, Ambari should not be foreign to you. The focus of this article, however, is to show how you can add a JournalNode to an Ambari managed Hadoop cluster.

Apache Ambari Overview

 
Apache Ambari is a framework for provisioning, managing, and monitoring Apache Hadoop deployments. It brings everything in the Hadoop ecosystem under one roof, either through its easy-to-use web-based user interface or through its collection of RESTful APIs. Ambari’s web interface was built with simplicity in mind; the goal is to make provisioning, managing, and monitoring as easy as possible. The web interface is actually calling the Ambari APIs, which is where the magic really happens. These APIs can be used to automate a cluster installation with absolutely zero user interaction.
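As a quick taste of that API, the following call is a minimal sketch, assuming Ambari's default port 8080 and the default admin:admin credentials; it lists the clusters Ambari currently manages:

curl -u admin:admin -H 'X-Requested-By: Ambari' -X GET http://localhost:8080/api/v1/clusters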


Hadoop & Apache Ambari

 
Ambari is a completely open-source tool, which is its biggest advantage. This means a number of contributors are helping Ambari become a better and more robust framework. Ambari provides a platform you can use to provision, manage, and monitor a Hadoop cluster.

Ambari – even with all its prowess in managing a Hadoop cluster – is still evolving. The current distribution of Ambari provides a near-complete solution for provisioning a resilient and highly available Hadoop cluster. One of the key requirements for keeping Hadoop highly available is setting up the HDFS High Availability architecture. Ambari provides an interface to set up HDFS High Availability (also known as NameNode HA).


Hadoop HDFS High Availability Architecture

 
Before we proceed with how to add a JournalNode, I would like to provide a brief overview of what the HDFS High Availability architecture looks like.

The following diagram provides a layout of HDFS High Availability Architecture.
 

HDFS High Availability Architecture Overview

 
In this architecture, we have two NameNodes configured. One NameNode is in the Active state, while the other is in a Standby state. The active NameNode performs all the client operations, which includes serving read and write requests. The standby NameNode maintains its state in order to ensure a fast failover in the event the active NameNode goes down.

The standby NameNode communicates and synchronizes with the active NameNode through a group of hosts called JournalNodes. The JournalNodes receive the filesystem journal (edit log transactions) from the active NameNode. The standby NameNode reads these journals and updates its namespace to stay in sync with the active NameNode.

The architecture allows all the DataNodes to send their block reports and heartbeats to both NameNodes. This ensures the standby NameNode is “completely” in sync with the active NameNode.

The ZooKeeper Failover Controller (ZKFC) is responsible for health monitoring of the NameNode service. It also triggers an automatic failover when the active NameNode becomes unavailable. Each NameNode runs a ZKFC process. ZKFC uses the ZooKeeper service for coordination in determining which NameNode is active and when to fail over to the standby NameNode.

The Quorum Journal Manager (QJM) in the NameNode writes the filesystem journal to the JournalNodes. A journal record is considered successfully written only when it is written to a majority of the JournalNodes. For example, with three JournalNodes, a write must be acknowledged by at least two of them, so the setup tolerates the loss of one JournalNode. At any given point in time, only one NameNode can perform the quorum write. In the event of a split-brain scenario, this ensures that the filesystem metadata will not be corrupted by two active NameNodes.

Add JournalNode to Ambari

 
The HDFS High Availability setup wizard gives you an option to assign a few nodes to run as JournalNodes. In most cases you will never feel the need to add more JournalNodes, and HDFS High Availability works perfectly. However, over time, as the cluster grows in size and number of nodes, or to gain more resilience, you may want to add more JournalNodes.

As of the time of writing this article, Ambari does not provide a way to add JournalNodes to an existing HDFS High Availability setup in a Hadoop cluster. The good thing is that Ambari is RESTful!

Ambari’s RESTful API gives us capabilities that the Ambari web interface does not. The following is a brief overview of the entire process:
 

  • Identify a node that will shoulder the responsibility of a JournalNode.
  • Assign the JournalNode role to the host using a POST API call.
  • Install the JournalNode on the new host using a PUT API call.
  • Update the “Shared Edits” location in hdfs-site (the dfs.namenode.shared.edits.dir parameter) to add the new JournalNode.
  • Create the JournalNode directory on the host (including the directory for the HDFS nameservice). Sync the contents of the ‘current’ directory (basically all edits, edits_inprogress, epoch, and committed txn files). Make sure the ownership is set correctly (owned by user hdfs and group hadoop).
  • If the cluster is secure, make sure you create the journalnode principal and the HTTP principal (which should be present in the SPNEGO keytab) for this host. Ensure the keytab files are present and have the right ownership and permissions.
  • Lastly, start the JournalNode on the new host and restart the HDFS components.

We will now discuss each of the above steps in detail.

Please note that I will be using placeholder values in these commands. You will have to replace them with the values for your setup. Replacements you will have to make:

* admin:admin – the username and password you have set for Ambari. I am using the default username and password in my examples.
* CLUSTER_NAME – the cluster name you have set for your cluster in Ambari.
* NEW_JN_NODE – the actual hostname of the host you have added. It can be an FQDN or a short name, depending on how you added the host.

Once you have identified the host that will be your new JournalNode host, you can run the following set of curl commands. I prefer to work on the host where the Ambari server is running, so when I make an API call to Ambari, the host address in the curl command is ‘localhost’.
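Before assigning the role, it is worth confirming that Ambari already knows about the host. A minimal check, using the same placeholders described above, is a GET against the hosts endpoint:

curl -u admin:admin -H 'X-Requested-By: Ambari' -X GET http://localhost:8080/api/v1/hosts/NEW_JN_NODE

If this returns a 404, the host has not been registered with Ambari yet and needs to be added first.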

Assign JournalNode

 
Assign the role of JournalNode using the following command:

curl -u admin:admin -H 'X-Requested-By: Ambari' -X POST http://localhost:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_JN_NODE/host_components/JOURNALNODE
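If you want to confirm the role was assigned, you can issue a GET against the same resource; a freshly assigned, not-yet-installed component should report a state of INIT. This check is optional:

curl -u admin:admin -H 'X-Requested-By: Ambari' -X GET http://localhost:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_JN_NODE/host_components/JOURNALNODE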

Install JournalNode

 
Now go ahead and install the JournalNode.

curl -u admin:admin -H 'X-Requested-By: Ambari' -X PUT -d '{"RequestInfo":{"context":"Install JournalNode"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_JN_NODE/host_components/JOURNALNODE
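The PUT call returns a JSON body with a request href containing a request id. If you want to watch the install progress from the command line rather than the web UI, you can poll that request; here 42 is just a stand-in for whatever id your call returned:

curl -u admin:admin -H 'X-Requested-By: Ambari' -X GET http://localhost:8080/api/v1/clusters/CLUSTER_NAME/requests/42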

Update HDFS Configuration

 
Log in to the Ambari Web UI and modify the HDFS configuration. Search for dfs.namenode.shared.edits.dir and add the new JournalNode. Make sure you do not break the format of the JournalNode list. The following is the format of a typical three-JournalNode shared edits definition.
 

qjournal://my-jn-node-1.host.com:8485;my-jn-node-2.host.com:8485;my-jn-node-3.host.com:8485/MyLab
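If you would rather script this step than use the web UI, many Ambari releases ship a configuration helper at /var/lib/ambari-server/resources/scripts/configs.sh (newer releases carry configs.py instead); check which one your installation has before relying on this sketch:

# hedged sketch: update the shared edits list via the configs.sh helper, if your Ambari version ships it
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost CLUSTER_NAME hdfs-site "dfs.namenode.shared.edits.dir" "qjournal://my-jn-node-1.host.com:8485;my-jn-node-2.host.com:8485;my-jn-node-3.host.com:8485/MyLab"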

 

Create JournalNode Directories

Time to create the required directory structure on the new JournalNode. You have to create this directory structure based on your cluster installation. If unsure, you can find the value in the $HADOOP_CONF/hdfs-site.xml file; look for the value of the dfs.journalnode.edits.dir parameter. In my case, it happens to be /hadoop/qjournal/namenode/.

Make sure you add the HDFS nameservice directory. You can find this value in the $HADOOP_CONF/hdfs-site.xml file as well, under the dfs.nameservices parameter. In my example, it is “MyLab”, so I will create the directory structure as /hadoop/qjournal/namenode/MyLab.

Copy or sync the ‘current’ directory under the shared edits location from an existing JournalNode. Make sure that the ownership of all these newly created directories and synced files is correct.
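Putting this section together, a minimal sketch of what I would run on the new JournalNode looks like the following; the paths and the source hostname (my-jn-node-1.host.com) come from my lab values and must be adapted to yours:

# create the edits directory, including the nameservice directory
mkdir -p /hadoop/qjournal/namenode/MyLab
# pull the 'current' directory (edits, edits_inprogress, epoch and committed txn files) from an existing JournalNode
rsync -a my-jn-node-1.host.com:/hadoop/qjournal/namenode/MyLab/current /hadoop/qjournal/namenode/MyLab/
# set the ownership right: user hdfs, group hadoop
chown -R hdfs:hadoop /hadoop/qjournal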

Kerberos Enabled Cluster Or Secure Cluster Only

If you don’t have a secure cluster, skip ahead to the next section.

As mentioned above, the process for a secure cluster is the same, with the additional caveat of ensuring the right principals and their keytab files are present. Based on the journalnode principal and the HTTP principal pattern you have defined for the cluster, create the principals for this host on the KDC for your cluster, and get the keytabs for those principals.

Verify that the keytab actually works using klist and kinit.
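For an MIT KDC, the flow looks roughly like the sketch below. The principal names, the EXAMPLE.COM realm, and the keytab paths are assumptions modelled on common HDP-style defaults; use the names your cluster’s Kerberos configuration actually defines:

# create the JournalNode and HTTP (SPNEGO) principals for the new host -- assumed names and realm
kadmin -q "addprinc -randkey jn/NEW_JN_NODE@EXAMPLE.COM"
kadmin -q "addprinc -randkey HTTP/NEW_JN_NODE@EXAMPLE.COM"
# export the keytabs -- assumed paths
kadmin -q "ktadd -k /etc/security/keytabs/jn.service.keytab jn/NEW_JN_NODE@EXAMPLE.COM"
kadmin -q "ktadd -k /etc/security/keytabs/spnego.service.keytab HTTP/NEW_JN_NODE@EXAMPLE.COM"
# lock down ownership and permissions on the service keytab
chown hdfs:hadoop /etc/security/keytabs/jn.service.keytab
chmod 400 /etc/security/keytabs/jn.service.keytab
# verify the keytab actually works
klist -kt /etc/security/keytabs/jn.service.keytab
kinit -kt /etc/security/keytabs/jn.service.keytab jn/NEW_JN_NODE@EXAMPLE.COM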

Moment of Truth!

Time to fire up the JournalNode on the new host. It’s the moment of truth!

Start the JournalNode service using the Ambari Web UI or an API call.

I prefer the Ambari API. I would use something like this.

curl -u admin:admin -H 'X-Requested-By: Ambari' -X PUT -d '{"RequestInfo":{"context":"Start JournalNode"},"Body":{"HostRoles":{"state":"STARTED"}}}' http://localhost:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_JN_NODE/host_components/JOURNALNODE

Again, don’t forget the substitutions I talked about for the curl command above.

Restart the HDFS components for the changes to take effect. With the HDFS High Availability setup, restarting the NameNodes one after the other should accomplish this with no outage. But if you can afford downtime, you can restart all of HDFS in one go.

Verify and cross-check that the edits, epoch files, and committed txns are being written across all the JournalNodes.
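A quick way to eyeball this is to compare the newest files and the committed transaction id across the JournalNodes; the path below is my lab value:

# run on each JournalNode and compare the output
ls -lt /hadoop/qjournal/namenode/MyLab/current | head
cat /hadoop/qjournal/namenode/MyLab/current/committed-txid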

Voila! There you have it! A master node with a JournalNode on it.
