Hadoop: Install on Ubuntu 14.04


Source: http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php


Hardware & OS Specifications

Preparations needed:

  • WD Red 6TB hard disk for testing
  • An ordinary 64-bit PC
  • Install Ubuntu 14.04 64-bit


Installing Java

The Hadoop framework is written in Java!

cd ~
sudo apt-get update
sudo locale-gen id_ID.UTF-8
sudo apt-get install default-jdk

Check the version:

java -version
java version "1.7.0_51"
OpenJDK Runtime Environment (IcedTea 2.4.6) (7u51-2.4.6-1ubuntu4)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)

Add a Hadoop user

sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
sudo adduser hduser sudo
Adding user `hduser' to group `sudo' ...
Adding user hduser to group sudo
Done.
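
To double-check the new account's group membership (a verification step added here, not in the original guide):

groups hduser
# expected: hduser : hadoop sudo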

Don't forget to add hduser as a sudoer. The easiest way:

sudo su
vi /etc/sudoers

Add:

hduser  ALL=(ALL:ALL) ALL
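
A safer alternative (our suggestion, not in the original guide) is visudo, which validates the syntax before saving and so prevents locking yourself out of sudo:

sudo visudo
# then append the same line at the end of the file:
# hduser  ALL=(ALL:ALL) ALL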

Install ssh

Install the ssh package:

sudo apt-get install ssh

Check the ssh client:

which ssh
/usr/bin/ssh

Check the sshd daemon:

which sshd
/usr/sbin/sshd


Create & Set Up SSH Keys

Hadoop requires SSH access to manage its nodes (remote and local machines). We will configure it to allow authentication using SSH public keys.

su hduser
ssh-keygen -t rsa -P ""


Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
5c:4e:51:87:9f:00:64:a9:42:40:28:f1:b7:39:c5:04 hduser@wdred
The key's randomart image is:
+--[ RSA 2048]----+
|.. oEo.  .=+...  |
|...  o.  ...o.   |
| .. ..o  .o  o . |
|   . +...+    o  |
|    +  .S .      |
|     .           |
|                 |
|                 |
|                 |
+-----------------+


Run:

cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

This appends the newly created public key to the list of authorized keys, so we no longer need a password for ssh.
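
If passwordless login still fails, it is usually a file-permission problem; a common fix (our addition, not in the original guide):

chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys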

Test the connection to localhost:

ssh localhost


The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is b4:47:39:22:4a:c1:fe:0a:af:28:a6:c2:9b:2f:4d:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64)
..
..


Install Hadoop

As an ordinary user, run:

cd ~
wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
tar zxvf hadoop-2.7.1.tar.gz

The download takes quite a while, since the binary we fetch is around 200+ MB :(

sudo su
cd /home/hduser/
mv hadoop-2.7.1 /usr/local/hadoop
chown -R hduser:hadoop /usr/local/hadoop

Set Up the Configuration Files

The following files need to be modified to complete the Hadoop setup:

~/.bashrc
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/etc/hadoop/core-site.xml
/usr/local/hadoop/etc/hadoop/mapred-site.xml.template
/usr/local/hadoop/etc/hadoop/hdfs-site.xml

~/.bashrc:

Before editing .bashrc, we need to know the path where Java is installed, so we can put it in the JAVA_HOME environment variable:

update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
Nothing to configure.

Next, append the following to the end of ~/.bashrc:

vi ~/.bashrc
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_HOME=/usr/local/hadoop/share/hadoop/common
export HADOOP_VERSION=2.7.1
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

Activate it:

source ~/.bashrc

Note that JAVA_HOME must be the path just before '.../bin/', as shown below:

javac -version
javac 1.7.0_51
which javac
/usr/bin/javac
readlink -f /usr/bin/javac
/usr/lib/jvm/java-7-openjdk-amd64/bin/javac

So JAVA_HOME is:

/usr/lib/jvm/java-7-openjdk-amd64/
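
As a convenience (a sketch we add here, not in the original guide), JAVA_HOME can also be derived from the javac symlink instead of being hard-coded, so it survives JDK upgrades:

export JAVA_HOME=$(readlink -f /usr/bin/javac | sed 's:/bin/javac::')
echo $JAVA_HOME
# should print /usr/lib/jvm/java-7-openjdk-amd64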

/usr/local/hadoop/etc/hadoop/hadoop-env.sh

Set JAVA_HOME in the hadoop-env.sh file:

vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
# The java implementation to use.
# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64


/usr/local/hadoop/etc/hadoop/core-site.xml

The /usr/local/hadoop/etc/hadoop/core-site.xml file contains the configuration properties that Hadoop uses when it starts up. It can be used to override Hadoop's default settings.

sudo mkdir -p /app/hadoop/tmp
sudo chown hduser:hadoop /app/hadoop/tmp

Enter the following between the <configuration></configuration> tags:

vi /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
 </property>
</configuration>
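
Note that fs.default.name is the legacy key; Hadoop 2.x deprecates it in favor of fs.defaultFS but still honors it. With Hadoop's bin directory on the PATH, the value can be sanity-checked (our addition, not in the original guide):

hdfs getconf -confKey fs.default.name
# expected: hdfs://localhost:54310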


/usr/local/hadoop/etc/hadoop/mapred-site.xml

By default, the /usr/local/hadoop/etc/hadoop/ folder contains

/usr/local/hadoop/etc/hadoop/mapred-site.xml.template

which has to be renamed/copied to mapred-site.xml:

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

The mapred-site.xml file is used to specify which framework is used for MapReduce. We need to enter the following between the <configuration></configuration> tags:

vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>
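
Note that mapred.job.tracker is a Hadoop 1.x (JobTracker) property. Since this guide installs Hadoop 2.7.1 with YARN, a common alternative (our note, not what the original guide uses) is to run MapReduce on YARN instead:

<configuration>
 <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
 </property>
</configuration>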

/usr/local/hadoop/etc/hadoop/hdfs-site.xml

The /usr/local/hadoop/etc/hadoop/hdfs-site.xml file needs to be configured on every host of the cluster being used. It specifies the directories used as the namenode and the datanode on that host.

Before editing the file, we need to create the two directories that will hold the namenode and datanode data for this Hadoop installation. This can be done with:

sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
sudo chown -R hduser:hadoop /usr/local/hadoop_store

Open the file and enter the configuration between the <configuration></configuration> tags:

vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>
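
A quick sanity check of the new storage directories (our addition, not in the original guide):

ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode
# both should be owned by hduser:hadoop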


Format Hadoop Filesystem

The Hadoop file system needs to be formatted before we can use it. The format command needs write permission on the /usr/local/hadoop_store/hdfs/namenode folder.

hadoop namenode -format


DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/11/09 11:27:08 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = wdred/192.168.0.19
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath =
..
..
..
..
..
15/11/09 11:27:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/11/09 11:27:11 INFO namenode.FSNamesystem: HA Enabled: false
15/11/09 11:27:11 INFO namenode.FSNamesystem: Append Enabled: true
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map INodeMap
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/11/09 11:27:11 INFO namenode.FSDirectory: ACLs enabled? false
15/11/09 11:27:11 INFO namenode.FSDirectory: XAttrs enabled? true
15/11/09 11:27:11 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
15/11/09 11:27:11 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map cachedBlocks
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/11/09 11:27:11 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/11/09 11:27:11 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and  retry cache entry expiry time is 600000 millis
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/11/09 11:27:12 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1834961819-192.168.0.19-1447043232009
15/11/09 11:27:12 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
15/11/09 11:27:12 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/11/09 11:27:12 INFO util.ExitUtil: Exiting with status 0
15/11/09 11:27:12 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at wdred/192.168.0.19
************************************************************/

Note: the hadoop namenode -format command must be run BEFORE we start using Hadoop. If it is run again after Hadoop has been used, it will destroy all the data in the Hadoop file system.
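
Since the output above flags the old script as deprecated, the equivalent modern invocation is:

hdfs namenode -format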


Starting Hadoop

Now we can start the newly installed single-node cluster. We can use start-all.sh, or the pair start-dfs.sh and start-yarn.sh:

start-all.sh
start-dfs.sh
start-yarn.sh

Run:

cd /usr/local/hadoop/sbin
ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh



sudo su hduser
start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/11/09 11:32:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-wdred.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-wdred.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is b4:47:39:22:4a:c1:fe:0a:af:28:a6:c2:9b:2f:4d:57.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-wdred.out
15/11/09 11:33:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-wdred.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-wdred.out


Check whether it is running properly:

jps
7245 NameNode
7380 DataNode
8193 Jps
7895 NodeManager
7607 SecondaryNameNode
7758 ResourceManager

Check the listening ports:

netstat -plten | grep java
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      1001       16893       7245/java       
tcp        0      0 0.0.0.0:8088            0.0.0.0:*               LISTEN      1001       20864       7758/java       
tcp        0      0 127.0.0.1:54680         0.0.0.0:*               LISTEN      1001       18512       7380/java       
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      1001       18506       7380/java       
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      1001       18728       7380/java       
tcp        0      0 0.0.0.0:8030            0.0.0.0:*               LISTEN      1001       20855       7758/java       
tcp        0      0 0.0.0.0:8031            0.0.0.0:*               LISTEN      1001       20848       7758/java       
tcp        0      0 0.0.0.0:8032            0.0.0.0:*               LISTEN      1001       20860       7758/java       
tcp        0      0 0.0.0.0:8033            0.0.0.0:*               LISTEN      1001       21067       7758/java       
tcp        0      0 0.0.0.0:52739           0.0.0.0:*               LISTEN      1001       21059       7895/java       
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      1001       18150       7380/java       
tcp        0      0 127.0.0.1:54310         0.0.0.0:*               LISTEN      1001       17639       7245/java       
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      1001       22344       7895/java       
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      1001       22348       7895/java       
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1001       19321       7607/java
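
Of the ports above, 50070 is the NameNode web UI, 8088 is the YARN ResourceManager web UI, and 54310 is the HDFS RPC port set in core-site.xml. A quick reachability check (our addition, not in the original guide):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070
# expected: 200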

Monitor Job & Task

  • NameNode daemon: http://hdnode01:50070
  • JobTracker daemon: http://hdnode01:50030
  • TaskTracker daemon: http://hdnode01:50060

(The JobTracker and TaskTracker ports date from Hadoop 1.x; on this Hadoop 2.x/YARN installation the equivalent is the ResourceManager web UI on port 8088, as the netstat output above shows.)

Stop Hadoop

We can use stop-all.sh, or the pair stop-dfs.sh and stop-yarn.sh:

stop-all.sh
stop-dfs.sh
stop-yarn.sh


Run:

cd /usr/local/hadoop/sbin
stop-all.sh
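
To confirm that everything has stopped (our addition, not in the original guide):

jps
# only the Jps process itself should remain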

Hadoop Web Interface

Let's start Hadoop again and look at its web UI:

cd /usr/local/hadoop/sbin
start-all.sh

Browse to:

http://localhost:50070

References

  • http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php