As we have set up a centralized local yum repository server on the first node, we now have to copy the repo files pointing to that server on to all the nodes in the cluster. We will only set this up on the first 7 servers, as the last one is reserved for the task where we will see the process of adding additional servers to an existing cluster.
- If you use scp to copy repo files from bigdataserver-1 to the other servers, you need to first run scp to a writable location and then sudo mv the files into /etc/yum.repos.d. You should be aware of this approach for certification purposes.
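For certification practice, the manual scp approach might look like the following sketch (the target hostname and file name here are assumptions based on this setup; adjust to your environment):

```shell
# Sketch of the manual approach, assuming bigdataserver-2 as the target host.
# scp cannot write directly to /etc/yum.repos.d (it requires root), so copy to /tmp first.
scp /etc/yum.repos.d/cloudera-manager.repo bigdataserver-2:/tmp/

# Then move the file into place with sudo on the remote host.
ssh bigdataserver-2 "sudo mv /tmp/cloudera-manager.repo /etc/yum.repos.d/"
```

This has to be repeated per file and per host, which is exactly why a tool like Ansible is preferable for the real run.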
- In enterprises you would use DevOps tools such as Ansible to perform this type of repetitive task.
- In our working directory /home/itversity/setup_cluster on the host, run
mkdir -p files/etc/yum.repos.d
- Make sure bigdataserver-8 is not part of the inventory group all in the hosts file.
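As an illustration, a minimal hosts inventory for this step might look like the sketch below (the host names are assumptions matching this cluster; `all` is Ansible's implicit group covering every host listed, so bigdataserver-8 is simply left out):

```ini
# Hypothetical hosts inventory — bigdataserver-8 deliberately excluded,
# since "ansible all" targets every host listed here.
bigdataserver-1
bigdataserver-2
bigdataserver-3
bigdataserver-4
bigdataserver-5
bigdataserver-6
bigdataserver-7
```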
- Copy the repo file contents into
files/etc/yum.repos.d/cloudera-manager.repo
and files/etc/yum.repos.d/cloudera-cdh5.repo
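For reference, a repo file pointing at the local repository server might look roughly like this sketch (the baseurl is an assumed example; use the actual path you published on bigdataserver-1):

```ini
# files/etc/yum.repos.d/cloudera-manager.repo — hypothetical contents
[cloudera-manager]
name=Cloudera Manager (local repository)
# Assumed example URL — point this at your local yum repository server
baseurl=http://bigdataserver-1/cm
gpgcheck=0
enabled=1
```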
- Synchronize the repo files on to all the hosts and validate by going to the base urls mentioned in the repo files:
ansible all -i hosts -m synchronize -a "src=files/etc/yum.repos.d dest=/etc" --become --private-key=~/.ssh/google_compute_engine
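One way to validate the base urls from every host is to fetch the repository metadata over Ansible, for example (the URL is an assumption; substitute the baseurl from your repo file followed by repodata/repomd.xml):

```shell
# Hypothetical validation: confirm each host can reach the local repository's metadata.
# Replace the URL with the baseurl from your repo file plus repodata/repomd.xml.
ansible all -i hosts -a "curl -s -o /dev/null -w '%{http_code}' http://bigdataserver-1/cm/repodata/repomd.xml" --private-key=~/.ssh/google_compute_engine
```

An HTTP status of 200 from each host indicates the repo files point at a reachable base url.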
- You can also run the
yum repolist
command using Ansible to see the new repositories:
ansible all -i hosts -a "yum repolist" --become --private-key=~/.ssh/google_compute_engine
By now you should have set up the local yum repository on bigdataserver-1, have a decent idea about yum and local repositories, and have copied the repo files pointing to the local yum repository server on to all the nodes.