
Conversation

@jagadeesanas2

What changes were proposed in this pull request?

Final PR to automate Hadoop and Spark installation on a single node.

  • Check that JAVA_HOME is set in the environment
  • Running ./autogen.sh automatically creates config.sh with the appropriate field values
  • The user can enter the Spark and Hadoop versions interactively while running ./autogen.sh (sketched below)
  • Validate the slave IPs on the network
  • Automate Hadoop download, configuration, and installation
  • Automate Spark download and installation
  • Interactively run the SparkPi example to verify the Spark installation
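
A minimal sketch of what the JAVA_HOME check and the interactive version prompt in autogen.sh might look like; the default version numbers and the config.sh field names here are assumptions, not necessarily what this PR uses:

```bash
#!/bin/bash
# Sketch only: abort early if JAVA_HOME is not set.
if [ -z "$JAVA_HOME" ]; then
    echo "ERROR: JAVA_HOME is not set in the environment." >&2
    exit 1
fi

# Prompt for the versions to install; the defaults shown are hypothetical.
read -r -p "Enter Hadoop version [3.3.6]: " HADOOP_VERSION
HADOOP_VERSION=${HADOOP_VERSION:-3.3.6}
read -r -p "Enter Spark version [3.5.1]: " SPARK_VERSION
SPARK_VERSION=${SPARK_VERSION:-3.5.1}

# Write the collected values into config.sh for the other scripts to source.
cat > config.sh <<EOF
export JAVA_HOME="$JAVA_HOME"
export HADOOP_VERSION="$HADOOP_VERSION"
export SPARK_VERSION="$SPARK_VERSION"
EOF
echo "config.sh generated."
```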

@jagadeesanas2 (Author) left a comment:

  • Increased the default values of YARN_SCHEDULER_MAX_ALLOCATION_MB and YARN_SCHEDULER_MAX_ALLOCATION_VCORES
  • Validate that JAVA_HOME is set in the environment; if not, exit the script
  • Validate that the default Hadoop ports are not already in use (see the sketch after this list)
  • Clean and validate the .bashrc file for both Hadoop and Spark environment variables
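
A hedged sketch of the port-instance validation described above; the specific port list and the use of ss are assumptions about the implementation:

```bash
#!/bin/bash
# Sketch only: fail fast if any default Hadoop port is already in use.
# The port list is an assumption (common HDFS/YARN defaults), not the PR's exact list.
DEFAULT_PORTS="9000 9870 8088"

for port in $DEFAULT_PORTS; do
    # ss -ltn prints listening TCP sockets; column 4 is "LocalAddress:Port".
    if ss -ltn | awk '{print $4}' | grep -q ":${port}$"; then
        echo "ERROR: port ${port} is already in use; stop the conflicting service first." >&2
        exit 1
    fi
done
echo "All default ports are free."
```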

@jagadeesanas2 (Author) left a comment:

  • Keep all default port values configurable in the config.sh file
  • Validate that the default ports are not already in use
  • Added checkall.sh, which ensures all services are started on the master and slaves (sketched below)
  • Write the script's output to a log file.
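
A minimal sketch of what checkall.sh's service verification and logging might look like; the daemon list and log path are assumptions about what the script checks:

```bash
#!/bin/bash
# Sketch only: confirm the expected Hadoop daemons are up and log the result.
LOG_FILE="checkall.log"
EXPECTED="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"

{
    RUNNING=$(jps)   # jps lists running Java processes by class name
    for daemon in $EXPECTED; do
        if echo "$RUNNING" | grep -qw "$daemon"; then
            echo "OK:      $daemon is running"
        else
            echo "MISSING: $daemon is not running"
        fi
    done
} | tee "$LOG_FILE"   # mirror the report to a log file
```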

@jagadeesanas2 (Author) left a comment:

Closing this PR (#7). Automating Hadoop and Spark installation for both single-node and multi-node setups is handled in a single PR: #8.
