Hello, we have a setup in our data center that requires both MAGPIE_HOSTNAME_CMD and MAGPIE_NO_LOCAL_DIR. However, the no-local-dir patches use myhostname=`hostname`, so they respect neither MAGPIE_HOSTNAME_CMD nor MAGPIE_HOSTNAME_CMD_MAP. As a result, there is a mismatch between the config-file directories Magpie uses and the ones that, e.g., Hadoop or Spark look for.
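For illustration, the mismatch looks roughly like this; only the myhostname=`hostname` line is taken from the actual patches, and the directory layout is a hypothetical example:

```sh
# Magpie side (sketch): with MAGPIE_HOSTNAME_CMD='hostname -s', per-node
# config directories are keyed on the short hostname, e.g.
#   ${HADOOP_CONF_DIR}/node42/          (hypothetical layout)

# Patched software side: the no-local-dir patches resolve the node with
myhostname=`hostname`
# which may return e.g. node42.cluster.example.com, so Hadoop/Spark look
# for a config directory under a name Magpie never used.
```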
I'm willing to contribute a PR to fix this, though I'm not 100% sure what the best approach is. For Hadoop, I think we could add a magpie_hostname function to hadoop-user-functions.sh and use it to set myhostname in all the setup scripts (see the sketch below). For other software (taking Spark as an example), I think we could add MAGPIE_HOSTNAME_CMD and MAGPIE_HOSTNAME_CMD_MAP to spark-env.sh and add a magpie_hostname function to each of the setup scripts.
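To make that concrete, here is a rough sketch of what such a magpie_hostname helper could look like. The function name comes from the proposal above; treating MAGPIE_HOSTNAME_CMD_MAP as a file of "original desired" hostname pairs is my assumption, not Magpie's documented format:

```sh
# Hypothetical magpie_hostname helper for hadoop-user-functions.sh /
# spark-env.sh; the MAP file format is assumed, not from Magpie's docs.
magpie_hostname () {
    local h
    if [ -n "${MAGPIE_HOSTNAME_CMD}" ]; then
        h=$(${MAGPIE_HOSTNAME_CMD})
    else
        h=$(hostname)
    fi
    # Optionally remap, assuming MAGPIE_HOSTNAME_CMD_MAP is a readable
    # file of "original-hostname desired-hostname" pairs.
    if [ -n "${MAGPIE_HOSTNAME_CMD_MAP}" ] && [ -r "${MAGPIE_HOSTNAME_CMD_MAP}" ]; then
        local mapped
        mapped=$(awk -v h="$h" '$1 == h { print $2 }' "${MAGPIE_HOSTNAME_CMD_MAP}")
        [ -n "${mapped}" ] && h="${mapped}"
    fi
    echo "${h}"
}

# The setup scripts would then use:
#   myhostname=$(magpie_hostname)
```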
Does that approach make sense to you, @chu11? And if so, would you like me to draft a PR implementing this?
Hi, you're right: if MAGPIE_HOSTNAME_CMD does not work with MAGPIE_NO_LOCAL_DIR, that just seems to be an oversight.
Any patch you can provide would be welcome. A test variant would also be welcome if you can figure it out (the testsuite is hard to navigate; if it's too cumbersome, I can try to add one).
Hmm, I've spent a bit of time looking into this issue, and unfortunately the fix is not as easy as I'd hoped. Hadoop and Spark need to know their *_CONF_DIR before they can load any user-defined functions or environment variables, but we need those user-defined values before we can resolve the hostname via MAGPIE_HOSTNAME_CMD, and the hostname is what selects the config directory in the first place. So it's a chicken-and-egg problem.
I'm not quite sure how we could patch the Hadoop and Spark scripts to incorporate the alternate hostname commands without hardcoding those values into the patch, which would mean re-patching the software for every different MAGPIE_HOSTNAME_CMD configuration. Do you happen to have any ideas?
Apologies, I forgot to respond when this first came up, and then it got lost.
Hmmm, yeah, it's a tough problem. Perhaps we could define a fixed location for a script, like ~/.magpie/get_my_local_hostname.sh? If that file doesn't exist, assume hostname?
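If it helps, the software-side check for that could be as small as the following sketch. The path and the fall-back-to-hostname behavior are exactly the suggestion above; since it needs no user-defined variables, it can run before *_CONF_DIR is resolved:

```sh
# Sketch of the fixed-location fallback: the patched Hadoop/Spark scripts
# only need a well-known path, not any Magpie environment variables.
# (This checks executability rather than mere existence.)
hostname_script="${HOME}/.magpie/get_my_local_hostname.sh"
if [ -x "${hostname_script}" ]; then
    myhostname=$("${hostname_script}")
else
    myhostname=$(hostname)
fi
```

Magpie could presumably generate that script at job setup time from MAGPIE_HOSTNAME_CMD and MAGPIE_HOSTNAME_CMD_MAP, so the patches themselves stay independent of any particular configuration.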