Hi, I'm running a Hive job on a 24TB dataset (34560 partitions). Only about 500 to 1000 of the 80000 total mappers succeed; the rest take forever, with their status stuck at 0% the whole time. Are there any limits on the number of partitions or on the dataset size? Are there any parameters I should set here?
The same job succeeds on 18TB (25920 partitions). I have already set the following in my Hive query: set mapreduce.jobtracker.split.metainfo.maxsize=-1;

Regards,
Srinivas
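P.S. For reference, this is roughly how the setting sits at the top of the script, together with two split-combining parameters I am only considering adding to reduce the mapper count (I have not tried them yet and am not sure they are relevant here):

-- already in the failing query
set mapreduce.jobtracker.split.metainfo.maxsize=-1;

-- not tried yet: combine small input splits so fewer mappers are launched
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
set mapreduce.input.fileinputformat.split.maxsize=268435456; -- 256 MB per split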