On Wed, Mar 30, 2011 at 1:38 PM, Igor Tatarinov <i...@decide.com> wrote:
> I haven't found a good description of this setting or of the cost of setting
> it too high. Hope somebody can explain.
> I have about a year's worth of data partitioned by date. Using 10 nodes and
> setting xcievers to 5000, I can only save into 100 or so partitions. As a
> result, I have to do 4 rounds of saving data into the underlying partitioned
> table (in s3). That's pretty slow.
> Should I just set xcievers to 1M, or will Hadoop crash as a result? Is each
> xciever really a separate thread?
>
> When will the spelling be corrected? :)
> Thanks a bunch!

The default for hive.exec.max.dynamic.partitions.pernode is 100. It can
be safely raised; I use the settings below, with dfs.datanode.max.xcievers
set to 8k on the datanodes.
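
For reference, this is roughly how that xcievers setting looks in
hdfs-site.xml on the datanodes; 8192 matches the 8k mentioned above, so
tune it for your own cluster:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
  <description>Upper bound on the number of DataXceiver threads a
datanode will run to serve block reads and writes.</description>
</property>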

<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>strict</value>
  <description>In strict mode, the user must specify at least one
static partition in case the user accidentally overwrites all
partitions.</description>
</property>

<property>
  <name>hive.exec.max.dynamic.partitions</name>
  <value>300000</value>
  <description>Maximum number of dynamic partitions allowed to be
created in total.</description>
</property>

<property>
  <name>hive.exec.max.dynamic.partitions.pernode</name>
  <value>10000</value>
  <description>Maximum number of dynamic partitions allowed to be
created in each mapper/reducer node.</description>
</property>
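
With those limits raised, one pass can write a year's worth of date
partitions. A minimal sketch of the per-session setup and insert (table
and column names here are hypothetical; the mode is switched to nonstrict
for the session because the date partition is fully dynamic):

-- enable dynamic partitioning and raise the limits for this session
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=300000;
SET hive.exec.max.dynamic.partitions.pernode=10000;

-- the last column selected (dt) feeds the dynamic partition
INSERT OVERWRITE TABLE events_by_date PARTITION (dt)
SELECT col1, col2, dt
FROM events_staging;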
