Picking up this conversation again: when I hit an error in a test, I saw this nice stack trace:

```
/home/icexelloss/workspace/arrow/cpp/src/arrow/acero/hash_aggregate_test.cc:4681: Failure
Failed
'_error_or_value146.status()' failed with NotImplemented: Consume with nulls
/home/icexelloss/workspace/arrow/cpp/src/arrow/acero/aggregate_node.cc:400  kernels_[i]->consume(&batch_ctx, column_batch)
/home/icexelloss/workspace/arrow/cpp/src/arrow/acero/aggregate_node.cc:419  DoConsume(ExecSpan(exec_batch), thread_index)
/home/icexelloss/workspace/arrow/cpp/src/arrow/acero/aggregate_node.cc:216  handle_batch(batch, segment)
/home/icexelloss/workspace/arrow/cpp/src/arrow/acero/aggregate_node.cc:429  HandleSegments(segmenter_.get(), batch, segment_field_ids_, handler)
/home/icexelloss/workspace/arrow/cpp/src/arrow/acero/source_node.cc:119  output_->InputReceived(this, std::move(batch))
/home/icexelloss/workspace/arrow/cpp/src/arrow/acero/hash_aggregate_test.cc:271  start_and_collect.MoveResult()
```
Is this because of the ARROW_EXTRA_ERROR_CONTEXT option?

On Fri, Mar 24, 2023 at 12:04 PM Li Jin <ice.xell...@gmail.com> wrote:

> Thanks David!
>
> On Tue, Mar 21, 2023 at 6:32 PM David Li <lidav...@apache.org> wrote:
>
>> Not really, other than:
>>
>> * By searching the codebase for relevant strings.
>> * If you are building Arrow from source, you can use the option
>> ARROW_EXTRA_ERROR_CONTEXT [1] when configuring. This will get you a rough
>> stack trace (IIRC, if a function returns the status without using one of
>> the macros, it won't add a line to the trace).
>>
>> [1]: https://github.com/apache/arrow/blob/1ba4425fab35d572132cb30eee6087a7dca89853/cpp/cmake_modules/DefineOptions.cmake#L608-L609
>>
>> On Tue, Mar 21, 2023, at 18:12, Li Jin wrote:
>> > Hi,
>> >
>> > This might be a dumb question, but when Arrow code raises an invalid status,
>> > I observe that it usually pops up to the user without stack information. I
>> > wonder if there are any tricks to show where the invalid status is coming
>> > from?
>> >
>> > Thanks,
>> > Li