Copilot commented on code in PR #38430:
URL: https://github.com/apache/superset/pull/38430#discussion_r2888065302
##########
superset-frontend/src/explore/components/controls/DatasourceControl/DatasourceControl.test.tsx:
##########
@@ -42,8 +54,17 @@ beforeEach(() => {
afterEach(() => {
window.location = originalLocation;
+
+ const unmatched = fetchMock.callHistory.calls('unmatched');
+ if (unmatched.length > 0) {
+ const urls = unmatched.map(call => call.url).join(', ');
+ throw new Error(
+ `fetchMock: ${unmatched.length} unmatched call(s): ${urls}`,
Review Comment:
If there are unmatched fetch-mock calls, this `afterEach` throws before
calling `fetchMock.clearHistory().removeRoutes()` and `jest.restoreAllMocks()`.
Jest will still continue running later tests in this file, so leaving
routes/history/mocks in place can cause cascading failures that obscure the
original unmatched-call error. Consider doing cleanup in a `finally` block (or
clearing routes/history before throwing) so test isolation is preserved even
when this guard fails.
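A minimal sketch of the try/finally restructuring this comment suggests. The guard logic is extracted into a plain function so the shape is visible outside Jest; `FetchMockLike`, `unmatchedUrls`, and `clearHistoryAndRoutes` are illustrative stand-ins for the real fetch-mock instance and `jest.restoreAllMocks()`, not the test file's actual API:

```typescript
// Stand-in for the parts of fetch-mock the afterEach guard touches.
interface FetchMockLike {
  unmatchedUrls(): string[];
  clearHistoryAndRoutes(): void;
}

// Throw on unmatched calls, but always run cleanup first-class in
// finally so a failed guard cannot leak routes/history/mocks into
// later tests in the same file.
function assertNoUnmatchedCalls(
  fm: FetchMockLike,
  restoreMocks: () => void,
): void {
  try {
    const urls = fm.unmatchedUrls();
    if (urls.length > 0) {
      throw new Error(
        `fetchMock: ${urls.length} unmatched call(s): ${urls.join(', ')}`,
      );
    }
  } finally {
    // Runs whether or not the guard threw, preserving test isolation.
    fm.clearHistoryAndRoutes();
    restoreMocks();
  }
}
```

In the real `afterEach`, the `finally` body would be `fetchMock.clearHistory().removeRoutes()` and `jest.restoreAllMocks()`.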
##########
superset-frontend/src/explore/components/controls/DatasourceControl/DatasourceControl.test.tsx:
##########
@@ -570,235 +578,87 @@ test('should show forbidden dataset state', () => {
expect(screen.getByText(error.statusText)).toBeVisible();
});
-test('should allow creating new metrics in dataset editor', async () => {
- const newMetricName = `test_metric_${Date.now()}`;
- const mockDatasourceWithMetrics = {
- ...mockDatasource,
- metrics: [],
- };
-
+test('should fire onDatasourceSave when saving with new metrics', async () => {
Review Comment:
This test name suggests the test is creating new metrics via the editor UI,
but the test actually just stubs the dataset GET response returned after save
and asserts the callback receives that object. Consider renaming the test to
reflect what it verifies (callback propagation), or adjust the mocked
DatasourceEditor to drive `onChange` so the save flow reflects an actual metric
addition.
```suggestion
test('should fire onDatasourceSave with datasource containing new metrics', async () => {
```
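The alternative the comment mentions can be sketched as follows: rather than only stubbing the post-save GET, the mocked DatasourceEditor would pass an actually-modified datasource to `onChange`, so the save flow carries a real metric addition. `addMetric` and the shapes below are illustrative assumptions, not helpers from the test file:

```typescript
interface Metric {
  id: number;
  metric_name: string;
}

interface DatasourceLike {
  metrics: Metric[];
}

// What a mocked DatasourceEditor could hand to onChange when the user
// adds a metric in the Metrics tab: a fresh datasource object whose
// metrics array includes the new entry.
function addMetric<T extends DatasourceLike>(ds: T, metric: Metric): T {
  return { ...ds, metrics: [...ds.metrics, metric] };
}
```

The save assertion could then check `onDatasourceSave` received exactly the object the mock produced, rather than whatever the stubbed GET returns.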
##########
superset-frontend/src/explore/components/controls/DatasourceControl/DatasourceControl.test.tsx:
##########
@@ -570,235 +578,87 @@ test('should show forbidden dataset state', () => {
expect(screen.getByText(error.statusText)).toBeVisible();
});
-test('should allow creating new metrics in dataset editor', async () => {
- const newMetricName = `test_metric_${Date.now()}`;
- const mockDatasourceWithMetrics = {
- ...mockDatasource,
- metrics: [],
- };
-
+test('should fire onDatasourceSave when saving with new metrics', async () => {
const props = createProps({
- datasource: mockDatasourceWithMetrics,
- });
-
- // Mock API calls for dataset editor
- fetchMock.get(getDbWithQuery, { response: { result: [] } });
-
- fetchMock.get(getDatasetWithAll, { result: mockDatasourceWithMetrics });
-
- fetchMock.put(putDatasetWithAll, {
- result: {
- ...mockDatasourceWithMetrics,
- metrics: [{ id: 1, metric_name: newMetricName }],
- },
+ datasource: { ...mockDatasource, metrics: [] },
});
- SupersetClientGet.mockImplementationOnce(
- async () => ({ json: { result: [] } }) as any,
- );
-
render(<DatasourceControl {...props} />, {
useRedux: true,
useRouter: true,
});
- // Open datasource menu and click edit dataset
- userEvent.click(screen.getByTestId('datasource-menu-trigger'));
- userEvent.click(await screen.findByTestId('edit-dataset'));
-
- // Wait for modal to appear and navigate to Metrics tab
- await waitFor(() => {
- expect(screen.getByText('Metrics')).toBeInTheDocument();
+ await openAndSaveChanges({
+ ...mockDatasource,
+ metrics: [{ id: 1, metric_name: 'test_metric' }],
});
- userEvent.click(screen.getByText('Metrics'));
-
- // Click add new metric button
- const addButton = await screen.findByTestId('crud-add-table-item');
- userEvent.click(addButton);
-
- // Find and fill in the metric name
- const nameInput = await screen.findByTestId('textarea-editable-title-input');
- userEvent.clear(nameInput);
- userEvent.type(nameInput, newMetricName);
-
- // Save the modal
- userEvent.click(screen.getByTestId('datasource-modal-save'));
-
- // Confirm the save
- const okButton = await screen.findByText('OK');
- userEvent.click(okButton);
-
- // Verify the onDatasourceSave callback was called
await waitFor(() => {
- expect(props.onDatasourceSave).toHaveBeenCalled();
+ expect(props.onDatasourceSave).toHaveBeenCalledWith(
+ expect.objectContaining({
+ metrics: [{ id: 1, metric_name: 'test_metric' }],
+ }),
+ );
});
});
-test('should allow deleting metrics in dataset editor', async () => {
- const existingMetricName = 'existing_metric';
- const mockDatasourceWithMetrics = {
- ...mockDatasource,
- metrics: [{ id: 1, metric_name: existingMetricName }],
- };
-
+test('should fire onDatasourceSave when saving with removed metrics', async () => {
Review Comment:
This test name implies metrics were removed through the editor, but the test
currently only stubs the post-save dataset GET response with `metrics: []` and
checks the callback argument. Consider renaming to describe the actual behavior
under test, or have the DatasourceEditor mock invoke `onChange` to simulate
removing a metric before saving.
```suggestion
test('should fire onDatasourceSave when saving with empty metrics', async () => {
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]