On Mon, Mar 5, 2018 at 8:42 PM, Joao De Almeida Pereira <jdealmeidapere...@pivotal.io> wrote:
> Hello Khushboo,
> Looks like we are almost done, just missing one PEP-8 issue:
> → pycodestyle --config=.pycodestyle pgadmin/tools
> pgadmin/tools/sqleditor/tests/test_poll_query_tool.py:46: [E126] continuation line over-indented for hanging indent
> 1       E126 continuation line over-indented for hanging indent
> 1
>

Thanks Joao. Please find the attached updated patch.

> The tests run successfully in our CI pipeline.
>
> Thanks
> Joao

Thanks,
Khushboo

> On Sun, Mar 4, 2018 at 11:51 PM Khushboo Vashi <khushboo.va...@enterprisedb.com> wrote:
>
>> On Fri, Mar 2, 2018 at 6:55 PM, Dave Page <dp...@pgadmin.org> wrote:
>>
>>> Could you rebase this please? It no longer applies.
>>>
>> Please find the attached updated patch.
>>
>>> Thanks.
>>>
>>> On Thu, Mar 1, 2018 at 5:56 AM, Khushboo Vashi <khushboo.va...@enterprisedb.com> wrote:
>>>
>>>> Hi Joao,
>>>>
>>>> Thanks for reviewing.
>>>>
>>>> On Wed, Feb 28, 2018 at 8:55 PM, Joao De Almeida Pereira <jdealmeidapere...@pivotal.io> wrote:
>>>>
>>>>> Hello Khushboo,
>>>>> After reviewing the patch I have the gut feeling that we do not have
>>>>> enough test coverage on this issue, especially due to the intricate
>>>>> while loop and conditions around the polling.
>>>>> I think that this deserves unit tests around it. When I say Unit Test,
>>>>> I am not talking about executing queries against the database, but
>>>>> doing some stubbing of the database so that we can control the flow
>>>>> that we want.
>>>>>
>>>> You are right. It needs more unit testing. I have checked the below
>>>> scenarios:
>>>> 1. Returns 2 notices with data output
>>>> 2. Returns 1000 notices with data output
>>>> 3. No notices with data output
>>>>
>>>> By running the above, I have verified that the returned notices are
>>>> accurate each time, that no old notices get appended, and that the
>>>> behaviour is not affected by the number of messages (few, none or many).
>>>> Also, with the updated patch, I have made sure that all these queries
>>>> run within a single transaction id (same connection).
>>>>
>>>> So, please let me know if you think I can add more things to this.
>>>>
>>>>> It is a temptation to try to always do a Feature Test to test what we
>>>>> want because it is "easier" to write and ultimately it is what users
>>>>> see, but while 1 Feature Test runs we can run 200 Unit Tests that give
>>>>> us much more confidence that the code is doing what we expect it to do.
>>>>>
>>>> Right, so I added regression tests instead of feature tests.
>>>>
>>>>> This being said, I ran the tests on the CI Pipeline and all tests pass.
>>>>> Running pycodestyle fails due to some long lines in
>>>>> psycopg2/__init__.py. I believe that it is not what you changed, but
>>>>> since you were changing the file it can be fixed; it is just:
>>>>>
>>>>> pgadmin/utils/driver/psycopg2/__init__.py:1276: [E501] line too long (81 > 79 characters)
>>>>> pgadmin/utils/driver/psycopg2/__init__.py:1277: [E501] line too long (91 > 79 characters)
>>>>> pgadmin/utils/driver/psycopg2/__init__.py:1282: [E501] line too long (81 > 79 characters)
>>>>> pgadmin/utils/driver/psycopg2/__init__.py:1283: [E501] line too long (91 > 79 characters)
>>>>> 4       E501 line too long (81 > 79 characters)
>>>>>
>>>> Fixed. Thanks for pointing out.
>>>>>
>>>>> Thanks
>>>>> Joao
>>>>>
>>>>> On Wed, Feb 28, 2018 at 6:49 AM Khushboo Vashi <khushboo.va...@enterprisedb.com> wrote:
>>>>>
>>>>>> On Mon, Feb 26, 2018 at 10:02 PM, Dave Page <dp...@pgadmin.org> wrote:
>>>>>>
>>>>>>> Argh, I ran some tests, but didn't spot any lost messages in the
>>>>>>> tests I ran. I'll revert the patch.
>>>>>>>
>>>>>>> Khushboo;
>>>>>>>
>>>>>>> Please look at the following:
>>>>>>>
>>>>>>> - Fix the patch so it doesn't drop messages.
>>>>>>>
>>>>>> Fixed.
>>>>>> By default, the notices attribute of the psycopg2 connection object
>>>>>> only stores 50 notices. Once it reaches 50, it starts from 1 again.
>>>>>> To fix this, I have changed the notices attribute from a list to a
>>>>>> deque so that more messages can be appended. Currently, I have kept
>>>>>> the maximum number of notices held at a time at 100000 (in a single
>>>>>> poll).
>>>>>>
>>>>>>> - Add regression tests to make sure it doesn't break in the future.
>>>>>>> This may require creating one or more functions that spew out a
>>>>>>> whole lot of notices, and then running a couple of queries and
>>>>>>> checking the output.
>>>>>>>
>>>>>> Added. With this regression test, the current code fails; that has
>>>>>> been taken care of in this patch.
>>>>>>
>>>>>>> - Check the messages panel on the history tab. I just noticed it
>>>>>>> seems to only be showing an even smaller subset of the messages.
>>>>>>>
>>>>>> Tested and no issues found.
>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>> On Mon, Feb 26, 2018 at 4:23 PM, Murtuza Zabuawala <murtuza.zabuaw...@enterprisedb.com> wrote:
>>>>>>>
>>>>>>>> Sent a bit early,
>>>>>>>>
>>>>>>>> You can run 'VACUUM FULL VERBOSE' in the Query Tool and verify the
>>>>>>>> populated messages (pgAdmin3 vs. pgAdmin4).
>>>>>>>>
>>>>>>>> On Mon, Feb 26, 2018 at 9:48 PM, Murtuza Zabuawala <murtuza.zabuaw...@enterprisedb.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Khushboo/Dave,
>>>>>>>>>
>>>>>>>>> With the given commit, I'm again seeing the issue raised in
>>>>>>>>> https://redmine.postgresql.org/issues/1523 :(
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Regards,
>>>>>>>>> Murtuza Zabuawala
>>>>>>>>> EnterpriseDB: http://www.enterprisedb.com
>>>>>>>>> The Enterprise PostgreSQL Company
>>>>>>>>>
>>>>>>>>> On Mon, Feb 26, 2018 at 7:49 PM, Dave Page <dp...@pgadmin.org> wrote:
>>>>>>>>>
>>>>>>>>>> Ensure we pick up the messages from the current query and not a
>>>>>>>>>> previous one.
>>>>>>>>>> Fixes #3094
>>>>>>>>>>
>>>>>>>>>> Branch
>>>>>>>>>> ------
>>>>>>>>>> master
>>>>>>>>>>
>>>>>>>>>> Details
>>>>>>>>>> -------
>>>>>>>>>> https://git.postgresql.org/gitweb?p=pgadmin4.git;a=commitdiff;h=08b3ccc01a4d57e8ea3657f8882a53dcd1b99386
>>>>>>>>>> Author: Khushboo Vashi <khushboo.va...@enterprisedb.com>
>>>>>>>>>>
>>>>>>>>>> Modified Files
>>>>>>>>>> --------------
>>>>>>>>>> web/pgadmin/utils/driver/abstract.py          |  1 +
>>>>>>>>>> web/pgadmin/utils/driver/psycopg2/__init__.py | 64 +++++++++------------------
>>>>>>>>>> 2 files changed, 21 insertions(+), 44 deletions(-)
>>>>>>>>>>
>>>>>>> --
>>>>>>> Dave Page
>>>>>>> Blog: http://pgsnake.blogspot.com
>>>>>>> Twitter: @pgsnake
>>>>>>>
>>>>>>> EnterpriseDB UK: http://www.enterprisedb.com
>>>>>>> The Enterprise PostgreSQL Company
>>>>>>>
>>> --
>>> Dave Page
>>> Blog: http://pgsnake.blogspot.com
>>> Twitter: @pgsnake
>>>
>>> EnterpriseDB UK: http://www.enterprisedb.com
>>> The Enterprise PostgreSQL Company
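
For reference, the deque change described in the thread above can be reproduced outside pgAdmin with a few lines of standalone psycopg2. This is only a rough sketch to illustrate the behaviour, not part of the patch; the DSN is a placeholder and the 100000 cap simply mirrors the ASYNC_NOTICE_MAXLENGTH constant the patch adds:

    # Sketch only: lift psycopg2's default 50-notice cap by swapping the
    # connection's notices list for a bounded deque (what the patch does).
    from collections import deque

    import psycopg2

    ASYNC_NOTICE_MAXLENGTH = 100000  # mirrors the constant added in abstract.py

    conn = psycopg2.connect("dbname=postgres")  # placeholder DSN
    conn.notices = deque([], ASYNC_NOTICE_MAXLENGTH)  # psycopg2 keeps only the last 50 by default

    cur = conn.cursor()
    cur.execute("""
    DO $$
    BEGIN
        FOR i IN 1..1000 LOOP
            RAISE NOTICE 'Count is %', i;
        END LOOP;
    END $$;
    """)

    print(len(conn.notices))  # expect 1000 here; the stock list stops at 50
    conn.close()
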
diff --git a/web/pgadmin/feature_tests/keyboard_shortcut_test.py b/web/pgadmin/feature_tests/keyboard_shortcut_test.py
index 443eff6..bbae194 100644
--- a/web/pgadmin/feature_tests/keyboard_shortcut_test.py
+++ b/web/pgadmin/feature_tests/keyboard_shortcut_test.py
@@ -62,7 +62,9 @@ class KeyboardShortcutFeatureTest(BaseFeatureTest):
             ).key_down(
                 key_combo[2]
             ).key_up(
-                Keys.ALT
+                key_combo[0]
+            ).key_up(
+                key_combo[1]
             ).perform()
 
             print("Executing shortcut: " + self.new_shortcuts[s]['locator'] +
diff --git a/web/pgadmin/tools/sqleditor/tests/test_poll_query_tool.py b/web/pgadmin/tools/sqleditor/tests/test_poll_query_tool.py
new file mode 100644
index 0000000..275ed9c
--- /dev/null
+++ b/web/pgadmin/tools/sqleditor/tests/test_poll_query_tool.py
@@ -0,0 +1,112 @@
+##########################################################################
+#
+# pgAdmin 4 - PostgreSQL Tools
+#
+# Copyright (C) 2013 - 2018, The pgAdmin Development Team
+# This software is released under the PostgreSQL Licence
+#
+##########################################################################
+
+import json
+
+from pgadmin.browser.server_groups.servers.databases.tests import utils as \
+    database_utils
+from pgadmin.utils.route import BaseTestGenerator
+from regression import parent_node_dict
+from regression.python_test_utils import test_utils as utils
+
+
+class TestPollQueryTool(BaseTestGenerator):
+    """ This class will test the query tool polling. """
+    scenarios = [
+        ('When query tool polling returns messages with result data-set',
+         dict(
+             sql=[
+                 """
+DROP TABLE IF EXISTS test_for_notices;
+
+DO $$
+BEGIN
+    RAISE NOTICE 'Hello, world!';
+END $$;
+
+SELECT 'CHECKING POLLING';
+""",
+                 """
+DO $$
+BEGIN
+    FOR i in 1..1000 LOOP
+        RAISE NOTICE 'Count is %', i;
+    END LOOP;
+END $$;
+
+SELECT 'CHECKING POLLING FOR LONG MESSAGES';
+""",
+                 "SELECT 'CHECKING POLLING WITHOUT MESSAGES';"
+             ],
+             expected_message=['NOTICE: table "test_for_notices" ' +
+                               """does not exist, skipping
+NOTICE: Hello, world!
+""",
+                               "\n".join(["NOTICE: Count is {0}".format(i)
+                                          for i in range(1, 1001)]) + "\n",
+                               None],
+             expected_result=['CHECKING POLLING',
+                              'CHECKING POLLING FOR LONG MESSAGES',
+                              'CHECKING POLLING WITHOUT MESSAGES'],
+             print_messages=['2 NOTICES WITH DATASET',
+                             '1000 NOTICES WITH DATASET',
+                             'NO NOTICE WITH DATASET'
+                             ]
+         ))
+    ]
+
+    def runTest(self):
+        """ This function will check messages return by query tool polling.
""" + database_info = parent_node_dict["database"][-1] + self.server_id = database_info["server_id"] + + self.db_id = database_info["db_id"] + db_con = database_utils.connect_database(self, + utils.SERVER_GROUP, + self.server_id, + self.db_id) + if not db_con["info"] == "Database connected.": + raise Exception("Could not connect to the database.") + + # Initialize query tool + url = '/datagrid/initialize/query_tool/{0}/{1}/{2}'.format( + utils.SERVER_GROUP, self.server_id, self.db_id) + response = self.tester.post(url) + self.assertEquals(response.status_code, 200) + + response_data = json.loads(response.data.decode('utf-8')) + self.trans_id = response_data['data']['gridTransId'] + + cnt = 0 + for s in self.sql: + print("Executing and polling with: " + self.print_messages[cnt]) + # Start query tool transaction + url = '/sqleditor/query_tool/start/{0}'.format(self.trans_id) + response = self.tester.post(url, data=json.dumps({"sql": s}), + content_type='html/json') + + self.assertEquals(response.status_code, 200) + + # Query tool polling + url = '/sqleditor/poll/{0}'.format(self.trans_id) + response = self.tester.get(url) + self.assertEquals(response.status_code, 200) + response_data = json.loads(response.data.decode('utf-8')) + + # Check the returned messages + self.assertEquals(self.expected_message[cnt], + response_data['data']['additional_messages']) + # Check the output + self.assertEquals(self.expected_result[cnt], + response_data['data']['result'][0][0]) + + cnt += 1 + + # Disconnect the database + database_utils.disconnect_database(self, self.server_id, self.db_id) diff --git a/web/pgadmin/utils/driver/abstract.py b/web/pgadmin/utils/driver/abstract.py index 32e1c97..271bfec 100644 --- a/web/pgadmin/utils/driver/abstract.py +++ b/web/pgadmin/utils/driver/abstract.py @@ -168,6 +168,8 @@ class BaseConnection(object): ASYNC_WRITE_TIMEOUT = 3 ASYNC_NOT_CONNECTED = 4 ASYNC_EXECUTION_ABORTED = 5 + ASYNC_TIMEOUT = 0.2 + ASYNC_NOTICE_MAXLENGTH = 100000 @abstractmethod def connect(self, **kwargs): diff --git a/web/pgadmin/utils/driver/psycopg2/__init__.py b/web/pgadmin/utils/driver/psycopg2/__init__.py index 81442e4..941a694 100644 --- a/web/pgadmin/utils/driver/psycopg2/__init__.py +++ b/web/pgadmin/utils/driver/psycopg2/__init__.py @@ -37,6 +37,7 @@ from .cursor import DictCursor from .typecast import register_global_typecasters, \ register_string_typecasters, register_binary_typecasters, \ register_array_to_string_typecasters, ALL_JSON_TYPES +from collections import deque if sys.version_info < (3,): @@ -110,7 +111,7 @@ class Connection(BaseConnection): - This method is used to wait for asynchronous connection. This is a blocking call. - * _wait_timeout(conn, time) + * _wait_timeout(conn) - This method is used to wait for asynchronous connection with timeout. This is a non blocking call. 
@@ -310,6 +311,9 @@ class Connection(BaseConnection):
             )
             return False, msg
 
+        # Overwrite connection notice attr to support
+        # more than 50 notices at a time
+        pg_conn.notices = deque([], self.ASYNC_NOTICE_MAXLENGTH)
         self.conn = pg_conn
         self.wasConnected = True
         try:
@@ -1208,6 +1212,7 @@ Failed to reset the connection to the server due to following error:
             )
             return False, msg
 
+        pg_conn.notices = deque([], self.ASYNC_NOTICE_MAXLENGTH)
         self.conn = pg_conn
         self.__backend_pid = pg_conn.get_backend_pid()
@@ -1261,51 +1266,31 @@ Failed to reset the connection to the server due to following error:
 
         Args:
             conn: connection object
-            time: wait time
         """
-
-        state = conn.poll()
-        if state == psycopg2.extensions.POLL_OK:
-            return self.ASYNC_OK
-        elif state == psycopg2.extensions.POLL_WRITE:
-            # Wait for the given time and then check the return status
-            # If three empty lists are returned then the time-out is reached.
-            timeout_status = select.select([], [conn.fileno()], [], 0)
-            if timeout_status == ([], [], []):
-                return self.ASYNC_WRITE_TIMEOUT
-
-            # poll again to check the state if it is still POLL_WRITE
-            # then return ASYNC_WRITE_TIMEOUT else return ASYNC_OK.
+        while 1:
             state = conn.poll()
-            if state == psycopg2.extensions.POLL_WRITE:
-                return self.ASYNC_WRITE_TIMEOUT
-            return self.ASYNC_OK
-        elif state == psycopg2.extensions.POLL_READ:
-            # Wait for the given time and then check the return status
-            # If three empty lists are returned then the time-out is reached.
-            timeout_status = select.select([conn.fileno()], [], [], 0)
-            if timeout_status == ([], [], []):
-                return self.ASYNC_READ_TIMEOUT
-
-            # select.select timeout option works only if we provide
-            # empty [] [] [] file descriptor in select.select() function
-            # and that also works only on UNIX based system, it do not support
-            # Windows Hence we have wrote our own pooling mechanism to read
-            # data fast each call conn.poll() reads chunks of data from
-            # connection object more we poll more we read data from connection
-            cnt = 0
-            while cnt < 1000:
-                # poll again to check the state if it is still POLL_READ
-                # then return ASYNC_READ_TIMEOUT else return ASYNC_OK.
-                state = conn.poll()
-                if state == psycopg2.extensions.POLL_OK:
-                    return self.ASYNC_OK
-                cnt += 1
-            return self.ASYNC_READ_TIMEOUT
-        else:
-            raise psycopg2.OperationalError(
-                "poll() returned %s from _wait_timeout function" % state
-            )
+            if state == psycopg2.extensions.POLL_OK:
+                return self.ASYNC_OK
+            elif state == psycopg2.extensions.POLL_WRITE:
+                # Wait for the given time and then check the return status
+                # If three empty lists are returned then the time-out is
+                # reached.
+                timeout_status = select.select([], [conn.fileno()], [],
+                                               self.ASYNC_TIMEOUT)
+                if timeout_status == ([], [], []):
+                    return self.ASYNC_WRITE_TIMEOUT
+            elif state == psycopg2.extensions.POLL_READ:
+                # Wait for the given time and then check the return status
+                # If three empty lists are returned then the time-out is
+                # reached.
+                timeout_status = select.select([conn.fileno()], [], [],
+                                               self.ASYNC_TIMEOUT)
+                if timeout_status == ([], [], []):
+                    return self.ASYNC_READ_TIMEOUT
+            else:
+                raise psycopg2.OperationalError(
+                    "poll() returned %s from _wait_timeout function" % state
+                )
 
     def poll(self, formatted_exception_msg=False, no_result=False):
         """
@@ -1347,8 +1332,8 @@ Failed to reset the connection to the server due to following error:
                     is_error = True
 
         if self.conn.notices and self.__notices is not None:
-            while self.conn.notices:
-                self.__notices.append(self.conn.notices.pop(0)[:])
+            self.__notices.extend(self.conn.notices)
+            self.conn.notices.clear()
 
         # We also need to fetch notices before we return from function in case
         # of any Exception, To avoid code duplication we will return after
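
For readers less familiar with psycopg2's asynchronous support, the reworked _wait_timeout() above follows the standard poll()/select() pattern: keep calling poll() and, when the socket is not ready, wait on it with a short select() timeout. The snippet below is a simplified standalone illustration of that pattern only, not the pgAdmin implementation; wait_or_timeout() and the placeholder DSN are made up for the example, and the 0.2 second value just mirrors ASYNC_TIMEOUT:

    # Sketch only: generic wait loop for an asynchronous psycopg2 connection.
    import select

    import psycopg2
    import psycopg2.extensions

    ASYNC_TIMEOUT = 0.2  # seconds


    def wait_or_timeout(conn):
        """Return True when the query finishes, False if a poll times out."""
        while True:
            state = conn.poll()
            if state == psycopg2.extensions.POLL_OK:
                return True
            elif state == psycopg2.extensions.POLL_WRITE:
                # Wait until the socket is writable, or give up after the timeout
                if select.select([], [conn.fileno()], [], ASYNC_TIMEOUT) == ([], [], []):
                    return False
            elif state == psycopg2.extensions.POLL_READ:
                # Wait until the socket is readable, or give up after the timeout
                if select.select([conn.fileno()], [], [], ASYNC_TIMEOUT) == ([], [], []):
                    return False
            else:
                raise psycopg2.OperationalError("poll() returned %s" % state)


    # Example usage (placeholder DSN): open an async connection, issue a query,
    # then call wait_or_timeout(conn) until it returns True before fetching.
    # conn = psycopg2.connect("dbname=postgres", async_=1)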