pgAdmin 4 commit: Use test methods with @patch rather than directly.

2018-03-09 Thread Dave Page
Use test methods with @patch rather than directly.

Branch
--
master

Details
---
https://git.postgresql.org/gitweb?p=pgadmin4.git;a=commitdiff;h=83477224cb52415859f387cc6f8738c880bdbd9b
Author: Joao Pedro De Almeida Pereira 

Modified Files
--
.../utils/tests/test_start_running_query.py| 47 --
1 file changed, 16 insertions(+), 31 deletions(-)
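
For context, the pattern this commit moves to looks roughly like the sketch
below; the base class, test body and assertions are illustrative, not the
actual pgAdmin test code - the point is that @patch.object restores the
original static methods automatically when each test finishes:

from unittest import TestCase
from unittest.mock import patch

from pgadmin.tools.sqleditor.utils.start_running_query import StartRunningQuery


class StartRunningQueryPatchExample(TestCase):
    @patch.object(StartRunningQuery, 'is_begin_required_for_sql_query',
                  return_value=False)
    @patch.object(StartRunningQuery, 'is_rollback_statement_required',
                  return_value=False)
    def test_helpers_are_patched(self, rollback_mock, begin_mock):
        # While the test runs, the patched versions are in effect...
        self.assertFalse(
            StartRunningQuery.is_begin_required_for_sql_query('SELECT 1'))
        # ...and @patch puts the originals back as soon as the test
        # returns, which assigning mocks to the class directly does not do.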



Re: pgAdmin 4 commit: Support EXPLAIN on Greenplum. Fixes #3097

2018-03-09 Thread Dave Page
On Thu, Mar 8, 2018 at 2:22 PM, Joao De Almeida Pereira <
jdealmeidapere...@pivotal.io> wrote:

> Hello Khushboo,
> Completely forgot about this python "feature"...
> Attached is the fix.
>

Thanks, applied.


>
> Just as a side question, does anyone else feel the pain of wanting to run
> a single test using an IDE or the command line and not being able to?
>

Not really - the Python and JS tests are so quick I don't really care (and
with the Python ones, I can execute a single module for even more speed).

What I would *really* like is the ability to run individual feature tests.
That would be very valuable and save me a ton of time.



> We hand-rolled the loader, and that has some implications. Did anyone try
> to use a different launcher like pytest or nose instead of the current
> runner?
> I understand that testscenarios is one of the problems we have if we want
> to move away from this way of running tests.
> Any suggestion?
>
> Thanks
> Joao
>
> On Wed, Mar 7, 2018 at 11:41 PM Khushboo Vashi <
> khushboo.va...@enterprisedb.com> wrote:
>
>> Hi Joao,
>>
>> In test_start_running_query.py, two static methods
>> (is_begin_required_for_sql_query and is_rollback_statement_required)
>> of the StartRunningQuery class were replaced directly without @patch.
>> Because of this, their original values are never restored.
>>
>> To fix this, I have sent a patch in another thread that restores their
>> original state, but I wonder if we could use these methods with @patch.
>>
>> Thanks,
>> Khushboo
>>
>>
>> On Fri, Feb 9, 2018 at 5:24 PM, Dave Page  wrote:
>>
>>> Support EXPLAIN on Greenplum. Fixes #3097
>>>
>>>  - Extract SQLEditor.execute and SQLEditor._poll into their own files
>>> and add test around them
>>>  - Extract SQLEditor backend functions that start executing query to
>>> their own files and add tests around it
>>>  - Move the Explain SQL from the front-end and now pass the Explain plan
>>> parameters as a JSON object in the start query call.
>>>  - Extract the compile_template_name into a function that can be used by
>>> the different places that try to select the version of the template and the
>>> server type
>>>
>>> Branch
>>> --
>>> master
>>>
>>> Details
>>> ---
>>> https://git.postgresql.org/gitweb?p=pgadmin4.git;a=commitdiff;h=
>>> e16a95275336529a734bf0066889e39cc8ef0662
>>> Author: Joao Pedro De Almeida Pereira 
>>>
>>> Modified Files
>>> --
>>> .../databases/schemas/tables/tests/test_utils.py   |0
>>> web/pgadmin/static/js/sqleditor/execute_query.js   |  287 
>>> .../js/sqleditor/is_new_transaction_required.js|   14 +
>>> .../static/js/sqleditor/query_tool_actions.js  |   33 +-
>>> web/pgadmin/tools/sqleditor/__init__.py|  396 +
>>> web/pgadmin/tools/sqleditor/static/js/sqleditor.js |  227 +--
>>> .../sqleditor/sql/10_plus/explain_plan.sql |   23 +
>>> .../sqleditor/sql/9.2_plus/explain_plan.sql|   20 +
>>> .../sqleditor/sql/default/explain_plan.sql |   17 +
>>> .../sqleditor/sql/gpdb_5.0_plus/explain_plan.sql   |5 +
>>> web/pgadmin/tools/sqleditor/tests/__init__.py  |8 +
>>> .../sqleditor/tests/test_explain_plan_templates.py |  152 ++
>>> .../test_extract_sql_from_network_parameters.py|   59 +
>>> .../tools/sqleditor/tests/test_start_query_tool.py |   38 +
>>> web/pgadmin/tools/sqleditor/utils/__init__.py  |   14 +
>>> .../sqleditor/utils/apply_explain_plan_wrapper.py  |   24 +
>>> .../tools/sqleditor/utils/constant_definition.py   |   32 +
>>> .../tools/sqleditor/utils/is_begin_required.py |  169 ++
>>> .../tools/sqleditor/utils/start_running_query.py   |  172 ++
>>> .../tools/sqleditor/utils/tests/__init__.py|8 +
>>> .../utils/tests/test_apply_explain_plan_wrapper.py |  121 ++
>>> .../utils/tests/test_start_running_query.py|  445 +
>>> .../utils/update_session_grid_transaction.py   |   18 +
>>> web/pgadmin/utils/compile_template_name.py |   17 +
>>> .../utils/tests/test_compile_template_name.py  |   34 +
>>> web/pgadmin/utils/versioned_template_loader.py |2 +-
>>> web/regression/javascript/fake_endpoints.js|6 +-
>>> .../javascript/sqleditor/execute_query_spec.js | 1702
>>> 
>>> .../sqleditor/is_new_transaction_required_spec.js  |   65 +
>>> .../sqleditor/query_tool_actions_spec.js   |  141 +-
>>> 30 files changed, 3670 insertions(+), 579 deletions(-)
>>>
>>>
>>


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


pgAdmin 4 commit: Fix SQL generated when dropping NOT NULL from a "char

2018-03-09 Thread Dave Page
Fix SQL generated when dropping NOT NULL from a "char" column. Fixes #2989

Branch
--
master

Details
---
https://git.postgresql.org/gitweb?p=pgadmin4.git;a=commitdiff;h=985a004766e8ec8a99e843d28e86f0c89d06ea80
Author: Murtuza Zabuawala 

Modified Files
--
.../databases/schemas/tables/column/__init__.py| 17 +---
.../tables/tests/test_table_column_update.py   | 96 ++
.../databases/schemas/tables/tests/utils.py| 13 ++-
.../servers/databases/schemas/tables/utils.py  | 35 ++--
.../servers/databases/schemas/types/__init__.py| 27 +++---
.../servers/databases/schemas/utils.py | 52 +++-
6 files changed, 178 insertions(+), 62 deletions(-)



Re: [pgAdmin4][RM#2989] To fix the issue in Table node

2018-03-09 Thread Dave Page
Thanks, patch applied.

On Thu, Mar 8, 2018 at 6:00 PM, Murtuza Zabuawala <
murtuza.zabuaw...@enterprisedb.com> wrote:

> Thank you Joao
>
> Regards,
> Murtuza
>
>
> On Thu, Mar 8, 2018 at 10:19 PM, Joao De Almeida Pereira <
> jdealmeidapere...@pivotal.io> wrote:
>
>> Hello Murtuza/Dave,
>>
>> Nice splitting of some of the functionality into functions, removing some
>> of the complexity of the initial function. Good job.
>>
>> I made some changes because the linter was failing and also changed some
>> variable names.
>> These changes pass our CI and the linter.
>>
>> Thanks
>> Joao
>>
>> On Thu, Mar 8, 2018 at 8:13 AM Murtuza Zabuawala <
>> murtuza.zabuaw...@enterprisedb.com> wrote:
>>
>>> Hi Dave,
>>>
>>> Please find updated patch.
>>>
>>> --
>>> Regards,
>>> Murtuza Zabuawala
>>> EnterpriseDB: http://www.enterprisedb.com
>>> The Enterprise PostgreSQL Company
>>>
>>>
>>> On Thu, Mar 8, 2018 at 6:10 PM, Dave Page  wrote:
>>>
 Can you rebase this please?

 Thanks.

 On Thu, Mar 8, 2018 at 9:00 AM, Murtuza Zabuawala <
 murtuza.zabuaw...@enterprisedb.com> wrote:

> Hi Dave,
>
> Please find updated patch & updated test case to cover that as well.
>
>
>
> --
> Regards,
> Murtuza Zabuawala
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
> On Wed, Mar 7, 2018 at 9:59 PM, Dave Page  wrote:
>
>> Hi
>>
>> On Wed, Mar 7, 2018 at 2:59 PM, Murtuza Zabuawala <
>> murtuza.zabuaw...@enterprisedb.com> wrote:
>>
>>> Hi Dave,
>>>
>>> PFA updated patch.
>>>
>>>
>> Using your example on the ticket, I added a "character varying (32)"
>> column with NOT NULL to the table. When I then edit the column, and turn
>> off NOT NULL (making no other changes), the SQL generated is:
>>
>> ALTER TABLE public.test_drop
>> ALTER COLUMN col2 TYPE character varying (32) COLLATE
>> pg_catalog."default";
>> ALTER TABLE public.test_drop
>> ALTER COLUMN col2 DROP NOT NULL;
>>
>> I didn't see that when turning off NOT NULL for col1.
>>
>> --
>> Dave Page
>> Blog: http://pgsnake.blogspot.com
>> Twitter: @pgsnake
>>
>> EnterpriseDB UK: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>>
>
>


 --
 Dave Page
 Blog: http://pgsnake.blogspot.com
 Twitter: @pgsnake

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company

>>>
>>>
>


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [pgAdmin4][Patch]: RM #2963 - Backup database, Restore database and Maintenance Database failed for é object.

2018-03-09 Thread Dave Page
Hi

On Fri, Mar 9, 2018 at 3:54 AM, Khushboo Vashi <
khushboo.va...@enterprisedb.com> wrote:

> Hi,
>
> Please find the attached patch to fix the issues below:
>
> 1. #2963 - Backup database, Restore database and Maintenance Database
> failed for é object
> 2. #3157 - Process viewer doesn't show complete command executed.
>
> Test cases are not included for these fixes as we don't have test cases
> for these modules (backup, restore, maintenance).
> I will create a separate RM to cover adding them.
>

Interesting that you fix these together, as together they also exhibit
another bug :-). Backing up the é database displays the following command:

/usr/local/pgsql/bin/pg_dump --file "/Users/dpage/foo.bak" --host
"localhost" --port "5432" --username "postgres" --no-password --verbose
--format=c --blobs "é"

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [pgAdmin4][Patch]: RM #2963 - Backup database, Restore database and Maintenance Database failed for é object.

2018-03-09 Thread Dave Page
Hi

On Fri, Mar 9, 2018 at 3:32 PM, Dave Page  wrote:

> Hi
>
> On Fri, Mar 9, 2018 at 3:54 AM, Khushboo Vashi <
> khushboo.va...@enterprisedb.com> wrote:
>
>> Hi,
>>
>> Please find the attached patch to fix below issues:
>>
>> 1. #2963 - Backup database, Restore database and Maintenance Database
>> failed for é object
>> 2. #3157 - Process viewer doesn't show complete command executed.
>>
>> Test cases are not included for these fixes as we don't have test cases
>> for these modules (backup, restore, maintenance).
>> I will create one separate RM for the same which will cover this.
>>
>
> Interesting that you fix these together, as together they also exhibit
> another bug :-). Backing up the é database displays the following command:
>
> /usr/local/pgsql/bin/pg_dump --file "/Users/dpage/foo.bak" --host
> "localhost" --port "5432" --username "postgres" --no-password --verbose
> --format=c --blobs "é"
>

Also, what tests can we add for backup/restore? We have nothing at all at
the moment, and it is pretty troublesome. I'd like to ensure that we can
back up and restore a database correctly, ensure that the displayed
commands are what we expect, and ensure that we get valid output from
pg_dump/pg_restore (though it may change from PG version to PG version, so
maybe we should just check for something small and generic). I guess this
might need some config parameters for the tests to specify the pg_* utility
paths for each server.

I'd suggest maybe having a feature test that opens the prefs, sets the
appropriate path, then runs a backup, waits for it to finish, checks the
process monitor output, then restores the same backup to a new database,
checking the process monitor output again, and then checking that the
restored database contains at least one object from the original database
(we don't need to check all of pg_dump/pg_restore, just that something
expected was restored). We should use a (partial) database name and backup
filename from the advanced test config file, and I think both should
default to some interesting non-ASCII strings to ensure quoting works.
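
A very rough sketch of the shape of that round-trip check, driving the
command-line tools directly rather than the pgAdmin dialogs a real feature
test would use (the utility path, credentials and database names below are
placeholders, and authentication is assumed to be handled via pgpass or
trust):

import os
import subprocess

PG_BIN = '/usr/local/pgsql/bin'            # would come from test config
HOST, PORT, USER = 'localhost', '5432', 'postgres'
SOURCE_DB, TARGET_DB = 'é', 'é_restored'   # non-ASCII names to check quoting
BACKUP_FILE = '/tmp/pgadmin_backup_test.bak'


def run(tool, *args):
    # Run one of the pg_* utilities and fail loudly if it errors.
    return subprocess.run(
        [os.path.join(PG_BIN, tool), '--host', HOST, '--port', PORT,
         '--username', USER] + list(args),
        check=True, capture_output=True, text=True)


# Back up the source database in custom format.
run('pg_dump', '--format=c', '--blobs', '--file', BACKUP_FILE, SOURCE_DB)

# Restore it into a freshly created target database.
run('createdb', TARGET_DB)
run('pg_restore', '--dbname', TARGET_DB, BACKUP_FILE)

# Check that at least one expected object made it across.
result = run('psql', '--dbname', TARGET_DB, '--tuples-only', '--command',
             "SELECT count(*) FROM pg_tables WHERE schemaname = 'public'")
assert int(result.stdout.strip()) > 0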


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: pgAdmin 4 commit: Support EXPLAIN on Greenplum. Fixes #3097

2018-03-09 Thread Joao De Almeida Pereira
Hello,
Definitely, running single tests would be great. Especially if you are
TDDing something, waiting 30-40 seconds to get feedback is a little
cumbersome when the test you are concerned with takes less than a second.

In the process:
1. Write a test
2. Make the test pass
3. Refactor

in between each step you run the test more than once, and depending on the
refactoring you might need to run it several times. So imagine waiting
30 seconds per run to get results. Running a subset of tests is a pain
because you always need to change the way you run the tests.

I believe we could achieve better granularity in choosing which tests to
run if we used a runner like pytest or nose. What was the reason behind
hand-rolling a test runner script? I am asking because in a previous job I
decided to hand-roll a unittest loader script, and that was something I
regretted every time I had to touch it; I was eventually in the process of
changing it to pytest.

I looked into pytest to replace the current runtest, and the major problem
I found was the testscenarios integration (see Note 1). It can be done, but
we would need to change all the test functions to receive the scenario
variables through arguments on the function. I also didn't dig much into
setting up all the variables and the environment that we need there. The
other issue I don't like very much about pytest is that you lose the
unittest assertions, which is not so bad because there are some neat
libraries like https://github.com/grappa-py/grappa,
https://github.com/ActivisionGameScience/assertpy and
https://github.com/dgilland/verify. Personally I really like the syntax of
grappa, but the verify one is pretty similar to Jasmine too.
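
For reference, the testscenarios wiring that the current hand-rolled runner
relies on is essentially the standard pattern below (the discovery path is
illustrative, not the actual runtests.py code):

import unittest

import testscenarios

# Discover the regular unittest classes, then expand each class's
# 'scenarios' attribute into concrete test cases before running them.
loader = unittest.TestLoader()
suite = loader.discover('web/pgadmin', pattern='test_*.py')
expanded = unittest.TestSuite(testscenarios.generate_scenarios(suite))
unittest.TextTestRunner(verbosity=2).run(expanded)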

What are your thoughts?



Note 1: For an example of what our functions would have to look like, see:
https://github.com/OriMenashe/pytest-scenario/blob/master/tests/test_parametrize.py
As an example, this class:

class ServersWithServiceIDAddTestCase(BaseTestGenerator):
    """ This class will add the servers under default server group. """

    scenarios = [
        # Fetch the default url for server object
        (
            'Default Server Node url', dict(
                url='/browser/server/obj/'
            )
        )
    ]

    def setUp(self):
        pass

    def runTest(self):
        """ This function will add the server under default server group."""
        url = "{0}{1}/".format(self.url, utils.SERVER_GROUP)
        # Add service name in the config
        self.server['service'] = "TestDB"
        response = self.tester.post(
            url,
            data=json.dumps(self.server),
            content_type='html/json'
        )
        self.assertEquals(response.status_code, 200)
        response_data = json.loads(response.data.decode('utf-8'))
        self.server_id = response_data['node']['_id']

    def tearDown(self):
        """This function delete the server from SQLite """
        utils.delete_server_with_api(self.tester, self.server_id)

would have to be changed to:

class ServersWithServiceIDAddTestCase(object):
    """ This class will add the servers under default server group. """

    scenarios = [
        # Fetch the default url for server object
        (
            'Default Server Node url', dict(
                url='/browser/server/obj/'
            )
        )
    ]

    def setUp(self):
        pass

    def runTest(self, url):
        """ This function will add the server under default server group."""
        url = "{0}{1}/".format(url, utils.SERVER_GROUP)
        # Add service name in the config
        self.server['service'] = "TestDB"
        response = self.tester.post(
            url,
            data=json.dumps(self.server),
            content_type='html/json'
        )
        self.assertEquals(response.status_code, 200)
        response_data = json.loads(response.data.decode('utf-8'))
        self.server_id = response_data['node']['_id']

    def tearDown(self):
        """This function delete the server from SQLite """
        utils.delete_server_with_api(self.tester, self.server_id)
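
For comparison, a rough sketch of what the same test could look like under
pytest's parametrize. The tester and server fixtures and the SERVER_GROUP
constant below are placeholders for the state BaseTestGenerator currently
provides, not real pgAdmin fixtures:

import json

import pytest

SERVER_GROUP = 1  # placeholder for utils.SERVER_GROUP


@pytest.mark.parametrize('url', ['/browser/server/obj/'])
def test_add_server_with_service_id(url, tester, server):
    # The scenario value arrives as the 'url' argument instead of an
    # attribute set by testscenarios.
    server['service'] = 'TestDB'
    response = tester.post(
        '{0}{1}/'.format(url, SERVER_GROUP),
        data=json.dumps(server),
        content_type='html/json'
    )
    assert response.status_code == 200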



Thanks
Joao

On Fri, Mar 9, 2018 at 8:31 AM Dave Page  wrote:

> On Thu, Mar 8, 2018 at 2:22 PM, Joao De Almeida Pereira <
> jdealmeidapere...@pivotal.io> wrote:
>
>> Hello Khushboo,
>> Completely forgot about this python "feature"...
>> Attached is the fix.
>>
>
> Thanks, applied.
>
>
>>
>> Just as a side question, does anyone else feel the pain of wanting to run
>> a single test using a IDE or the command line and not being able to?
>>
>
> Not really - the Python and JS tests are so quick I don't really care (and
> with the Python ones, I can execute for a single module for even more
> speed).
>
> What I would *really* like, is the ability to run individual feature
> tests. That would be very valuable and save me a ton of time.
>
>
>
>> We an HandRolled the loader, and that as some implications. Did anyone
>> try to use a 

Re: [pgAdmin4][RM#3140] Add service parameter

2018-03-09 Thread Dave Page
Hi

On Fri, Mar 9, 2018 at 11:47 AM, Murtuza Zabuawala <
murtuza.zabuaw...@enterprisedb.com> wrote:

> Hi,
>
> PFA patch to add service parameter in server dialog.
> - Docs updated
> - Test case added for Service ID parameter
>
> Please note,
> I have extracted Connection class and Server manager class from our own
> custom Psycopg2 driver module.
>
> Patch also covers RM#3120
>

 This patch seems a little confused. The "Service" and "Service ID" fields
from pgAdmin 3 are very different things. The Redmine ticket seems to be
asking for the Service field (the pg_service.conf service name), *not*
Service ID (the operating system's service ID, used to start/stop the
database server service).
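
For anyone unfamiliar with it, the pg_service.conf mechanism being asked for
works roughly as below (the service name and connection details are made-up
examples):

# Given an entry like this in ~/.pg_service.conf (example values only):
#
#     [mydb]
#     host=localhost
#     port=5432
#     dbname=postgres
#     user=postgres
#
# libpq-based clients such as psycopg2 can then connect by service name,
# which is what the "Service" field in the server dialog would map to:
import psycopg2

conn = psycopg2.connect(service='mydb')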

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [pgAdmin4][RM#3140] Add service parameter

2018-03-09 Thread Murtuza Zabuawala
Hi Dave,

I'll change the name and send you updated patch.


On Fri, Mar 9, 2018 at 9:25 PM, Dave Page  wrote:

> HI
>
> On Fri, Mar 9, 2018 at 11:47 AM, Murtuza Zabuawala <murtuza.zabuaw...@enterprisedb.com> wrote:
>
>> Hi,
>>
>> PFA patch to add service parameter in server dialog.
>> - Docs updated
>> - Test case added for Service ID parameter
>>
>> Please note,
>> I have extracted Connection class and Server manager class from our own
>> custom Psycopg2 driver module.
>>
>> Patch also covers RM#3120
>>
>
>  This patch seems a little confused. The "Service" and "Service ID" fields
> from pgAdmin 3 are very different things. The Redmine ticket seems to be
> asking for the Service field (the pg_service.conf service name), *not*
> Service ID (the operating system's service ID, used to start/stop the
> database server service).
>
> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EnterpriseDB UK: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: pgAdmin 4 commit: Support EXPLAIN on Greenplum. Fixes #3097

2018-03-09 Thread Dave Page
Hi

On Fri, Mar 9, 2018 at 3:54 PM, Joao De Almeida Pereira <
jdealmeidapere...@pivotal.io> wrote:

> Hello,
> Definitely running single tests is something that would be great,
> specially if you are TDDing something waiting 30-40 seconds to get feedback
> is a little cumbersome when the test you are concerned with take less then
> a second.
>
> In the process:
> 1. Write a test
> 2. Make the test pass
> 3. Refactor
>

Sure, makes sense for development. As I spend 99% of my time reviewing and
testing these days, I was just relaying my pain points :-)


>
> in between each step you run the test more then 1 time, and depending on
> the refactoring you might need to run it several times. So imagine waiting
> 30 seconds per run to get results. To run a subset of tests is a pain
> because you need to be always changing the way you run the tests.
>
> I believe we could archive a better granularity and choosing what test to
> run if we used a runner like pytest or nose to do it. What was the reason
> behind handrolling a test runner script? I am asking this because in a
> previous job I decided to handroll a unittest loader script and that was
> something that I regretted every time I had to touch it, and eventually was
> in the process of changing it to pytest.
>

Pure newbie-ism. I have no objections to changing to something else, if it
reduces our tech debt.


>
> I looked into pytest to replace the current the current runtest, and the
> major problem I found was the testscenarios integration(See Note 1). It can
> be done but we would need to change all the test functions to receive the
> scenario variables through arguments on the function. Also didn't dug much
> into setting all the variables that we need there and all the environment.
> The other issue that I do not like very much about pytest is the fact that
> you loose the unittest assertion that is not so bad because there are some
> neat libraries like: https://github.com/grappa-py/grappa, https://
> github.com/ActivisionGameScience/assertpy, https://github.com/dgilland/
> verify. Personally I really like the syntax of Grapa, but the Veridfy one
> is pretty similar to Jasmine too.
>
> What are your thoughts?
>

Huh, I also really like the grappa syntax. It's nice and readable.


>
>
>
> Note 1: As an example of what our functions would have to look like you
> can see: https://github.com/OriMenashe/pytest-scenario/
> blob/master/tests/test_parametrize.py
> As a example this class:
>

Without a diff, it's hard to be sure, but it looks like the only change was
BaseTestGenerator to object on the first line?


> class ServersWithServiceIDAddTestCase(BaseTestGenerator):
> """ This class will add the servers under default server group. """
>
> scenarios = [
> # Fetch the default url for server object
> (
> 'Default Server Node url', dict(
> url='/browser/server/obj/'
> )
> )
> ]
>
> def setUp(self):
> pass
>
> def runTest(self):
> """ This function will add the server under default server group."""
> url = "{0}{1}/".format(self.url, utils.SERVER_GROUP)
> # Add service name in the config
> self.server['service'] = "TestDB"
> response = self.tester.post(
> url,
> data=json.dumps(self.server),
> content_type='html/json'
> )
> self.assertEquals(response.status_code, 200)
> response_data = json.loads(response.data.decode('utf-8'))
> self.server_id = response_data['node']['_id']
>
> def tearDown(self):
> """This function delete the server from SQLite """
> utils.delete_server_with_api(self.tester, self.server_id)
>
> Would have to look changed to:
>
> class ServersWithServiceIDAddTestCase(object):
> """ This class will add the servers under default server group. """
>
> scenarios = [
> # Fetch the default url for server object
> (
> 'Default Server Node url', dict(
> url='/browser/server/obj/'
> )
> )
> ]
>
> def setUp(self):
> pass
>
> def runTest(self, url):
> """ This function will add the server under default server group."""
> url = "{0}{1}/".format(url, utils.SERVER_GROUP)
> # Add service name in the config
> self.server['service'] = "TestDB"
> response = self.tester.post(
> url,
> data=json.dumps(self.server),
> content_type='html/json'
> )
> self.assertEquals(response.status_code, 200)
> response_data = json.loads(response.data.decode('utf-8'))
> self.server_id = response_data['node']['_id']
>
> def tearDown(self):
> """This function delete the server from SQLite """
> utils.delete_server_with_api(self.tester, self.server_id)
>
>
>
> Thanks
> Joao
>
> On Fri, Mar 9, 2018 at 8:31 AM Dave Page  wrote:
>
>> On Thu, Mar 8, 20

Build failed in Jenkins: pgadmin4-master-python27-feature #14

2018-03-09 Thread pgAdmin 4 Jenkins
See 


Changes:

[Dave Page] Rewrite the runtime as a tray-based server which can launch a web

[Dave Page] Bump the version to 3.0, per discussion on the mailing lists

[Dave Page] Fix quoting of function names in RE-SQL. Fixes #3060

[Dave Page] Update JS packages:

[Dave Page] Support EXPLAIN on Greenplum. Fixes #3097

[Dave Page] Add configurable shortcut keys for various common options in the 
main

[Dave Page] Configurable shortcuts in the Debugger. Fixes #2901

[Dave Page] Fix PEP-8 issues in feature_tests, dashboard, about and misc 
module's

[Dave Page] Fix tests for Python 3.x

[Dave Page] Fix target name

[Dave Page] Fix Python 2.6 support.

[Dave Page] Fix creation of tables and columns in GPDB. Fixes #3099

[Dave Page] Un-vendorise React, now that it contains bug fixes we need.

[Dave Page] Show button shortcut keys in the debugger on tooltips.

[Dave Page] Fix PEP8 issues in various modules. Fixes #3121

[Dave Page] Add a marker (/*pga4dash*/) to the dashboard queries to allow them 
to be

[Dave Page] Ensure we can properly update rows with upper-case primary key 
columns.

[Dave Page] Add missing reverse-engineered SQL header and drop statement for

[Dave Page] Python 3 fix for the runtime.

[Dave Page] Fix stupid thinko

[Dave Page] Attempt to ensure the runtime is built with the correct Python 
version.

[Dave Page] Fix silly typo

[Dave Page] Ensure column names on indexes on views are properly quoted in 
RE-SQL.

[Dave Page] Fix alignment issues in keyboard shortcut options. Fixes #3080

[Dave Page] Fix intermittent specified_version_number ValueError issue on 
restart.

[Dave Page] Clarify which shortcut is being tested in the shortcut test.

[Dave Page] Hide tablespace node on GPDB. Fixes #3107

[Dave Page] Don't depend on standards_conforming_strings being enabled. Fixes 
#3077

[Dave Page] Fix validation of sequence parameters. Fixes #3014

[Dave Page] Fix tablespace tests for Python 3.x. Fixes #3138

[Dave Page] PEP8 cleanups for the sequences module.

[Dave Page] Don't use the webpack cache with production builds.

[Dave Page] PEP8 fixes for the tools module.

[Dave Page] Add a test for sequence validation.

[Dave Page] Allow dashboard tables and charts to be enabled/disabled. Fixes 
#2951

[Dave Page] Fix table statistics for Greenplum. Fixes #3059

[Dave Page] PEP8 fixes.

[Dave Page] Ensure we pick up the messages from the current query and not a 
previous

[Dave Page] Update dashboard display options screenshots.

[Dave Page] PEP8 fixes.

[Dave Page] Revert "Ensure we pick up the messages from the current query and 
not a

[Dave Page] PEP8 fixes for the pgAgent and Tables nodes (and subnodes). Fixes 
#3148

[Dave Page] Case sensitive paths are confusing git...

[Dave Page] Fix function reserve SQL for GPDB. Fixes #3150

[Dave Page] Support tab navigation in dialogs. Fixes #2898

[Dave Page] PEP8 fixes for the server and server group modules.

[Dave Page] Add keyboard shortcuts for the Query Tool. Fixes #2900

[Dave Page] Fix block indent/outdent with configurable width. Fixes #3002

[Dave Page] PEP8 fixes. Fixes #3156

[Dave Page] PEP8 fixes.

[Dave Page] Ensure the pgAgent job start/end time grid fields synchronise with 
the

[Dave Page] Fix handling of tie/datetime array types when adding columns to a 
table.

[Dave Page] PEP8 fixes for the Casts, Event triggers, Extensions and Languages

[Dave Page] Add a makefile target for running PEP8 checks.

[Dave Page] Stupid Makefile syntax

[Dave Page] Handle opening of non-UTF8 compatible files. Fixes #3129

[Dave Page] Undo previous thinko - there's already a check-pep8 target. D'oh.

[Dave Page] Allow text selection/copying from disabled CodeMirror instances. 
Fixes

[Dave Page] Allow copying of SQL from the dashboard tables. Fixes #3137

[Dave Page] PEP8 changes for the FDW modules.

[Dave Page] Fix typo/thinko in access key definition.

[Dave Page] Ensure we can edit grid values in the debugger using keyboard 
shortcuts.

[Dave Page] Support for external tables in GPDB. Fixes #3168

[Dave Page] PEP8 fixes. Fixes #3175

[Dave Page] Disable function statistics on Greenplum. Fixes #3176

[Dave Page] Ensure all messages are retrieved from the server in the Query Tool.

[Dave Page] Update Jasmine to v3. Fixes #3182

[Dave Page] Insert rows correctly when a table has OIDs and a Primary Key in

[Dave Page] Allow admins to disable the use of Gravatar if they choose. Fixes 
#3037

[Dave Page] Make the poll query test a little more robust.

[Dave Page] More hardening of the query tool tests.

[Dave Page] Final PEP-8 fixes

[Dave Page] Always run PEP-8 checks!

[Dave Page] Fix test case for Python 2.

[Dave Page] Minor formatting fix

[Dave Page] Use test methods with @patch rather than directly.

--
[...truncated 82.69 KB...]
Query tool feature test ... 
On demand query result... 
On demand result set 

Re: pgAdmin 4 commit: Support EXPLAIN on Greenplum. Fixes #3097

2018-03-09 Thread Joao De Almeida Pereira
On Fri, Mar 9, 2018 at 11:04 AM Dave Page  wrote:

> Hi
>
> On Fri, Mar 9, 2018 at 3:54 PM, Joao De Almeida Pereira <
> jdealmeidapere...@pivotal.io> wrote:
>
>> Hello,
>> Definitely running single tests is something that would be great,
>> specially if you are TDDing something waiting 30-40 seconds to get feedback
>> is a little cumbersome when the test you are concerned with take less then
>> a second.
>>
>> In the process:
>> 1. Write a test
>> 2. Make the test pass
>> 3. Refactor
>>
>
> Sure, makes sense for development. As I spend 99% of my time reviewing and
> testing these days, I was just relaying my pain points :-)
>
>
>>
>> in between each step you run the test more then 1 time, and depending on
>> the refactoring you might need to run it several times. So imagine waiting
>> 30 seconds per run to get results. To run a subset of tests is a pain
>> because you need to be always changing the way you run the tests.
>>
>> I believe we could archive a better granularity and choosing what test to
>> run if we used a runner like pytest or nose to do it. What was the reason
>> behind handrolling a test runner script? I am asking this because in a
>> previous job I decided to handroll a unittest loader script and that was
>> something that I regretted every time I had to touch it, and eventually was
>> in the process of changing it to pytest.
>>
>
> Pure newbie-ism. I have no objections to changing to something else, if it
> reduces our tech debt.
>
>
>>
>> I looked into pytest to replace the current the current runtest, and the
>> major problem I found was the testscenarios integration(See Note 1). It can
>> be done but we would need to change all the test functions to receive the
>> scenario variables through arguments on the function. Also didn't dug much
>> into setting all the variables that we need there and all the environment.
>> The other issue that I do not like very much about pytest is the fact
>> that you loose the unittest assertion that is not so bad because there are
>> some neat libraries like: https://github.com/grappa-py/grappa,
>> https://github.com/ActivisionGameScience/assertpy,
>> https://github.com/dgilland/verify. Personally I really like the syntax
>> of Grapa, but the Veridfy one is pretty similar to Jasmine too.
>>
>> What are your thoughts?
>>
>
> Huh, I also really like the grappa syntax. It's nice and readable.
>
>
>>
>>
>>
>> Note 1: As an example of what our functions would have to look like you
>> can see:
>> https://github.com/OriMenashe/pytest-scenario/blob/master/tests/test_parametrize.py
>> As a example this class:
>>
>
> Without a diff, it's hard to be sure, but it looks like the only change
> was BaseTestGenerator to object on the first line?
>
See the function signature; that is the cumbersome issue.

>
>
>> class ServersWithServiceIDAddTestCase(BaseTestGenerator):
>> """ This class will add the servers under default server group. """
>>
>> scenarios = [
>> # Fetch the default url for server object
>> (
>> 'Default Server Node url', dict(
>> url='/browser/server/obj/'
>> )
>> )
>> ]
>>
>> def setUp(self):
>> pass
>>
>> def runTest(self):
>> """ This function will add the server under default server group."""
>> url = "{0}{1}/".format(self.url, utils.SERVER_GROUP)
>> # Add service name in the config
>> self.server['service'] = "TestDB"
>> response = self.tester.post(
>> url,
>> data=json.dumps(self.server),
>> content_type='html/json'
>> )
>> self.assertEquals(response.status_code, 200)
>> response_data = json.loads(response.data.decode('utf-8'))
>> self.server_id = response_data['node']['_id']
>>
>> def tearDown(self):
>> """This function delete the server from SQLite """
>> utils.delete_server_with_api(self.tester, self.server_id)
>>
>> Would have to look changed to:
>>
>> class ServersWithServiceIDAddTestCase(object):
>> """ This class will add the servers under default server group. """
>>
>> scenarios = [
>> # Fetch the default url for server object
>> (
>> 'Default Server Node url', dict(
>> url='/browser/server/obj/'
>> )
>> )
>> ]
>>
>> def setUp(self):
>> pass
>>
>> def runTest(self, url):
>> """ This function will add the server under default server group."""
>> url = "{0}{1}/".format(url, utils.SERVER_GROUP)
>> # Add service name in the config
>> self.server['service'] = "TestDB"
>> response = self.tester.post(
>> url,
>> data=json.dumps(self.server),
>> content_type='html/json'
>> )
>> self.assertEquals(response.status_code, 200)
>> response_data = json.loads(response.data.decode('utf-8'))
>> self.server_id = response_data['node']['_id']
>>

Re: pgAdmin 4 commit: Support EXPLAIN on Greenplum. Fixes #3097

2018-03-09 Thread Dave Page
On Fri, Mar 9, 2018 at 4:29 PM, Joao De Almeida Pereira <
jdealmeidapere...@pivotal.io> wrote:

>
>
> On Fri, Mar 9, 2018 at 11:04 AM Dave Page  wrote:
>
>> Hi
>>
>> On Fri, Mar 9, 2018 at 3:54 PM, Joao De Almeida Pereira <
>> jdealmeidapere...@pivotal.io> wrote:
>>
>>> Hello,
>>> Definitely running single tests is something that would be great,
>>> specially if you are TDDing something waiting 30-40 seconds to get feedback
>>> is a little cumbersome when the test you are concerned with take less then
>>> a second.
>>>
>>> In the process:
>>> 1. Write a test
>>> 2. Make the test pass
>>> 3. Refactor
>>>
>>
>> Sure, makes sense for development. As I spend 99% of my time reviewing
>> and testing these days, I was just relaying my pain points :-)
>>
>>
>>>
>>> in between each step you run the test more then 1 time, and depending on
>>> the refactoring you might need to run it several times. So imagine waiting
>>> 30 seconds per run to get results. To run a subset of tests is a pain
>>> because you need to be always changing the way you run the tests.
>>>
>>> I believe we could archive a better granularity and choosing what test
>>> to run if we used a runner like pytest or nose to do it. What was the
>>> reason behind handrolling a test runner script? I am asking this because in
>>> a previous job I decided to handroll a unittest loader script and that was
>>> something that I regretted every time I had to touch it, and eventually was
>>> in the process of changing it to pytest.
>>>
>>
>> Pure newbie-ism. I have no objections to changing to something else, if
>> it reduces our tech debt.
>>
>>
>>>
>>> I looked into pytest to replace the current the current runtest, and the
>>> major problem I found was the testscenarios integration(See Note 1). It can
>>> be done but we would need to change all the test functions to receive the
>>> scenario variables through arguments on the function. Also didn't dug much
>>> into setting all the variables that we need there and all the environment.
>>> The other issue that I do not like very much about pytest is the fact
>>> that you loose the unittest assertion that is not so bad because there are
>>> some neat libraries like: https://github.com/grappa-py/grappa, https://
>>> github.com/ActivisionGameScience/assertpy, https://github.com/dgilland/
>>> verify. Personally I really like the syntax of Grapa, but the Veridfy
>>> one is pretty similar to Jasmine too.
>>>
>>> What are your thoughts?
>>>
>>
>> Huh, I also really like the grappa syntax. It's nice and readable.
>>
>>
>>>
>>>
>>>
>>> Note 1: As an example of what our functions would have to look like you
>>> can see: https://github.com/OriMenashe/pytest-scenario/
>>> blob/master/tests/test_parametrize.py
>>> As a example this class:
>>>
>>
>> Without a diff, it's hard to be sure, but it looks like the only change
>> was BaseTestGenerator to object on the first line?
>>
> See the function signature, that is the cumbersome issue
>

Ahh, yes.


>
>>
>>> class ServersWithServiceIDAddTestCase(BaseTestGenerator):
>>> """ This class will add the servers under default server group. """
>>>
>>> scenarios = [
>>> # Fetch the default url for server object
>>> (
>>> 'Default Server Node url', dict(
>>> url='/browser/server/obj/'
>>> )
>>> )
>>> ]
>>>
>>> def setUp(self):
>>> pass
>>>
>>> def runTest(self):
>>> """ This function will add the server under default server group."""
>>> url = "{0}{1}/".format(self.url, utils.SERVER_GROUP)
>>> # Add service name in the config
>>> self.server['service'] = "TestDB"
>>> response = self.tester.post(
>>> url,
>>> data=json.dumps(self.server),
>>> content_type='html/json'
>>> )
>>> self.assertEquals(response.status_code, 200)
>>> response_data = json.loads(response.data.decode('utf-8'))
>>> self.server_id = response_data['node']['_id']
>>>
>>> def tearDown(self):
>>> """This function delete the server from SQLite """
>>> utils.delete_server_with_api(self.tester, self.server_id)
>>>
>>> Would have to look changed to:
>>>
>>> class ServersWithServiceIDAddTestCase(object):
>>> """ This class will add the servers under default server group. """
>>>
>>> scenarios = [
>>> # Fetch the default url for server object
>>> (
>>> 'Default Server Node url', dict(
>>> url='/browser/server/obj/'
>>> )
>>> )
>>> ]
>>>
>>> def setUp(self):
>>> pass
>>>
>>> def runTest(self, url):
>>> """ This function will add the server under default server group."""
>>> url = "{0}{1}/".format(url, utils.SERVER_GROUP)
>>> # Add service name in the config
>>> self.server['service'] = "TestDB"
>>> response = self.tester.post(
>>> url,
>>> data=json.dumps(

Re: ACI Tree

2018-03-09 Thread Joao De Almeida Pereira
Hi Hackers,
This list might not have a big group of users of the application itself;
nevertheless, it would be interesting to understand whether this is a
problem specific to the GreenPlum use case or not. This issue is preventing
wider adoption of pgAdmin4 by GreenPlum users.

From a preliminary investigation we found the following:
- When we try to retrieve 8k tables from the backend we get a payload of
3.3MB with the following information:

   1. has_enable_triggers:"0"
   2. icon:"icon-table"
   3. id:"table/831144"
   4. inode:true
   5. is_partitioned:false
   6. label:"act_20141008"
   7. module:"pgadmin.node.table"
   8. rows_cnt:0
   9. tigger_count:"0"
   10. _id:831144
   11. _pid:24579
   12. _type:"table"

- This amount of information takes around 12 seconds to be displayed
*- It is pretty hard to find something in a set of 8k tables*

We started looking into possibilities to solve this issue, but we bumped
into the ACI Tree again and how deeply it is ingrained in our code. In
order to create a better experience for these users, these are the steps
that we believe need to be done:

1 - Refactor the code so that it doesn't depend on the Tree to run
2 - See if this allows us to achieve increased performance
3 - Instead of adding functionality to a Tree that doesn't look actively
supported, look into other trees that are more actively being worked on
4 - Eventually replace the tree with one that allows us to have a smaller
footprint and has functionality like search already embedded

The last time we tried to take a look at the ACI Tree, we started by trying
to create a new Tree and see if we could plug it into the current code, and
that approach was not successful. So we believe this new approach might
gain us the following:
 - Detachment from the tree, by creating an adapter layer that would
eventually allow us to swap the tree if that is the case
 - Simplification of the information returned by the backend
 - An end to piggybacking on the requirejs alias to get things done
 - A direction where adding a feature to the tree is something easy,
testable and sustainable.


In a quick search we found 2 libraries that look interesting and actively
being developed:
https://www.npmjs.com/package/react-virtualized-tree
https://www.npmjs.com/package/react-infinite-tree

Here are some pros and cons on the libraries that we found:
react-virtualized-tree:
Pros:
- Actively developed
- Search capabilities
- Uses react-virtualized, which looks very interesting because it doesn't
dump everything into the DOM at once
Cons:
- Single Committer

react-infinite-tree:
Pros:
- Actively developed
- Search capabilities
Cons:
- Single Committer

ACI Tree:
Pros:
- Already in the code
Cons:
- No longer maintained
- No website with documentation
- No search
- Heavy

Thanks
Joao


On Wed, Mar 7, 2018 at 12:19 PM Robert Eckhardt 
wrote:

> Hackers,
>
> We have multiple end users who have in excess of 10 thousand of tables in
> a single schema. Currently this causes pgAdmin to choke.
>
> The major issue we are seeing is that the ACI tree is unsupported and it
> seems to be the backbone of pgAdmin 4.
>
> Is anyone else having this issue? Is there a solution better than (for
> some definition of better) replacing the ACI Tree with something more
> performant?
>
> -- Rob
>


[pgadmin4][patch] Unit test fail on GreenPlum (#3190)

2018-03-09 Thread Joao De Almeida Pereira
Hello Hackers,

Attached you can find a patch that skips some tests and corrects issues in
SQL that fail when connecting to a GreenPlum database.

We did this by adding an attribute to test_json called "db_type" that
carries the type of database we are running tests against.

When we run tests against a GreenPlum instance, the configuration would
look like this:

{
  "name": "GreenPlum",
  "comment": "GreenPlum DB",
  "db_username": "gp",
  "host": "localhost",
  "db_password": "",
  "db_port": 5433,
  "maintenance_db": "postgres",
  "sslmode": "prefer",
  "tablespace_path": "",
  "enabled": true,
  "db_type": "gpdb"
}




Thanks
Joao
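
For context, a sketch of how the base class could honour the new attribute;
the attached diff below only shows the subclass side, and this base-class
code is illustrative, not part of the actual patch:

import unittest


class BaseTestGenerator(unittest.TestCase):
    # Databases on which a subclass should be skipped, e.g. ['gpdb'].
    skip_on_database = []
    # Populated by the test framework from test_config.json ("db_type").
    server = {}

    def setUp(self):
        self._skip_if_unsupported()

    def runTest(self):
        # Subclasses without a setUp() still get the check via runTest().
        self._skip_if_unsupported()

    def _skip_if_unsupported(self):
        if self.server.get('db_type') in self.skip_on_database:
            self.skipTest(
                'Not supported on {0}'.format(self.server['db_type']))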
diff --git a/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_add.py b/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_add.py
index 81f072f1..de8f9799 100644
--- a/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_add.py
+++ b/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_add.py
@@ -20,6 +20,7 @@ from . import utils as cast_utils
 
 
 class CastsAddTestCase(BaseTestGenerator):
+    skip_on_database = ['gpdb']
     scenarios = [
         # Fetching default URL for cast node.
         ('Check Cast Node', dict(url='/browser/cast/obj/'))
@@ -27,6 +28,7 @@ class CastsAddTestCase(BaseTestGenerator):
 
     def runTest(self):
         """ This function will add cast under test database. """
+        super(CastsAddTestCase, self).runTest()
         self.server_data = parent_node_dict["database"][-1]
         self.server_id = self.server_data["server_id"]
         self.db_id = self.server_data['db_id']
diff --git a/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_delete.py b/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_delete.py
index 46e2a013..b956fcbc 100644
--- a/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_delete.py
+++ b/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_delete.py
@@ -19,12 +19,14 @@ from . import utils as cast_utils
 
 class CastsDeleteTestCase(BaseTestGenerator):
     """ This class will delete the cast node added under database node. """
+    skip_on_database = ['gpdb']
     scenarios = [
         # Fetching default URL for cast node.
         ('Check Cast Node', dict(url='/browser/cast/obj/'))
     ]
 
     def setUp(self):
+        super(CastsDeleteTestCase, self).setUp()
         self.default_db = self.server["db"]
         self.database_info = parent_node_dict['database'][-1]
         self.db_name = self.database_info['db_name']
diff --git a/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_get.py b/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_get.py
index 329162eb..d67f55ae 100644
--- a/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_get.py
+++ b/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_get.py
@@ -19,6 +19,7 @@ from . import utils as cast_utils
 
 class CastsGetTestCase(BaseTestGenerator):
     """ This class will fetch the cast node added under database node. """
+    skip_on_database = ['gpdb']
     scenarios = [
         # Fetching default URL for cast node.
         ('Check Cast Node', dict(url='/browser/cast/obj/'))
@@ -26,6 +27,7 @@ class CastsGetTestCase(BaseTestGenerator):
 
     def setUp(self):
         """ This function will create cast."""
+        super(CastsGetTestCase, self).setUp()
         self.default_db = self.server["db"]
         self.database_info = parent_node_dict['database'][-1]
         self.db_name = self.database_info['db_name']
diff --git a/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_put.py b/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_put.py
index f3e43ae9..99485095 100644
--- a/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_put.py
+++ b/web/pgadmin/browser/server_groups/servers/databases/casts/tests/test_cast_put.py
@@ -21,6 +21,7 @@ from . import utils as cast_utils
 
 class CastsPutTestCase(BaseTestGenerator):
     """ This class will fetch the cast node added under database node. """
+    skip_on_database = ['gpdb']
     scenarios = [
         # Fetching default URL for cast node.
         ('Check Cast Node', dict(url='/browser/cast/obj/'))
@@ -28,6 +29,7 @@ class CastsPutTestCase(BaseTestGenerator):
 
     def setUp(self):
         """ This function will create cast."""
+        super(CastsPutTestCase, self).setUp()
         self.default_db = self.server["db"]
         self.database_info = parent_node_dict['database'][-1]
         self.db_name = self.database_info['db_name']
diff --git a/web/pgadmin/browser/server_groups/servers/databases/extensions/tests/test_extension_add.py b/web/pgadmin/browser/server_groups/servers/databases/extensions/tests/test_extension_add.py
index 96bf6343..f4398d55 100644
---

Re: ACI Tree

2018-03-09 Thread Murtuza Zabuawala
On Sat, Mar 10, 2018 at 2:04 AM, Joao De Almeida Pereira <
jdealmeidapere...@pivotal.io> wrote:

> Hi Hackers,
> Maybe this list might not have a big group of users of the application
> itself, nevertheless it would be interesting to understand if this is a
> specific problem of GreenPlum Use Case or not.
> This issue is preventing a wider adoption of pgAdmin4 by the GreenPlum
> users.
>
> From a preliminary investigation we found the following:
> - When we try to retrieve 8k tables from the backend we get a payload of
> 3.3Mb with the following information:
>
>1. has_enable_triggers:"0"
>2. icon:"icon-table"
>3. id:"table/831144"
>4. inode:true
>5. is_partitioned:false
>6. label:"act_20141008"
>7. module:"pgadmin.node.table"
>8. rows_cnt:0
>9. tigger_count:"0"
>10. _id:831144
>11. _pid:24579
>12. _type:"table"
>
> - This amount of information take around 12 seconds to be displayed
> *- It is pretty hard to find something in set off 8k tables*
>
> We started looking into possibilities to solve this issue, but we bumped
> into the ACI Tree again and the way ACI Tree is so ingrained into our code.
> In order to try to create a better experience  to these users these are
> the steps that we believe need to be done:
>
> 1 - Refactor the code so that it doesn't depend on the Tree to run
> 2 - See if this allows us to have an increased performance.
> 3 - Instead of adding functionality to a Tree that doesn't look actively
> supported, maybe we should look into other trees that are more actively
> being worked on
> 4 - Eventually replace the tree with one that would allow us to have a
> smaller footprint and have functionalities like search already embedded.
>
> The last time we tried to take a look at ACI Tree we started by trying to
> create a new Tree and see if we could plug it in the current code, and that
> approach was not successful, so we believe that this new approach might
> gain us the following:
>  - Detachment from the tree creating an adapter layer that would
> eventually would allow us to swap tree if that is the case
>  - Try to simplify the information returned by the backend
>  - Stop piggybacking on the alias of requirejs to have things done
>  - Steer us into a direction where adding a feature to the tree is
> something easy, testable and sustainable.
>

>
> In a quick search we found 2 libraries that look interesting and actively
> being developed:
> https://www.npmjs.com/package/react-virtualized-tree
> https://www.npmjs.com/package/react-infinite-tree
>
> Here are some pros and cons on the libraries that we found:
> react-virtualized-tree:
> Pros:
> - Actively developed
> - Search capabilities
> - Users react virtualized, which looks very interesting because it doesn't
> dump everything into the dom at once
> Cons:
> - Single Committer
>

+1


>
> react-infinite-tree:
> Pros:
> - Actively developed
> - Search capabilities
> Cons:
> - Single Committer
>
> ACI Tree:
> Pros:
> - Already in the code
> Cons:
> - No longer maintained
> - No website with documentation
> - No search
> - Heavy
>
> Thanks
> Joao
>
>
>

>
> On Wed, Mar 7, 2018 at 12:19 PM Robert Eckhardt 
> wrote:
>
>> Hackers,
>>
>> We have multiple end users who have in excess of 10 thousand of tables in
>> a single schema. Currently this causes pgAdmin to choke.
>>
>> The major issue we are seeing is that the ACI tree is unsupported and it
>> seems to be the backbone of pgAdmin 4.
>>
>> Is anyone else having this issue?  Is there a solution better that (for
>> some definition of better than) replacing the ACI Tree with something more
>> performant?
>>
>> -- Rob
>>
>