Hi, as the title suggests, I'm trying to download a report from the AdWords API using Python and load the data into a Pandas DataFrame, but I'm having no luck. I'm still new to Python, so my error may come down to a lack of basic understanding. I've tried multiple ways of solving this problem; below is the Python script I'm using to get the output into a Pandas DataFrame.
from googleads import adwords
import pandas as pd
import numpy as np

# Initialize appropriate service.
adwords_client = adwords.AdWordsClient.LoadFromStorage()
report_downloader = adwords_client.GetReportDownloader(version='v201710')

# Create report query.
report_query = ('''
    select Date, Clicks
    from ACCOUNT_PERFORMANCE_REPORT
    during LAST_7_DAYS''')

df = pd.read_csv(report_downloader.DownloadReportWithAwql(
    report_query,
    'CSV',
    client_customer_id='xxx-xxx-xxxx',  # denotes which adw account to pull from
    skip_report_header=True,
    skip_column_header=False,
    skip_report_summary=True,
    include_zero_impressions=True))

And this is the error that I am getting.

Day,Clicks
2017-11-05,42061
2017-11-07,45792
2017-11-03,36874
2017-11-02,39790
2017-11-06,44934
2017-11-08,45631
2017-11-04,36031
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-cc25e32c9f3a> in <module>()
     25     skip_column_header=False,
     26     skip_report_summary=True,
---> 27     include_zero_impressions=True))

/anaconda/lib/python3.6/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    653                     skip_blank_lines=skip_blank_lines)
    654
--> 655         return _read(filepath_or_buffer, kwds)
    656
    657     parser_f.__name__ = name

/anaconda/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    390     compression = _infer_compression(filepath_or_buffer, compression)
    391     filepath_or_buffer, _, compression = get_filepath_or_buffer(
--> 392         filepath_or_buffer, encoding, compression)
    393     kwds['compression'] = compression
    394

/anaconda/lib/python3.6/site-packages/pandas/io/common.py in get_filepath_or_buffer(filepath_or_buffer, encoding, compression)
    208     if not is_file_like(filepath_or_buffer):
    209         msg = "Invalid file path or buffer object type: {_type}"
--> 210         raise ValueError(msg.format(_type=type(filepath_or_buffer)))
    211
    212     return filepath_or_buffer, None, compression

ValueError: Invalid file path or buffer object type: <class 'NoneType'>

Any help would be greatly appreciated.
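Since the CSV rows show up in the output right before the traceback, I suspect DownloadReportWithAwql is writing the report to stdout and returning None, which is what pd.read_csv then chokes on. One thing I was thinking of trying (untested sketch, assuming DownloadReportAsStringWithAwql is the right method for this client library version, and with the same placeholder customer id) is to download the report as a string and wrap it in StringIO so pandas gets a file-like object:

import io
from googleads import adwords
import pandas as pd

adwords_client = adwords.AdWordsClient.LoadFromStorage()
report_downloader = adwords_client.GetReportDownloader(version='v201710')

report_query = ('''
    select Date, Clicks
    from ACCOUNT_PERFORMANCE_REPORT
    during LAST_7_DAYS''')

# Download the report as a string instead of letting it stream to stdout,
# then wrap it in StringIO so pandas has something file-like to read.
report = report_downloader.DownloadReportAsStringWithAwql(
    report_query,
    'CSV',
    client_customer_id='xxx-xxx-xxxx',  # placeholder account id
    skip_report_header=True,
    skip_column_header=False,
    skip_report_summary=True,
    include_zero_impressions=True)

df = pd.read_csv(io.StringIO(report))

Would that be the recommended way to get the report into a DataFrame, or is there a better approach?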