Re: [VOTE] FLIP-193: Snapshots ownership
+1

Thanks for the efforts Dawid!

Best Regards,
Yu

On Fri, 3 Dec 2021 at 16:20, Yun Tang wrote:
> +1
>
> Thanks for driving this, Dawid.
>
> Best
> Yun Tang
>
> From: Roman Khachatryan
> Sent: Thursday, December 2, 2021 17:02
> To: dev
> Subject: Re: [VOTE] FLIP-193: Snapshots ownership
>
> +1
>
> Thanks for driving this effort Dawid
>
> Regards,
> Roman
>
> On Wed, Dec 1, 2021 at 2:04 PM Konstantin Knauf wrote:
> >
> > Thanks, Dawid.
> >
> > +1
> >
> > On Wed, Dec 1, 2021 at 1:23 PM Dawid Wysakowicz wrote:
> > >
> > > Dear devs,
> > >
> > > I'd like to open a vote on FLIP-193: Snapshots ownership [1] which was
> > > discussed in this thread [2].
> > > The vote will be open for at least 72 hours unless there is an
> > > objection or not enough votes.
> > >
> > > Best,
> > >
> > > Dawid
> > >
> > > [1] https://cwiki.apache.org/confluence/x/bIyqCw
> > > [2] https://lists.apache.org/thread/zw2crf0c7t7t4cb5cwcwjpvsb3r1ovz2
> >
> > --
> > Konstantin Knauf
> > https://twitter.com/snntrable
> > https://github.com/knaufk
[jira] [Created] (FLINK-25168) Azure failed due to unable to transfer maven artifacts
Yun Gao created FLINK-25168: --- Summary: Azure failed due to unable to transfer maven artifacts Key: FLINK-25168 URL: https://issues.apache.org/jira/browse/FLINK-25168 Project: Flink Issue Type: Bug Components: Build System / Azure Pipelines Affects Versions: 1.13.3 Reporter: Yun Gao

{code:java}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.8.2:deploy (default-deploy) on project flink-tests: Failed to deploy artifacts: Could not transfer artifact org.apache.flink:flink-tests:jar:1.13-20211205.020632-728 from/to apache.snapshots.https (https://repository.apache.org/content/repositories/snapshots): Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/flink/flink-tests/1.13-SNAPSHOT/flink-tests-1.13-20211205.020632-728.jar. Return code is: 502, ReasonPhrase: Proxy Error. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :flink-tests
##[error]Bash exited with code '1'.
Finishing: Deploy maven snapshot
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=27560&view=logs&j=eca6b3a6-1600-56cc-916a-c549b3cde3ff&t=e9844b5e-5aa3-546b-6c3e-5395c7c0cac7&l=97156 -- This message was sent by Atlassian Jira (v8.20.1#820001)
Re: [ANNOUNCE] Open source of remote shuffle project for Flink batch data processing
As one of the contributors to Flink remote shuffle, I'm glad to hear all the warm responses! We welcome more people to try Flink remote shuffle and look forward to your feedback.

Best,
Lijie

Yingjie Cao wrote on Wed, Dec 1, 2021 at 17:50:
> Hi Jiangang,
>
> Great to hear that, welcome to work together to make the project better.
>
> Best,
> Yingjie
>
> 刘建刚 wrote on Wed, Dec 1, 2021 at 3:27 PM:
>> Good work for flink's batch processing!
>> Remote shuffle service can resolve the container lost problem and reduce
>> the running time for batch jobs once failover. We have investigated the
>> component a lot and welcome Flink's native solution. We will try it and
>> help improve it.
>>
>> Thanks,
>> Liu Jiangang
>>
>> Yingjie Cao wrote on Tue, Nov 30, 2021 at 9:33 PM:
>>
>> > Hi dev & users,
>> >
>> > We are happy to announce the open source of remote shuffle project [1] for
>> > Flink. The project is originated in Alibaba and the main motivation is to
>> > improve batch data processing for both performance & stability and further
>> > embrace cloud native. For more features about the project, please refer to
>> > [1].
>> >
>> > Before going open source, the project has been used widely in production
>> > and it behaves well on both stability and performance. We hope you enjoy
>> > it. Collaborations and feedbacks are highly appreciated.
>> >
>> > Best,
>> > Yingjie on behalf of all contributors
>> >
>> > [1] https://github.com/flink-extended/flink-remote-shuffle
[jira] [Created] (FLINK-25169) CsvFileSystemFormatFactory#CsvInputFormat supports recursion
Bo Cui created FLINK-25169: -- Summary: CsvFileSystemFormatFactory#CsvInputFormat supports recursion Key: FLINK-25169 URL: https://issues.apache.org/jira/browse/FLINK-25169 Project: Flink Issue Type: Improvement Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Affects Versions: 1.14.0, 1.13.0, 1.12.0, 1.15.0 Reporter: Bo Cui https://github.com/apache/flink/blob/ca4fbd10a1e8919c48e602640bc3238648cc48bb/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvFileSystemFormatFactory.java#L117 Why doesn't CsvFileSystemFormatFactory#CsvInputFormat support recursive file enumeration? -- This message was sent by Atlassian Jira (v8.20.1#820001)
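For context, the generic FileInputFormat already exposes nested (recursive) enumeration; the sketch below shows the behavior being requested, assuming it were wired through to the CSV input format (the helper class and method are illustrative, not actual factory code):

{code:java}
import org.apache.flink.api.common.io.FileInputFormat;

// Sketch only: recursive enumeration as it exists on the generic FileInputFormat.
// Whether CsvFileSystemFormatFactory#CsvInputFormat should expose the same flag
// is exactly the question raised in this issue.
public class RecursiveEnumerationSketch {

    public static void configure(FileInputFormat<?> format) {
        // Also enumerate files in nested directories under the configured path.
        format.setNestedFileEnumeration(true);
    }
}
{code}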
[jira] [Created] (FLINK-25170) cep supports dynamic rule updates
ZhuoYu Chen created FLINK-25170: --- Summary: cep supports dynamic rule updates Key: FLINK-25170 URL: https://issues.apache.org/jira/browse/FLINK-25170 Project: Flink Issue Type: New Feature Components: Library / CEP Affects Versions: 1.15.0 Reporter: ZhuoYu Chen -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-25171) When the DDL statement was executed, the column names of the Derived Columns were not validated
shouzuo meng created FLINK-25171: Summary: When the DDL statement was executed, the column names of the Derived Columns were not validated Key: FLINK-25171 URL: https://issues.apache.org/jira/browse/FLINK-25171 Project: Flink Issue Type: Bug Components: Table SQL / Planner Affects Versions: 1.14.0 Reporter: shouzuo meng Fix For: 1.14.0 When a DDL statement is executed, the columns in the source table schema are validated, but the column names referenced by derived (computed) columns are not validated. -- This message was sent by Atlassian Jira (v8.20.1#820001)
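A hypothetical example of the kind of DDL this report is about (table, column, and connector names are made up): a computed column references a column that does not exist in the physical schema, and per the report the expression is not rejected at DDL time.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DerivedColumnValidationExample {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // 'no_such_col' does not exist in the physical schema; according to the
        // report, the derived column expression is not validated against it.
        tEnv.executeSql(
                "CREATE TABLE src ("
                        + "  a INT,"
                        + "  b AS no_such_col + 1"
                        + ") WITH ("
                        + "  'connector' = 'datagen'"
                        + ")");
    }
}
{code}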
Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk
Congratulations, Ingo! Well Deserved. Best, Leonard > 2021年12月3日 下午11:24,Ingo Bürk 写道: > > Thank you everyone for the warm welcome! > > > Best > Ingo > > On Fri, Dec 3, 2021 at 11:47 AM Ryan Skraba > wrote: > >> Congratulations Ingo! >> >> On Fri, Dec 3, 2021 at 8:17 AM Yun Tang wrote: >> >>> Congratulations, Ingo! >>> >>> Best >>> Yun Tang >>> >>> From: Yuepeng Pan >>> Sent: Friday, December 3, 2021 14:14 >>> To: dev@flink.apache.org >>> Cc: Ingo Bürk >>> Subject: Re:Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk >>> >>> >>> >>> >>> Congratulations, Ingo! >>> >>> >>> Best, >>> Yuepeng Pan >>> >>> >>> >>> >>> >>> At 2021-12-03 13:47:38, "Yun Gao" wrote: Congratulations Ingo! Best, Yun -- From:刘建刚 Send Time:2021 Dec. 3 (Fri.) 11:52 To:dev Cc:"Ingo Bürk" Subject:Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk Congratulations! Best, Liu Jiangang Till Rohrmann 于2021年12月2日周四 下午11:24写道: > Hi everyone, > > On behalf of the PMC, I'm very happy to announce Ingo Bürk as a new >>> Flink > committer. > > Ingo has started contributing to Flink since the beginning of this >>> year. He > worked mostly on SQL components. He has authored many PRs and helped >>> review > a lot of other PRs in this area. He actively reported issues and >> helped >>> our > users on the MLs. His most notable contributions were Support SQL 2016 >>> JSON > functions in Flink SQL (FLIP-90), Register sources/sinks in Table API > (FLIP-129) and various other contributions in the SQL area. Moreover, >>> he is > one of the few people in our community who actually understands >> Flink's > frontend. > > Please join me in congratulating Ingo for becoming a Flink committer! > > Cheers, > Till > >>> >>
Re: [ANNOUNCE] New Apache Flink Committer - Matthias Pohl
Congratulations Matthias! Best, Leonard > 2021年12月3日 下午11:23,Matthias Pohl 写道: > > Thank you! I'm looking forward to continue working with you. > > On Fri, Dec 3, 2021 at 7:29 AM Jingsong Li wrote: > >> Congratulations, Matthias! >> >> On Fri, Dec 3, 2021 at 2:13 PM Yuepeng Pan wrote: >>> >>> Congratulations Matthias! >>> >>> Best,Yuepeng Pan. >>> 在 2021-12-03 13:47:20,"Yun Gao" 写道: Congratulations Matthias! Best, Yun -- From:Jing Zhang Send Time:2021 Dec. 3 (Fri.) 13:45 To:dev Cc:Matthias Pohl Subject:Re: [ANNOUNCE] New Apache Flink Committer - Matthias Pohl Congratulations, Matthias! 刘建刚 于2021年12月3日周五 11:51写道: > Congratulations! > > Best, > Liu Jiangang > > Till Rohrmann 于2021年12月2日周四 下午11:28写道: > >> Hi everyone, >> >> On behalf of the PMC, I'm very happy to announce Matthias Pohl as a >> new >> Flink committer. >> >> Matthias has worked on Flink since August last year. He helped >> review a > ton >> of PRs. He worked on a variety of things but most notably the >> tracking > and >> reporting of concurrent exceptions, fixing HA bugs and deprecating >> and >> removing our Mesos support. He actively reports issues helping >> Flink to >> improve and he is actively engaged in Flink's MLs. >> >> Please join me in congratulating Matthias for becoming a Flink >> committer! >> >> Cheers, >> Till >> > >> >> >> >> -- >> Best, Jingsong Lee
[jira] [Created] (FLINK-25172) Fix Document display bug
quyanghaoren created FLINK-25172: Summary: Fix Document display bug Key: FLINK-25172 URL: https://issues.apache.org/jira/browse/FLINK-25172 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 1.14.0 Environment: *Environment:* Product environment. Stable 1.14 Reporter: quyanghaoren Fix For: 1.14.0 Attachments: image-2021-12-06-14-36-14-666.png The character style of the official website is abnormal. https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/tableapi/ !image-2021-12-06-14-36-14-666.png! -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-25173) Introduce CatalogLock
Jingsong Lee created FLINK-25173: Summary: Introduce CatalogLock Key: FLINK-25173 URL: https://issues.apache.org/jira/browse/FLINK-25173 Project: Flink Issue Type: Sub-task Components: Connectors / Hive, Table SQL / API Reporter: Jingsong Lee Fix For: 1.15.0

{code:java}
/**
 * An interface that allows source and sink to use global lock to some transaction-related things.
 */
@Internal
public interface CatalogLock extends Closeable {

    /** Run with catalog lock. The caller should tell catalog the database and table name. */
    <T> T runWithLock(String database, String table, Callable<T> callable) throws Exception;

    /** Factory to create {@link CatalogLock}. */
    interface Factory extends Serializable {
        CatalogLock create();
    }
}
{code}

Currently, only HiveCatalog can provide this catalog lock.

-- This message was sent by Atlassian Jira (v8.20.1#820001)
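A minimal usage sketch of the proposed interface: the committer class below is made up for illustration; only the CatalogLock and Factory calls come from the snippet above.

{code:java}
// Sketch only: how a sink/committer could serialize its metastore updates
// through the proposed CatalogLock.
public class LockingCommitter {

    private final CatalogLock.Factory lockFactory; // handed over by the catalog

    public LockingCommitter(CatalogLock.Factory lockFactory) {
        this.lockFactory = lockFactory;
    }

    public void commit(String database, String table) throws Exception {
        try (CatalogLock lock = lockFactory.create()) {
            // All metastore mutations for this table happen under the global lock.
            lock.runWithLock(
                    database,
                    table,
                    () -> {
                        // e.g. add partitions or update table properties here
                        return null;
                    });
        }
    }
}
{code}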
[jira] [Created] (FLINK-25174) Introduce ManagedTableFactory
Jingsong Lee created FLINK-25174: Summary: Introduce ManagedTableFactory Key: FLINK-25174 URL: https://issues.apache.org/jira/browse/FLINK-25174 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jingsong Lee Fix For: 1.15.0

We need an interface to discover the managed table factory implementation for managed table:

{code:java}
/**
 * Base interface for configuring a managed dynamic table connector. The managed table factory is
 * used when there is no {@link FactoryUtil#CONNECTOR} option.
 */
@Internal
public interface ManagedTableFactory extends DynamicTableFactory {

    @Override
    default String factoryIdentifier() {
        return "";
    }

    /**
     * Enrich options from catalog and session information.
     *
     * @return new options of this table.
     */
    Map<String, String> enrichOptions(Context context);

    /** Notifies the listener that a table creation occurred. */
    void onCreateTable(Context context);

    /** Notifies the listener that a table drop occurred. */
    void onDropTable(Context context);
}
{code}

A catalog that supports built-in dynamic table needs to implement the method in the Catalog (The GenericInMemoryCatalog and HiveCatalog will implement this method):

{code:java}
/**
 * If return true, the Table without specified connector will be translated to the Flink managed table.
 * See {@link CatalogBaseTable.TableKind#MANAGED}
 */
default boolean supportsManagedTable() {
    return false;
}
{code}

-- This message was sent by Atlassian Jira (v8.20.1#820001)
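A rough skeleton of what an implementation might look like, assuming the proposed interface above lands as sketched; the class name, the "path" option key, and the bodies are all illustrative rather than actual Flink code.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.flink.configuration.ConfigOption;

// Sketch only: ManagedTableFactory is the proposed interface quoted above and
// does not exist in a released Flink version yet.
public class ExampleManagedTableFactory implements ManagedTableFactory {

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        return Collections.emptySet();
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return Collections.emptySet();
    }

    @Override
    public Map<String, String> enrichOptions(Context context) {
        // Merge catalog/session information into the table's options.
        // The 'path' key is just an illustrative example.
        Map<String, String> enriched = new HashMap<>(context.getCatalogTable().getOptions());
        enriched.put("path", "/tmp/managed/" + context.getObjectIdentifier().asSummaryString());
        return enriched;
    }

    @Override
    public void onCreateTable(Context context) {
        // Provision storage for the newly created managed table.
    }

    @Override
    public void onDropTable(Context context) {
        // Clean up storage belonging to the dropped managed table.
    }
}
{code}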
[jira] [Created] (FLINK-25175) Introduce TableDescriptor.forManaged
Jingsong Lee created FLINK-25175: Summary: Introduce TableDescriptor.forManaged Key: FLINK-25175 URL: https://issues.apache.org/jira/browse/FLINK-25175 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jingsong Lee Fix For: 1.15.0

Introduce a Table API entry point for managed tables:

{code:java}
@PublicEvolving
public class TableDescriptor {

    /** Creates a new {@link Builder} for a managed dynamic table. */
    public static Builder forManaged() {
        return new Builder();
    }

    ...
}
{code}

-- This message was sent by Atlassian Jira (v8.20.1#820001)
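A hedged usage sketch: forManaged() is the method proposed in this issue, while the schema, table name, and surrounding calls are illustrative. Note the absence of a 'connector' option, which is what marks the table as managed.

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.TableEnvironment;

public class ManagedTableDescriptorExample {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // No 'connector' option: the table is backed by the built-in managed storage.
        TableDescriptor descriptor =
                TableDescriptor.forManaged()
                        .schema(
                                Schema.newBuilder()
                                        .column("user_id", DataTypes.BIGINT())
                                        .column("item_id", DataTypes.BIGINT())
                                        .build())
                        .build();

        tEnv.createTable("managed_orders", descriptor);
    }
}
{code}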
[jira] [Created] (FLINK-25176) Introduce "ALTER TABLE ... COMPACT" SQL
Jingsong Lee created FLINK-25176: Summary: Introduce "ALTER TABLE ... COMPACT" SQL Key: FLINK-25176 URL: https://issues.apache.org/jira/browse/FLINK-25176 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jingsong Lee Fix For: 1.15.0 * Introduce "ALTER TABLE ... COMPACT" SQL * Work with managed table -- This message was sent by Atlassian Jira (v8.20.1#820001)
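A sketch of what invoking the new statement could look like once implemented, assuming the grammar proposed in FLIP-188; the table and partition names are made up.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AlterTableCompactExample {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Compact the whole managed table ...
        tEnv.executeSql("ALTER TABLE managed_orders COMPACT");

        // ... or only a single partition of it.
        tEnv.executeSql("ALTER TABLE managed_orders PARTITION (ds = '2021-12-06') COMPACT");
    }
}
{code}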
[jira] [Created] (FLINK-25177) Support "DESCRIBE TABLE EXTENDED" with managed table
Jingsong Lee created FLINK-25177: Summary: Support "DESCRIBE TABLE EXTENDED" with managed table Key: FLINK-25177 URL: https://issues.apache.org/jira/browse/FLINK-25177 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jingsong Lee Fix For: 1.15.0 Expose the information described in the FLIP: https://cwiki.apache.org/confluence/display/FLINK/FLIP-188:+Introduce+Built-in+Dynamic+Table+Storage -- This message was sent by Atlassian Jira (v8.20.1#820001)
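A short sketch of how the statement could be issued once supported; the table name is illustrative and the exact set of exposed fields is whatever FLIP-188 specifies.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DescribeManagedTableExample {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Print the extended metadata of a managed table (fields as defined in FLIP-188).
        tEnv.executeSql("DESCRIBE TABLE EXTENDED managed_orders").print();
    }
}
{code}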
[jira] [Created] (FLINK-25178) Throw exception when managed table sink needs checkpoint
Jingsong Lee created FLINK-25178: Summary: Throw exception when managed table sink needs checkpoint Key: FLINK-25178 URL: https://issues.apache.org/jira/browse/FLINK-25178 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jingsong Lee Fix For: 1.15.0 In the past, many users ran into the problem that a sink produced no output because checkpointing was not enabled. For managed tables, the planner will throw an exception if checkpointing is not enabled. (Later we can add a public connector interface for this; Filesystem, Hive, Iceberg and Hudi need it as well.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
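A rough sketch of the kind of check the planner could perform; the method placement, class name, and error message are assumptions, not the actual implementation.

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.TableException;

// Sketch only: reject a managed table sink when checkpointing is disabled.
public class ManagedSinkValidation {

    static void validateCheckpointingEnabled(StreamExecutionEnvironment env) {
        if (!env.getCheckpointConfig().isCheckpointingEnabled()) {
            throw new TableException(
                    "Managed table sinks require checkpointing to be enabled, "
                            + "because results are only committed on checkpoints.");
        }
    }
}
{code}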
[jira] [Created] (FLINK-25179) Add document for array,map,row types support for parquet row writer
Jingsong Lee created FLINK-25179: Summary: Add document for array,map,row types support for parquet row writer Key: FLINK-25179 URL: https://issues.apache.org/jira/browse/FLINK-25179 Project: Flink Issue Type: Sub-task Components: Documentation Reporter: Jingsong Lee Fix For: 1.15.0 -- This message was sent by Atlassian Jira (v8.20.1#820001)
Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk
Congratulations, Ingo! On Mon, Dec 6, 2021 at 7:32 AM Leonard Xu wrote: > Congratulations, Ingo! Well Deserved. > > Best, > Leonard > > > 2021年12月3日 下午11:24,Ingo Bürk 写道: > > > > Thank you everyone for the warm welcome! > > > > > > Best > > Ingo > > > > On Fri, Dec 3, 2021 at 11:47 AM Ryan Skraba > > > wrote: > > > >> Congratulations Ingo! > >> > >> On Fri, Dec 3, 2021 at 8:17 AM Yun Tang wrote: > >> > >>> Congratulations, Ingo! > >>> > >>> Best > >>> Yun Tang > >>> > >>> From: Yuepeng Pan > >>> Sent: Friday, December 3, 2021 14:14 > >>> To: dev@flink.apache.org > >>> Cc: Ingo Bürk > >>> Subject: Re:Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk > >>> > >>> > >>> > >>> > >>> Congratulations, Ingo! > >>> > >>> > >>> Best, > >>> Yuepeng Pan > >>> > >>> > >>> > >>> > >>> > >>> At 2021-12-03 13:47:38, "Yun Gao" > wrote: > Congratulations Ingo! > > Best, > Yun > > > -- > From:刘建刚 > Send Time:2021 Dec. 3 (Fri.) 11:52 > To:dev > Cc:"Ingo Bürk" > Subject:Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk > > Congratulations! > > Best, > Liu Jiangang > > Till Rohrmann 于2021年12月2日周四 下午11:24写道: > > > Hi everyone, > > > > On behalf of the PMC, I'm very happy to announce Ingo Bürk as a new > >>> Flink > > committer. > > > > Ingo has started contributing to Flink since the beginning of this > >>> year. He > > worked mostly on SQL components. He has authored many PRs and helped > >>> review > > a lot of other PRs in this area. He actively reported issues and > >> helped > >>> our > > users on the MLs. His most notable contributions were Support SQL > 2016 > >>> JSON > > functions in Flink SQL (FLIP-90), Register sources/sinks in Table API > > (FLIP-129) and various other contributions in the SQL area. Moreover, > >>> he is > > one of the few people in our community who actually understands > >> Flink's > > frontend. > > > > Please join me in congratulating Ingo for becoming a Flink committer! > > > > Cheers, > > Till > > > >>> > >> > > -- Best regards, Sergey
Re: [ANNOUNCE] New Apache Flink Committer - Matthias Pohl
Congratulations, Matthias! On Mon, Dec 6, 2021 at 7:33 AM Leonard Xu wrote: > Congratulations Matthias! > > Best, > Leonard > > 2021年12月3日 下午11:23,Matthias Pohl 写道: > > > > Thank you! I'm looking forward to continue working with you. > > > > On Fri, Dec 3, 2021 at 7:29 AM Jingsong Li > wrote: > > > >> Congratulations, Matthias! > >> > >> On Fri, Dec 3, 2021 at 2:13 PM Yuepeng Pan wrote: > >>> > >>> Congratulations Matthias! > >>> > >>> Best,Yuepeng Pan. > >>> 在 2021-12-03 13:47:20,"Yun Gao" 写道: > Congratulations Matthias! > > Best, > Yun > > > -- > From:Jing Zhang > Send Time:2021 Dec. 3 (Fri.) 13:45 > To:dev > Cc:Matthias Pohl > Subject:Re: [ANNOUNCE] New Apache Flink Committer - Matthias Pohl > > Congratulations, Matthias! > > 刘建刚 于2021年12月3日周五 11:51写道: > > > Congratulations! > > > > Best, > > Liu Jiangang > > > > Till Rohrmann 于2021年12月2日周四 下午11:28写道: > > > >> Hi everyone, > >> > >> On behalf of the PMC, I'm very happy to announce Matthias Pohl as a > >> new > >> Flink committer. > >> > >> Matthias has worked on Flink since August last year. He helped > >> review a > > ton > >> of PRs. He worked on a variety of things but most notably the > >> tracking > > and > >> reporting of concurrent exceptions, fixing HA bugs and deprecating > >> and > >> removing our Mesos support. He actively reports issues helping > >> Flink to > >> improve and he is actively engaged in Flink's MLs. > >> > >> Please join me in congratulating Matthias for becoming a Flink > >> committer! > >> > >> Cheers, > >> Till > >> > > > >> > >> > >> > >> -- > >> Best, Jingsong Lee > > -- Best regards, Sergey
[jira] [Created] (FLINK-25180) Jepsen test fails while setting up libzip4
Till Rohrmann created FLINK-25180: - Summary: Jepsen test fails while setting up libzip4 Key: FLINK-25180 URL: https://issues.apache.org/jira/browse/FLINK-25180 Project: Flink Issue Type: Bug Components: Test Infrastructure Affects Versions: 1.15.0 Reporter: Till Rohrmann Fix For: 1.15.0 The Jepsen tests fail from time to time while trying to set up libzip4. {code} java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: throw+: {:type :jepsen.control/nonzero-exit, :cmd "sudo -S -u root bash -c \"cd /; env DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes libzip4\"", :exit -1, :out "Reading package lists... Building dependency tree... Reading state information... The following NEW packages will be installed: libzip4 0 upgraded, 1 newly installed, 0 to remove and 120 not upgraded. Need to get 40.6 kB of archives. After this operation, 103 kB of additional disk space will be used. Get:1 http://cdn-aws.deb.debian.org/debian stretch/main amd64 libzip4 amd64 1.1.2-1.1+b1 [40.6 kB] Fetched 40.6 kB in 0s (0 B/s) Selecting previously unselected package libzip4:amd64. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 49065 files and directories currently installed.) Preparing to unpack .../libzip4_1.1.2-1.1+b1_amd64.deb ... Unpacking libzip4:amd64 (1.1.2-1.1+b1) ... Setting up libzip4:amd64 (1.1.2-1.1+b1) ... ", :err "", :host "172.31.4.8", :action {:cmd "sudo -S -u root bash -c \"cd /; env DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes libzip4\"", :in "root "}} {:type :jepsen.control/nonzero-exit, :cmd "sudo -S -u root bash -c \"cd /; env DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes libzip4\"", :exit -1, :out "Reading package lists... Building dependency tree... Reading state information... The following NEW packages will be installed: libzip4 0 upgraded, 1 newly installed, 0 to remove and 120 not upgraded. Need to get 40.6 kB of archives. After this operation, 103 kB of additional disk space will be used. Get:1 http://cdn-aws.deb.debian.org/debian stretch/main amd64 libzip4 amd64 1.1.2-1.1+b1 [40.6 kB] Fetched 40.6 kB in 0s (0 B/s) Selecting previously unselected package libzip4:amd64. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 49065 files and directories currently installed.) Preparing to unpack .../libzip4_1.1.2-1.1+b1_amd64.deb ... Unpacking libzip4:amd64 (1.1.2-1.1+b1) ... Setting up libzip4:amd64 (1.1.2-1.1+b1) ... 
", :err "", :host "172.31.4.8", :action {:cmd "sudo -S -u root bash -c \"cd /; env DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes libzip4\"", :in "root"}} {code} https://app.travis-ci.com/github/dataArtisans/flink-jepsen-ci/jobs/550915650#L1300 -- This message was sent by Atlassian Jira (v8.20.1#820001)