26/03/27 08:00:28 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:00:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:00:58 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:01:16 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:01:16 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:01:16 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:01:16 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:01:16 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:01:16 INFO MetastoreMonitor: Metastore healthcheck successful (connection duration = 211 milliseconds)
26/03/27 08:01:16 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:01:16 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:01:16 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:01:16 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:01:16 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:01:16 INFO MetastoreMonitor: PoPProxy healthcheck successful (connection duration = 155 milliseconds)
26/03/27 08:01:19 INFO DeadlockDetector: Requested deadlock detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:01:19 INFO HangingThreadDetector: Requested hanging thread detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:01:28 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:01:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:01:35 INFO DbfsHadoop3: Initialized DBFS with DBFSV2 as the delegate.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3n. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasbs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme gs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3a. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme r2. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfss. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasb. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:01:35 INFO DatabricksFileSystemV2Factory: Creating abfss file system for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net
26/03/27 08:01:35 INFO AzureBlobFileSystem:V3: Initializing AzureBlobFileSystem for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net/957157258232692 with credential = FixedSASTokenProvider with jvmId = 901
26/03/27 08:01:35 INFO DriverCorral: DBFS health check ok
26/03/27 08:01:35 INFO HiveMetaStore: 0: get_database: default
26/03/27 08:01:35 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
26/03/27 08:01:36 INFO DriverCorral: Metastore health check ok
26/03/27 08:01:58 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:02:28 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:02:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:02:42 INFO ProgressReporter$: [599 occurrences] Reporting partial results for running commands: 1774596962274_6771957924431831931_job-718792142532376-run-54758330324283-action-5529254552433704
26/03/27 08:02:58 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:03:28 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:03:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:03:58 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:04:28 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:04:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:04:58 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:05:28 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:05:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:05:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:06:15 INFO ThreadDumpManager: ThreadDumpManager: thread dump requested due to Reason: {DAG_SCHEDULER_NO_ACTIVE_JOB.toString}, Count: 2
26/03/27 08:06:15 INFO ThreadDumpManager: ThreadDumpManager: thread dump took 6 ms.
26/03/27 08:06:15 INFO ThreadDumpManager: ThreadDumpManager: updated thread dump for 310 threads, removed 4 inactive threads.
26/03/27 08:06:15 INFO HmrNonQueryEventLogger: NQL logging rate limit was initialized to : 1 log lines per call site per second, bypass allow list: AppStatusListener.scala
26/03/27 08:06:15 WARN DeadlockDetector: Deadlock detection requested for the following reasons: Reason: {DAG_SCHEDULER_NO_ACTIVE_JOB.toString}, Count: 5 Deadlock detector did not find any deadlocks, but it is likely that a component is hanging.
26/03/27 08:06:16 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:06:16 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:06:16 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:06:16 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:06:16 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:06:16 INFO MetastoreMonitor: Metastore healthcheck successful (connection duration = 190 milliseconds)
26/03/27 08:06:16 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:06:16 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:06:16 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:06:16 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:06:16 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:06:16 INFO MetastoreMonitor: PoPProxy healthcheck successful (connection duration = 166 milliseconds)
26/03/27 08:06:19 INFO DeadlockDetector: Requested deadlock detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:06:19 INFO HangingThreadDetector: Requested hanging thread detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:06:20 INFO ContextCleaner: Skipping full GC. Observed GC latency: 0 s, # skipped GCs: 0
26/03/27 08:06:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:06:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:06:35 INFO DbfsHadoop3: Initialized DBFS with DBFSV2 as the delegate.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3n. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasbs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme gs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3a. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme r2. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfss. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasb. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:06:35 INFO DatabricksFileSystemV2Factory: Creating abfss file system for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net
26/03/27 08:06:35 INFO AzureBlobFileSystem:V3: Initializing AzureBlobFileSystem for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net/957157258232692 with credential = FixedSASTokenProvider with jvmId = 901
26/03/27 08:06:35 INFO DriverCorral: DBFS health check ok
26/03/27 08:06:35 INFO HiveMetaStore: 0: get_database: default
26/03/27 08:06:35 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
26/03/27 08:06:36 INFO DriverCorral: Metastore health check ok
26/03/27 08:06:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:07:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:07:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:07:42 INFO ProgressReporter$: [600 occurrences] Reporting partial results for running commands: 1774596962274_6771957924431831931_job-718792142532376-run-54758330324283-action-5529254552433704
26/03/27 08:07:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:08:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:08:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:08:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:09:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:09:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:09:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:10:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:10:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:10:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:11:16 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:11:16 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:11:16 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:11:17 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:11:17 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:11:17 INFO MetastoreMonitor: Metastore healthcheck successful (connection duration = 165 milliseconds)
26/03/27 08:11:17 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:11:17 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:11:17 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:11:17 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:11:17 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:11:17 INFO MetastoreMonitor: PoPProxy healthcheck successful (connection duration = 160 milliseconds)
26/03/27 08:11:19 INFO DeadlockDetector: Requested deadlock detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:11:19 INFO HangingThreadDetector: Requested hanging thread detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:11:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:11:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:11:35 INFO DbfsHadoop3: Initialized DBFS with DBFSV2 as the delegate.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3n. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasbs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme gs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3a. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme r2. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfss. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasb. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:11:35 INFO DatabricksFileSystemV2Factory: Creating abfss file system for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net
26/03/27 08:11:35 INFO AzureBlobFileSystem:V3: Initializing AzureBlobFileSystem for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net/957157258232692 with credential = FixedSASTokenProvider with jvmId = 901
26/03/27 08:11:35 INFO DriverCorral: DBFS health check ok
26/03/27 08:11:35 INFO HiveMetaStore: 0: get_database: default
26/03/27 08:11:35 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
26/03/27 08:11:36 INFO DriverCorral: Metastore health check ok
26/03/27 08:11:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:12:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:12:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:12:42 INFO ProgressReporter$: [599 occurrences] Reporting partial results for running commands: 1774596962274_6771957924431831931_job-718792142532376-run-54758330324283-action-5529254552433704
26/03/27 08:12:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:13:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:13:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:13:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:14:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:14:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:14:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:15:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:15:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:15:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:16:15 INFO ThreadDumpManager: ThreadDumpManager: thread dump requested due to Reason: {DAG_SCHEDULER_NO_ACTIVE_JOB.toString}, Count: 2
26/03/27 08:16:15 INFO ThreadDumpManager: ThreadDumpManager: thread dump took 5 ms.
26/03/27 08:16:15 INFO ThreadDumpManager: ThreadDumpManager: updated thread dump for 308 threads, removed 2 inactive threads.
26/03/27 08:16:17 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:16:17 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:16:17 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:16:17 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:16:17 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:16:17 INFO MetastoreMonitor: Metastore healthcheck successful (connection duration = 215 milliseconds)
26/03/27 08:16:17 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:16:17 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:16:17 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:16:17 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:16:17 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:16:17 INFO MetastoreMonitor: PoPProxy healthcheck successful (connection duration = 174 milliseconds)
26/03/27 08:16:19 INFO DeadlockDetector: Requested deadlock detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:16:19 INFO HangingThreadDetector: Requested hanging thread detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:16:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:16:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:16:35 INFO DbfsHadoop3: Initialized DBFS with DBFSV2 as the delegate.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3n. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasbs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme gs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3a. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme r2. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfss. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasb. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:16:35 INFO DatabricksFileSystemV2Factory: Creating abfss file system for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net
26/03/27 08:16:35 INFO AzureBlobFileSystem:V3: Initializing AzureBlobFileSystem for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net/957157258232692 with credential = FixedSASTokenProvider with jvmId = 901
26/03/27 08:16:35 INFO DriverCorral: DBFS health check ok
26/03/27 08:16:35 INFO HiveMetaStore: 0: get_database: default
26/03/27 08:16:35 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
26/03/27 08:16:36 INFO DriverCorral: Metastore health check ok
26/03/27 08:16:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:17:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:17:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:17:42 INFO ProgressReporter$: [600 occurrences] Reporting partial results for running commands: 1774596962274_6771957924431831931_job-718792142532376-run-54758330324283-action-5529254552433704
26/03/27 08:17:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:18:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:18:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:18:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:19:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:19:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:19:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:20:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:20:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:20:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:21:17 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:21:17 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:21:17 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:21:17 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:21:17 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:21:17 INFO MetastoreMonitor: Metastore healthcheck successful (connection duration = 193 milliseconds)
26/03/27 08:21:17 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeuropec2-prod-metastore-1.mysql.database.azure.com:9207/organization957157258232692?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
26/03/27 08:21:17 INFO HikariDataSource: metastore-monitor - Starting...
26/03/27 08:21:17 INFO HikariDataSource: metastore-monitor - Start completed.
26/03/27 08:21:17 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
26/03/27 08:21:17 INFO HikariDataSource: metastore-monitor - Shutdown completed.
26/03/27 08:21:17 INFO MetastoreMonitor: PoPProxy healthcheck successful (connection duration = 176 milliseconds)
26/03/27 08:21:19 INFO DeadlockDetector: Requested deadlock detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:21:19 INFO HangingThreadDetector: Requested hanging thread detection caused by: DAG_SCHEDULER_NO_ACTIVE_JOB
26/03/27 08:21:29 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key
26/03/27 08:21:35 INFO DatabricksILoop$: Received SAFEr configs with version 1774596773376
26/03/27 08:21:35 INFO DbfsHadoop3: Initialized DBFS with DBFSV2 as the delegate.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3n. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasbs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme gs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme s3a. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme r2. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfss. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme abfs. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO HadoopFSUtil$: Installing the old filesystem implementation for scheme wasb. Current value: com.databricks.sql.acl.fs.CredentialScopeFileSystem. Old value to be restored: com.databricks.sql.io.LokiFileSystem.
26/03/27 08:21:35 INFO DatabricksFileSystemV2Factory: Creating abfss file system for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net
26/03/27 08:21:35 INFO AzureBlobFileSystem:V3: Initializing AzureBlobFileSystem for abfss://REDACTED_CREDENTIALS(63a9f0ea)@dbstoragepyw5jx2slb6h2.dfs.core.windows.net/957157258232692 with credential = FixedSASTokenProvider with jvmId = 901
26/03/27 08:21:35 INFO DriverCorral: DBFS health check ok
26/03/27 08:21:35 INFO HiveMetaStore: 0: get_database: default
26/03/27 08:21:35 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
26/03/27 08:21:36 INFO DriverCorral: Metastore health check ok
26/03/27 08:21:40 INFO ProgressReporter$: Removed result fetcher for 1774596962274_6771957924431831931_job-718792142532376-run-54758330324283-action-5529254552433704
26/03/27 08:21:46 INFO SharedState: Maintenance auto compute client is enabled.
26/03/27 08:21:46 INFO MACBackgroundThread: Expected non-empty token for a synchronous push.
26/03/27 08:21:46 INFO SparkAdapter: Successfully added uid 0 to driver local context 26/03/27 08:21:46 INFO ProgressReporter$: Added result fetcher for 1774596962273_5789661616817877326_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:46 INFO ProgressReporter$: Removed result fetcher for 1774596962273_5789661616817877326_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:46 INFO DatabricksStreamingQueryListener: Command execution complete on Driver, no active Streaming query runs associated with command 1774596962273_5789661616817877326_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:46 INFO SparkAdapter: Successfully added uid 0 to driver local context 26/03/27 08:21:46 INFO ProgressReporter$: Added result fetcher for 1774596962273_5391691400952360127_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:46 INFO ProgressReporter$: Removed result fetcher for 1774596962273_5391691400952360127_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:46 INFO DatabricksStreamingQueryListener: Command execution complete on Driver, no active Streaming query runs associated with command 1774596962273_5391691400952360127_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:47 INFO SparkAdapter: Successfully added uid 0 to driver local context 26/03/27 08:21:47 INFO ProgressReporter$: Added result fetcher for 1774596962273_5749136870944699242_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:47 INFO ProgressReporter$: [5 occurrences] Reporting progress for running commands: 1774596962273_5749136870944699242_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:47 INFO ProgressReporter$: Removed result fetcher for 1774596962273_5749136870944699242_run-54758330324283-ec-5257567151664308602-log-command 26/03/27 08:21:49 INFO ReplManagerImpl: REPL ReplId-19d2e-38b7e-3 finished addReplToExecutionContext (1) 26/03/27 08:21:49 INFO 
ColdStartReplFactory: Cold-start repl creation for: scala, ReplInfo(driverReplId=ReplId-19d2e-38b7e-3, chauffeurReplId=ReplId-19d2e-38b7e-3, executionContextId=Some(ExecutionContextIdV2(1)), lazyInfoInitialized=true) 26/03/27 08:21:49 INFO ReplOuterWrapper: DriverWrapper created for REPL ReplInfo(driverReplId=ReplId-19d2e-38b7e-3, chauffeurReplId=ReplId-19d2e-38b7e-3, executionContextId=Some(ExecutionContextIdV2(1)), lazyInfoInitialized=true) 26/03/27 08:21:49 INFO ReplManagerImpl: Adding repl ReplId-19d2e-38b7e-3 to execution context 1 (2) 26/03/27 08:21:49 INFO ReplManagerImpl: StartReplOptions: 26/03/27 08:21:49 INFO ReplManagerImpl: Skipping snapshot restore for REPL ReplId-19d2e-38b7e-3 because language is scala (only Python REPLs support snapshot restore) 26/03/27 08:21:49 INFO ScalaDriverWrapper: REPL status transitioned to Starting ReplId-19d2e-38b7e-3 (3) 26/03/27 08:21:49 INFO ScalaDriverWrapper: Starting ReplId-19d2e-38b7e-3 - driverStatus transitioned to Starting (4) 26/03/27 08:21:49 INFO ScalaDriverWrapper: Starting Scala DriverLocal for REPL ReplId-19d2e-38b7e-3 (5 - 0) 26/03/27 08:21:50 INFO ScalaDriverWrapper: Driver instantiated for ReplId-19d2e-38b7e-3 (6) 26/03/27 08:21:50 INFO ScalaDriverWrapper: REPL started but is idle: ReplInfo(driverReplId=ReplId-19d2e-38b7e-3, chauffeurReplId=ReplId-19d2e-38b7e-3, executionContextId=Some(ExecutionContextIdV2(1)), lazyInfoInitialized=true) (7) 26/03/27 08:21:50 INFO ScalaDriverWrapper: setupRepl:ReplInfo(driverReplId=ReplId-19d2e-38b7e-3, chauffeurReplId=ReplId-19d2e-38b7e-3, executionContextId=Some(ExecutionContextIdV2(1)), lazyInfoInitialized=true): finished to load 26/03/27 08:21:50 INFO ScalaDriverWrapper: Finished setting-up REPL ReplId-19d2e-38b7e-3, accepting commands (8) 26/03/27 08:21:50 INFO SparkAdapter: Successfully added uid 0 to driver local context 26/03/27 08:21:50 INFO ProgressReporter$: Added result fetcher for 
1774596962275_5665917733737994731_run-54758330324283-ec-5257567151664308602-upload-artifacts 26/03/27 08:21:50 INFO SignalUtils: Registering signal handler for INT 26/03/27 08:21:50 WARN DriverILoop: Could not determine classpath from classloader of type class jdk.internal.loader.ClassLoaders$AppClassLoader 26/03/27 08:21:50 INFO DriverILoop: Set class prefix to: scala.tools.nsc.interpreter.Naming$sessionNames$@3e658960 26/03/27 08:21:50 INFO DriverILoop: set ContextClassLoader 26/03/27 08:21:51 INFO DriverILoop: initialized intp 26/03/27 08:21:51 INFO PythonDriverLocalBase$RedirectThread: Python RedirectThread exit 26/03/27 08:21:51 INFO PythonDriverLocalBase$RedirectThread: Python RedirectThread exit 26/03/27 08:21:51 INFO ReplCrashUtils$: python shell exit code: 143; replId: ReplId-19d2e-38b7e-2, pid: 1819 26/03/27 08:21:51 INFO PythonDriverWrapper: Stopping main loop for REPL ReplId-19d2e-38b7e-2 26/03/27 08:21:51 INFO ReplManagerImpl: ReplInfo(driverReplId=ReplId-19d2e-38b7e-2, chauffeurReplId=ReplId-19d2e-38b7e-2, executionContextId=Some(ExecutionContextIdV2(5257567151664308602)), lazyInfoInitialized=true) successfully discarded 26/03/27 08:21:52 INFO MlflowAutologEventPublisher$: Subscriber with repl ID ReplId-19d2e-38b7e-2 not responding to health checks, removing it 26/03/27 08:21:55 INFO ProgressReporter$: Removed result fetcher for 1774596962275_5665917733737994731_run-54758330324283-ec-5257567151664308602-upload-artifacts 26/03/27 08:21:55 INFO DatabricksStreamingQueryListener: Command execution complete on Driver, no active Streaming query runs associated with command 1774596962275_5665917733737994731_run-54758330324283-ec-5257567151664308602-upload-artifacts 26/03/27 08:21:55 INFO SparkAdapter: Successfully added uid 0 to driver local context 26/03/27 08:21:55 INFO ProgressReporter$: Added result fetcher for 1774596962275_8894045962250815979_run-54758330324283-ec-5257567151664308602-upload-artifacts 26/03/27 08:21:55 INFO PresignedUrlClientUtils$: 
FS_OP_CREATE FILE[https://dbstoragepyw5jx2slb6h2.blob.core.windows.net/jobs/957157258232692/FileStore/job-result/job-718792142532376/run-54758330324283/dbt-output.tar.gz] Presigned URL: Started uploading file using AzureSasUri upload_type=AzureSasUri operation_id=PresignedUrlUpload-883649e44d225e36 26/03/27 08:21:55 INFO PresignedUrlClientUtils$: FS_OP_CREATE FILE[https://dbstoragepyw5jx2slb6h2.blob.core.windows.net/jobs/957157258232692/FileStore/job-result/job-718792142532376/run-54758330324283/dbt-output.tar.gz] executeHttpRequest finished with status code 201; numRetries = 0 upload_type=AzureSasUri operation_id=PresignedUrlUpload-883649e44d225e36 part_index=0 part_position=0 part_length=183154 uuid_base64_encoded=YWVhYjY4YzUtODEwMS00ZGRiLWJmYTktOWM5MjJmMmI1ODAx upload_step=put_block method=PUT 26/03/27 08:21:55 INFO PresignedUrlClientUtils$: FS_OP_CREATE FILE[https://dbstoragepyw5jx2slb6h2.blob.core.windows.net/jobs/957157258232692/FileStore/job-result/job-718792142532376/run-54758330324283/dbt-output.tar.gz] executeHttpRequest finished with status code 201; numRetries = 0 upload_type=AzureSasUri operation_id=PresignedUrlUpload-883649e44d225e36 upload_step=put_block_list method=PUT 26/03/27 08:21:55 INFO PresignedUrlClientUtils$: FS_OP_CREATE FILE[https://dbstoragepyw5jx2slb6h2.blob.core.windows.net/jobs/957157258232692/FileStore/job-result/job-718792142532376/run-54758330324283/dbt-output.tar.gz] Presigned URL: Successfully uploaded file using AzureSasUri upload_type=AzureSasUri operation_id=PresignedUrlUpload-883649e44d225e36 26/03/27 08:21:55 INFO ProgressReporter$: Removed result fetcher for 1774596962275_8894045962250815979_run-54758330324283-ec-5257567151664308602-upload-artifacts 26/03/27 08:21:55 INFO DatabricksStreamingQueryListener: Command execution complete on Driver, no active Streaming query runs associated with command 1774596962275_8894045962250815979_run-54758330324283-ec-5257567151664308602-upload-artifacts 26/03/27 08:21:56 INFO SparkAdapter: 
Successfully added uid 0 to driver local context 26/03/27 08:21:56 INFO ProgressReporter$: Added result fetcher for 1774596962273_6456912772805468204_run-54758330324283-ec-5257567151664308602-cleanup-tmp-dir 26/03/27 08:21:56 INFO ProgressReporter$: Removed result fetcher for 1774596962273_6456912772805468204_run-54758330324283-ec-5257567151664308602-cleanup-tmp-dir 26/03/27 08:21:56 INFO DatabricksStreamingQueryListener: Command execution complete on Driver, no active Streaming query runs associated with command 1774596962273_6456912772805468204_run-54758330324283-ec-5257567151664308602-cleanup-tmp-dir 26/03/27 08:21:58 INFO StandaloneSchedulerBackend$StandaloneDriverEndpoint: Executor 0 pending removal: cluster termination 26/03/27 08:21:58 INFO ShutdownHookManager$: Running shutdown hook defined at com.databricks.backend.daemon.driver.DriverDaemon 26/03/27 08:21:58 INFO DriverDaemon: Run shutdown hook for driver corral backend 26/03/27 08:21:58 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20260327073620-0000/0 is now LOST (worker lost: 10.102.48.13:32817 got disassociated) 26/03/27 08:21:58 INFO StandaloneSchedulerBackend: Executor app-20260327073620-0000/0 removed: worker lost: 10.102.48.13:32817 got disassociated 26/03/27 08:21:58 INFO TempTableCleaner$: Cleaning up session-local temporary tables for all execution contexts 26/03/27 08:21:58 ERROR TaskSchedulerImpl: Lost executor 0 on 10.102.48.13: cluster termination 26/03/27 08:21:58 INFO StandaloneAppClient$ClientEndpoint: Master removed worker worker-20260327073606-10.102.48.13-32817: 10.102.48.13:32817 got disassociated 26/03/27 08:21:58 INFO StandaloneSchedulerBackend: Worker worker-20260327073606-10.102.48.13-32817 removed: 10.102.48.13:32817 got disassociated 26/03/27 08:21:58 INFO DAGScheduler: Executor lost: 0 (epoch 0) 26/03/27 08:21:58 INFO BlockManagerMaster: Removal of executor 0 requested 26/03/27 08:21:58 INFO TaskSchedulerImpl: Handle removed worker 
worker-20260327073606-10.102.48.13-32817: 10.102.48.13:32817 got disassociated 26/03/27 08:21:58 INFO BlockManagerMasterEndpoint: Trying to remove executor 0 from BlockManagerMaster. 26/03/27 08:21:58 INFO DBCEventLoggingListener: Rolling event log; numTimesRolledOver = 1 26/03/27 08:21:58 INFO DAGScheduler: Shuffle files lost for worker worker-20260327073606-10.102.48.13-32817 on host 10.102.48.13 26/03/27 08:21:58 INFO DBCEventLoggingListener: Rolled active log file /databricks/driver/eventlogs/1040499146886525206/eventlog to /databricks/driver/eventlogs/1040499146886525206/eventlog-2026-03-27--07-40, size = 694351 26/03/27 08:21:58 INFO TempTableCleaner$: Finished cleaning up session-local temporary tables 26/03/27 08:21:58 INFO ShutdownHookManager$: Completed shutdown hook defined at com.databricks.backend.daemon.driver.DriverDaemon 26/03/27 08:21:58 INFO ShutdownHookManager$: Running shutdown hook defined at com.databricks.DatabricksMain.initDatabricks 26/03/27 08:21:58 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(0, 10.102.48.13, 40227, None) 26/03/27 08:21:58 INFO BlockManagerMasterEndpoint: Removed 0 successfully in removeExecutor 26/03/27 08:21:58 INFO DBCEventLoggingListener: Logging events to eventlogs/1040499146886525206/eventlog 26/03/27 08:21:58 INFO ShutdownHookManager$: Completed shutdown hook defined at com.databricks.DatabricksMain.initDatabricks 26/03/27 08:21:59 INFO ShutdownHookManager$: Running shutdown hook defined at com.databricks.DatabricksMain.initDatabricks 26/03/27 08:21:59 INFO DrainingState: Started draining: min wait 10000, grace period 5000, max wait 15000. 
26/03/27 08:21:59 INFO Storelet[]:: [Storelet.scala:1347] [every=30s] Received performOperations for namespace unified-softstore-metadata and key 26/03/27 08:21:59 INFO AsyncEventQueue: Process of event SparkListenerExecutorRemoved(1774599718551,0,{"cause":"cluster termination","detectionMechanism":null}) bylistener org.apache.spark.status.AppStatusListener took 1278.821068ms. 26/03/27 08:21:59 INFO DBCEventLoggingListener: Compressed rolled file /databricks/driver/eventlogs/1040499146886525206/eventlog-2026-03-27--07-40 to /databricks/driver/eventlogs/1040499146886525206/eventlog-2026-03-27--07-40.gz in 1273ms, size = 104113 26/03/27 08:21:59 INFO DBCEventLoggingListener: Deleted rolled file eventlogs/1040499146886525206/eventlog-2026-03-27--07-40