2021/06/01 16:19:08 Starting App Insight Logger for task: runTaskLet
2021/06/01 16:19:08 Attempt 1 of http call to http://10.0.0.5:16384/sendlogstoartifacts/info
2021/06/01 16:19:08 Attempt 1 of http call to http://10.0.0.5:16384/sendlogstoartifacts/status
[2021-06-01T16:19:08.544457] Entering context manager injector.
[context_manager_injector.py] Command line Options: Namespace(inject=['ProjectPythonPath:context_managers.ProjectPythonPath', 'Dataset:context_managers.Datasets', 'RunHistory:context_managers.RunHistory', 'TrackUserError:context_managers.TrackUserError'], invocation=['urldecode_invoker.py', 'python', '-m', 'azureml.studio.modulehost.module_invoker', '--module-name=azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module', '--evaluation-results', 'DatasetOutputConfig:Evaluation_results', '--scored-dataset=DatasetConsumptionConfig:Scored_dataset'])
Initialize DatasetContextManager.
Script type = None
Set Dataset Scored_dataset's target path to /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3
Enter __enter__ of DatasetContextManager
SDK version: azureml-core==1.26.0 azureml-dataprep==2.13.2. Session id: 02e79cef-2f9c-4364-9fa4-3e84ea3eeb52. Run id: 01d28a65-8b6f-43e7-83da-578ccf8ce970.
Processing 'Scored_dataset'.
2021/06/01 16:19:13 Not exporting to RunHistory as the exporter is either stopped or there is no data. Stopped: false OriginalData: 1 FilteredData: 0.
Processing dataset FileDataset
{
  "source": [
    "('workspaceblobstore', 'azureml/53570cd0-8025-4f6c-a3c4-cc83ea11076d/Results_dataset')"
  ],
  "definition": [
    "GetDatastoreFiles"
  ],
  "registration": {
    "id": "70da02af-7ea8-413b-9037-a87d224fedc3",
    "name": null,
    "version": null,
    "workspace": "Workspace.create(name='', subscription_id='', resource_group='')"
  }
}
Mounting Scored_dataset to /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3.
Mounted Scored_dataset to /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3 as folder.
Processing 'Evaluation_results'.
Exit __enter__ of DatasetContextManager
[2021-06-01T16:19:28.005650] Entering Run History Context Manager.
[2021-06-01T16:19:28.157304] Current directory: /mnt/batch/tasks/shared/LS_root/jobs/azureml/01d28a65-8b6f-43e7-83da-578ccf8ce970/mounts/workspaceblobstore/azureml/01d28a65-8b6f-43e7-83da-578ccf8ce970
[2021-06-01T16:19:28.157531] Preparing to call script [urldecode_invoker.py] with arguments: ['python', '-m', 'azureml.studio.modulehost.module_invoker', '--module-name=azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module', '--evaluation-results', '$Evaluation_results', '--scored-dataset=$Scored_dataset']
[2021-06-01T16:19:28.157614] After variable expansion, calling script [urldecode_invoker.py] with arguments: ['python', '-m', 'azureml.studio.modulehost.module_invoker', '--module-name=azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module', '--evaluation-results', '/tmp/Evaluation_results_workspaceblobstore', '--scored-dataset=/tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3']
Session_id = 02e79cef-2f9c-4364-9fa4-3e84ea3eeb52
Invoking module by urldecode_invoker 0.0.8. Module type: official module.
Using runpy to invoke module 'azureml.studio.modulehost.module_invoker'.
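
The last step above ("Using runpy to invoke module ...") is ordinary Python machinery. Below is a minimal sketch of the equivalent call, assuming the expanded arguments shown in the log; the invoker's real argument handling is not visible here, so the argv setup is a reconstruction, not the actual urldecode_invoker code.

    import runpy
    import sys

    # Hypothetical reconstruction: argv mirrors the arguments logged
    # above after variable expansion.
    sys.argv = [
        "module_invoker",
        "--module-name=azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module",
        "--evaluation-results", "/tmp/Evaluation_results_workspaceblobstore",
        "--scored-dataset=/tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3",
    ]
    # Run the module host as if it were launched with "python -m ..."
    runpy.run_module("azureml.studio.modulehost.module_invoker", run_name="__main__")
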
2021-06-01 16:19:28,769 studio.modulehost INFO Reset logging level to DEBUG
2021-06-01 16:19:28,769 studio.modulehost INFO Load pyarrow.parquet explicitly:
2021-06-01 16:19:28,769 studio.core INFO execute_with_cli - Start:
2021-06-01 16:19:28,769 studio.modulehost INFO | ALGHOST 0.0.153
2021-06-01 16:19:29,494 studio.modulehost INFO | CLI arguments parsed: {'module_name': 'azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module', 'OutputPortsInternal': {'Evaluation results': '/tmp/Evaluation_results_workspaceblobstore'}, 'InputPortsInternal': {'Scored dataset': '/tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3'}}
2021-06-01 16:19:29,502 studio.modulehost INFO | Invoking ModuleEntry(azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module; EvaluateModelModule; run)
2021-06-01 16:19:29,502 studio.core DEBUG | Input Ports:
2021-06-01 16:19:29,502 studio.core DEBUG | | Scored dataset =
2021-06-01 16:19:29,502 studio.core DEBUG | Output Ports:
2021-06-01 16:19:29,502 studio.core DEBUG | | Evaluation results = /tmp/Evaluation_results_workspaceblobstore
2021-06-01 16:19:29,502 studio.core DEBUG | Parameters:
2021-06-01 16:19:29,502 studio.core DEBUG | | (empty)
2021-06-01 16:19:29,503 studio.core DEBUG | Environment Variables:
2021-06-01 16:19:29,503 studio.core DEBUG | | AZUREML_DATAREFERENCE_Scored_dataset = /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3
2021-06-01 16:19:29,503 studio.core INFO | Reflect input ports and parameters - Start:
2021-06-01 16:19:29,503 studio.core INFO | | Handle input port "Scored dataset" - Start:
2021-06-01 16:19:29,503 studio.core INFO | | | Mount/Download dataset to '/tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3' - Start:
2021-06-01 16:19:29,503 studio.modulehost DEBUG | | | | Content of directory /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3:
2021-06-01 16:19:29,770 studio.modulehost DEBUG | | | | | _meta.yaml
2021-06-01 16:19:29,770 studio.modulehost DEBUG | | | | | _samples.json
2021-06-01 16:19:29,770 studio.modulehost DEBUG | | | | | data.dataset
2021-06-01 16:19:29,771 studio.modulehost DEBUG | | | | | data.dataset.parquet
2021-06-01 16:19:29,771 studio.modulehost DEBUG | | | | | data.visualization
2021-06-01 16:19:29,793 studio.modulehost DEBUG | | | | | schema/_schema.json
2021-06-01 16:19:29,793 studio.core INFO | | | Mount/Download dataset to '/tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3' - End with 0.2899s elapsed.
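
The mounted folder listed above is a "DataFrameDirectory": a Parquet payload (data.dataset.parquet) plus sidecar metadata (_meta.yaml, _samples.json, schema/_schema.json, data.visualization). To inspect the same data outside the module host, a minimal sketch assuming pandas with pyarrow as the Parquet engine (the host "loads pyarrow.parquet explicitly" above):

    import pandas as pd  # requires pyarrow installed as the parquet engine

    mount_root = "/tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3"
    df = pd.read_parquet(f"{mount_root}/data.dataset.parquet")

    # The log later reports 29533 Row(s) and 259 Columns for this input.
    print(df.shape)
    print([c for c in df.columns if c.startswith("DistancesToClusterCenter")])
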
2021-06-01 16:19:29,794 studio.core INFO | | | Try to read from /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3 via meta - Start:
Downloaded path: /tmp/tmp56gk_zm9/272ccf8f-ba5f-413a-b3b5-b62b0bcb05eb/azureml/53570cd0-8025-4f6c-a3c4-cc83ea11076d/Results_dataset/_meta.yaml is different from target path: /tmp/tmp56gk_zm9/272ccf8f-ba5f-413a-b3b5-b62b0bcb05eb/_meta.yaml
Downloaded path: /tmp/tmp56gk_zm9/272ccf8f-ba5f-413a-b3b5-b62b0bcb05eb/azureml/53570cd0-8025-4f6c-a3c4-cc83ea11076d/Results_dataset/schema/_schema.json is different from target path: /tmp/tmp56gk_zm9/272ccf8f-ba5f-413a-b3b5-b62b0bcb05eb/schema/_schema.json
Downloaded path: /tmp/tmp56gk_zm9/272ccf8f-ba5f-413a-b3b5-b62b0bcb05eb/azureml/53570cd0-8025-4f6c-a3c4-cc83ea11076d/Results_dataset/data.dataset.parquet is different from target path: /tmp/tmp56gk_zm9/272ccf8f-ba5f-413a-b3b5-b62b0bcb05eb/data.dataset.parquet
Downloaded path: /tmp/tmp56gk_zm9/272ccf8f-ba5f-413a-b3b5-b62b0bcb05eb/azureml/53570cd0-8025-4f6c-a3c4-cc83ea11076d/Results_dataset/data.dataset is different from target path: /tmp/tmp56gk_zm9/272ccf8f-ba5f-413a-b3b5-b62b0bcb05eb/data.dataset
2021-06-01 16:19:30,807 studio.common INFO | | | | Load DataTableMeta successfully, path=data.dataset
2021-06-01 16:19:30,842 studio.common INFO | | | | Load meta data from directory successfully, data=DataFrameDirectory(meta={'type': 'DataFrameDirectory', 'visualization': [{'type': 'Visualization', 'path': 'data.visualization'}], 'extension': {'DataTableMeta': 'data.dataset'}, 'format': 'Parquet', 'data': 'data.dataset.parquet', 'samples': '_samples.json', 'schema': 'schema/_schema.json'}), type=
2021-06-01 16:19:30,846 studio.core INFO | | | Try to read from /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3 via meta - End with 1.0523s elapsed.
2021-06-01 16:19:30,846 studio.core INFO | | Handle input port "Scored dataset" - End with 1.3434s elapsed.
2021-06-01 16:19:30,846 studio.core INFO | | Handle input port "Scored dataset to compare" - Start:
2021-06-01 16:19:30,847 studio.modulehost WARNING | | | File 'None' does not exist.
2021-06-01 16:19:30,847 studio.core INFO | | Handle input port "Scored dataset to compare" - End with 0.0001s elapsed.
2021-06-01 16:19:30,847 studio.core INFO | Reflect input ports and parameters - End with 1.3438s elapsed.
2021-06-01 16:19:30,847 studio.core INFO | EvaluateModelModule.run - Start:
2021-06-01 16:19:30,847 studio.core DEBUG | | kwargs:
2021-06-01 16:19:30,847 studio.core DEBUG | | | scored_data =
2021-06-01 16:19:30,847 studio.core DEBUG | | | scored_data_to_compare = None
2021-06-01 16:19:30,847 studio.core DEBUG | | validated_args:
2021-06-01 16:19:30,847 studio.core DEBUG | | | scored_data =
2021-06-01 16:19:30,847 studio.core DEBUG | | | scored_data_to_compare = None
2021-06-01 16:19:30,847 studio.module INFO | | Validate input data (Scored Data).
2021-06-01 16:19:30,848 studio.module INFO | | Get a TaskType.Cluster Model Scored Data from InputPort1.
2021-06-01 16:19:30,848 studio.module INFO | | Scored data from InputPort1 has 29533 Row(s) and 259 Columns.
2021-06-01 16:19:30,849 studio.module INFO | | Validated input data.
2021-06-01 16:19:30,849 studio.module DEBUG | | Use Clustering Metric.
2021-06-01 16:19:30,849 studio.core INFO | | Evaluate Scored Data - Start:
2021-06-01 16:19:30,849 studio.module INFO | | | Evaluate data set with None as Label Column.
2021-06-01 16:19:30,946 studio.core INFO | | Evaluate Scored Data - End with 0.0974s elapsed.
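
The failure recorded below comes out of the clustering evaluator's consistency check: every cluster id appearing in the Assignments column needs a matching "DistancesToClusterCenter no.<id>" column. A sketch of that check as inferred from the traceback and error message follows; the column-name template and the int coercion are assumptions based on "DistancesToClusterCenter no.144" in the locals dump and "cluster 31.0" in the error, not the evaluator's actual code.

    import pandas as pd

    df = pd.read_parquet(
        "/tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3/data.dataset.parquet"
    )

    # For each assigned cluster id, verify the matching distance column exists.
    for cluster_id in sorted(df["Assignments"].dropna().unique()):
        column = f"DistancesToClusterCenter no.{int(cluster_id)}"
        if column not in df.columns:
            print(f"cluster {cluster_id}: no column {column!r}")  # -> cluster 31.0 here
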
2021-06-01 16:19:30,946 studio.core INFO | EvaluateModelModule.run - End with 0.0994s elapsed.
2021-06-01 16:19:30,946 studio.modulehost INFO | Set error info in module statistics
2021-06-01 16:19:30,946 studio.core INFO | Logging exception information of module execution - Start:
2021-06-01 16:19:30,947 studio.modulehost INFO | | Session_id = 02e79cef-2f9c-4364-9fa4-3e84ea3eeb52
2021-06-01 16:19:30,947 studio.core INFO | | ModuleStatistics.log_stack_trace_telemetry - Start:
2021-06-01 16:19:31,711 studio.core INFO | | ModuleStatistics.log_stack_trace_telemetry - End with 0.7645s elapsed.
2021-06-01 16:19:31,712 studio.modulehost ERROR | | Get ModuleError when invoking ModuleEntry(azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module; EvaluateModelModule; run)
Traceback (most recent call last):
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 379, in exec
    output_tuple = self._entry.func(**reflected_input_ports, **reflected_parameters)
    > reflected_input_ports = {'scored_data': , 'scored_data_to_compare': None}
    > reflected_parameters = {}
    > self =
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 76, in wrapper
    ret = func(*args, **validated_args)
    > func =
    > args = ()
    > validated_args = {'scored_data': , 'scored_data_to_compare': None}
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 57, in run
    output_values = EvaluateModelModule.evaluate_generic(**input_values)
    > input_values = {'scored_data_to_compare': None, 'scored_data': , 'input_values': {...}}
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 201, in evaluate_generic
    output_metrics=output_metrics)
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/initialize_models/evaluator.py", line 65, in evaluate_data
    return self._evaluate(df, meta_data=scored_data.meta_data, output_metrics=output_metrics)
    > df = AC_MODEL ... DistancesToClusterCenter no.144
    | 0 CRJ700 ... 2.043172
    | 1 CRJ ... 1.806005
    | ... (omitted 9 lines) ...
    |
    | [29533 rows x 259 columns]
    > self =
    > output_metrics = True
    > scored_data =
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/initialize_models/evaluator.py", line 645, in _evaluate
    scored_data = self._get_all_scored_data(data_frame=data_frame, meta_data=meta_data)
    > self =
    > data_frame = AC_MODEL ... DistancesToClusterCenter no.144
    | 0 CRJ700 ... 2.043172
    | 1 CRJ ... 1.806005
    | ... (omitted 9 lines) ...
    |
    | [29533 rows x 259 columns]
    > meta_data =
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/initialize_models/evaluator.py", line 606, in _get_all_scored_data
    reason=f"not exist corresponding distance metrics column "
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/common/error.py", line 827, in throw
    raise err
    > err = InvalidDatasetError('Scored dataset contains invalid data, not exist corresponding distance metrics column for cluster 31.0 found in Assignments..',)
InvalidDatasetError: Scored dataset contains invalid data, not exist corresponding distance metrics column for cluster 31.0 found in Assignments..
2021-06-01 16:19:32,070 studio.core INFO | Logging exception information of module execution - End with 1.1238s elapsed.
2021-06-01 16:19:32,071 studio.core INFO | ModuleStatistics.save_to_azureml - Start:
2021-06-01 16:19:32,311 studio.core INFO | ModuleStatistics.save_to_azureml - End with 0.2399s elapsed.
2021-06-01 16:19:32,311 studio.core INFO execute_with_cli - End with 3.5413s elapsed.
[2021-06-01T16:19:32.312707] The experiment failed. Finalizing run...
Cleaning up all outstanding Run operations, waiting 900.0 seconds
3 items cleaning up...
Cleanup took 0.2259521484375 seconds
Enter __exit__ of DatasetContextManager
Unmounting /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3.
Finishing unmounting /tmp/Scored_dataset_70da02af-7ea8-413b-9037-a87d224fedc3.
Uploading output 'Evaluation_results'.
Exit __exit__ of DatasetContextManager
Traceback (most recent call last):
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_invoker.py", line 7, in <module>
    execute(sys.argv)
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_host_executor.py", line 41, in execute
    return execute_with_cli(original_args)
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/core/logger.py", line 209, in wrapper
    ret = func(*args, **kwargs)
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_host_executor.py", line 52, in execute_with_cli
    do_execute_with_env(parser, FolderRuntimeEnv())
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_host_executor.py", line 68, in do_execute_with_env
    module_statistics_folder=parser.module_statistics_folder
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 397, in exec
    self._handle_exception(bex)
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 471, in _handle_exception
    raise exception
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 379, in exec
    output_tuple = self._entry.func(**reflected_input_ports, **reflected_parameters)
  File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 76, in wrapper
    ret = func(*args, **validated_args)
"/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 57, in run output_values = EvaluateModelModule.evaluate_generic(**input_values) File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 201, in evaluate_generic output_metrics=output_metrics) File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/initialize_models/evaluator.py", line 65, in evaluate_data return self._evaluate(df, meta_data=scored_data.meta_data, output_metrics=output_metrics) File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/initialize_models/evaluator.py", line 645, in _evaluate scored_data = self._get_all_scored_data(data_frame=data_frame, meta_data=meta_data) File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/modules/ml/initialize_models/evaluator.py", line 606, in _get_all_scored_data reason=f"not exist corresponding distance metrics column " File "/azureml-envs/azureml_cd11334514d8a3a37a717082af59b9da/lib/python3.6/site-packages/azureml/studio/common/error.py", line 827, in throw raise err azureml.studio.common.error.InvalidDatasetError: Scored dataset contains invalid data, not exist corresponding distance metrics column for cluster 31.0 found in Assignments.. [2021-06-01T16:19:33.459890] Finished context manager injector with Exception. 2021/06/01 16:19:39 Skipping parsing control script error. Reason: Error json file doesn't exist. This most likely means that no errors were written to the file. File path: /mnt/batch/tasks/workitems/ee3ff989-d5e2-4aef-95eb-9f1eca135b64/job-1/01d28a65-8b6f-43e7-8_c608970d-6d10-42f2-8404-73be02ed4d9a/wd/runTaskLetTask_error.json 2021/06/01 16:19:39 Failed to run the wrapper cmd with err: exit status 1 2021/06/01 16:19:39 Attempt 1 of http call to http://10.0.0.5:16384/sendlogstoartifacts/status 2021/06/01 16:19:39 mpirun version string: { Intel(R) MPI Library for Linux* OS, Version 2018 Update 3 Build 20180411 (id: 18329) Copyright 2003-2018 Intel Corporation. } 2021/06/01 16:19:39 MPI publisher: intel ; version: 2018 2021/06/01 16:19:39 Not exporting to RunHistory as the exporter is either stopped or there is no data. Stopped: false OriginalData: 2 FilteredData: 0. 2021/06/01 16:19:39 Process Exiting with Code: 1 2021/06/01 16:19:39 All App Insights Logs was send successfully