saving a model with 20% return

This commit is contained in:
2025-02-06 00:25:18 +00:00
parent f239ff48a9
commit 8a7f5a7a8f
10 changed files with 598 additions and 997 deletions


@@ -292,3 +292,187 @@
2025-02-02 01:34:10,572 - INFO - Final DQN agent trained and saved.
2025-02-02 01:34:10,572 - INFO - Running final inference with the trained DQN model...
2025-02-02 02:00:51,337 - INFO - Final inference completed. Results logged and displayed.
2025-02-02 04:22:30,495 - INFO - ===== Resource Statistics =====
2025-02-02 04:22:30,495 - INFO - Physical CPU Cores: 28
2025-02-02 04:22:30,495 - INFO - Logical CPU Cores: 56
2025-02-02 04:22:30,495 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-02-02 04:22:30,495 - INFO - No GPUs detected.
2025-02-02 04:22:30,496 - INFO - =================================
2025-02-02 04:22:30,496 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-02-02 04:22:30,497 - INFO - Loading data from: BAT.csv
2025-02-02 04:22:31,706 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-02-02 04:22:31,723 - INFO - Data loaded and sorted successfully.
2025-02-02 04:22:31,723 - INFO - Calculating technical indicators...
2025-02-02 04:22:31,762 - INFO - Technical indicators calculated successfully.
2025-02-02 04:22:31,773 - INFO - Starting parallel feature engineering with 54 workers...
2025-02-02 04:22:41,355 - INFO - Parallel feature engineering completed.
2025-02-02 04:22:41,450 - INFO - Scaled training features shape: (14134, 15, 17)
2025-02-02 04:22:41,450 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-02-02 04:22:41,450 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-02-02 04:22:41,450 - INFO - Scaled training target shape: (14134,)
2025-02-02 04:22:41,451 - INFO - Scaled validation target shape: (3028,)
2025-02-02 04:22:41,451 - INFO - Scaled testing target shape: (3030,)
2025-02-02 04:22:41,451 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
2025-02-02 05:54:22,283 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.10939724730272588, 'learning_rate': 0.004011897756521068, 'optimizer': 'Nadam', 'decay': 8.424197400903806e-05}
2025-02-02 05:54:22,647 - INFO - Training best LSTM model with optimized hyperparameters...
2025-02-02 07:04:15,039 - INFO - Evaluating final LSTM model...
2025-02-02 07:04:16,388 - INFO - Test MSE: 0.0755
2025-02-02 07:04:16,388 - INFO - Test RMSE: 0.2747
2025-02-02 07:04:16,389 - INFO - Test MAE: 0.1741
2025-02-02 07:04:16,389 - INFO - Test R2 Score: 0.9937
2025-02-02 07:04:16,389 - INFO - Directional Accuracy: 0.4764
2025-02-02 07:04:16,740 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
2025-02-02 07:04:16,772 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
2025-02-02 07:04:16,773 - INFO - Training DQN agent: Attempt 1 with hyperparameters: {'lr': 0.001, 'gamma': 0.95, 'exploration_fraction': 0.1, 'buffer_size': 10000, 'batch_size': 64}
2025-02-02 08:36:57,319 - INFO - Agent achieved final net worth: $9495.28
2025-02-02 08:36:57,320 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-02 08:36:57,320 - INFO - Training DQN agent: Attempt 2 with hyperparameters: {'lr': 0.0009000000000000001, 'gamma': 0.95, 'exploration_fraction': 0.12000000000000001, 'buffer_size': 10000, 'batch_size': 64}
2025-02-02 10:11:36,927 - INFO - Agent achieved final net worth: $10000.00
2025-02-02 10:11:36,927 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-02 10:11:36,928 - INFO - Training DQN agent: Attempt 3 with hyperparameters: {'lr': 0.0008100000000000001, 'gamma': 0.95, 'exploration_fraction': 0.14, 'buffer_size': 10000, 'batch_size': 64}
2025-02-02 11:46:37,723 - INFO - Agent achieved final net worth: $10310.80
2025-02-02 11:46:37,723 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-02 11:46:37,723 - INFO - Training DQN agent: Attempt 4 with hyperparameters: {'lr': 0.000729, 'gamma': 0.95, 'exploration_fraction': 0.16, 'buffer_size': 10000, 'batch_size': 64}
2025-02-02 13:21:34,780 - INFO - Agent achieved final net worth: $10000.00
2025-02-02 13:21:34,781 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-02 13:21:34,781 - INFO - Training DQN agent: Attempt 5 with hyperparameters: {'lr': 0.0006561000000000001, 'gamma': 0.95, 'exploration_fraction': 0.18, 'buffer_size': 10000, 'batch_size': 64}
2025-02-02 14:56:26,542 - INFO - Agent achieved final net worth: $10000.00
2025-02-02 14:56:26,542 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-02 14:56:26,542 - INFO - Training DQN agent: Attempt 6 with hyperparameters: {'lr': 0.00059049, 'gamma': 0.95, 'exploration_fraction': 0.19999999999999998, 'buffer_size': 10000, 'batch_size': 64}
2025-02-02 16:30:32,804 - INFO - Agent achieved final net worth: $11327.79
2025-02-02 16:30:32,804 - INFO - Agent meets performance criteria!
2025-02-02 16:30:32,812 - INFO - Final DQN agent trained and saved.
2025-02-02 16:30:32,813 - INFO - Running final inference with the trained DQN model...
2025-02-02 16:57:15,181 - INFO - Final inference completed. Results logged and displayed.
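The six attempts above follow a clear pattern: after each run below threshold, the learning rate is multiplied by 0.9 and `exploration_fraction` is raised by 0.02 (0.001 → 0.0009 → 0.00081 …, 0.1 → 0.12 → 0.14 …). A minimal sketch of that retry loop, with the real DQN training call replaced by the net-worth figures from the log and a guessed $11,000 threshold (both are stand-ins, not the actual code):

```python
# Net-worth results of attempts 1-6 as logged above.
LOGGED_NET_WORTH = [9495.28, 10000.00, 10310.80, 10000.00, 10000.00, 11327.79]

def tune_until_profitable(results, threshold=11_000.0):
    """Retry with adjusted hyperparameters until the agent clears the threshold."""
    params = {"lr": 1e-3, "gamma": 0.95, "exploration_fraction": 0.1,
              "buffer_size": 10_000, "batch_size": 64}
    for attempt, net_worth in enumerate(results, start=1):
        if net_worth >= threshold:
            return attempt, params  # agent meets performance criteria
        # Adjustments matching the progression logged across attempts 1-6.
        params["lr"] *= 0.9
        params["exploration_fraction"] = round(params["exploration_fraction"] + 0.02, 10)
    raise RuntimeError("no attempt reached the threshold")

attempt, params = tune_until_profitable(LOGGED_NET_WORTH)
```

With the logged sequence this succeeds on attempt 6 with `lr=0.00059049` and `exploration_fraction=0.2`, matching the hyperparameters printed for that attempt.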
2025-02-02 21:58:56,141 - INFO - ===== Resource Statistics =====
2025-02-02 21:58:56,141 - INFO - Physical CPU Cores: 28
2025-02-02 21:58:56,141 - INFO - Logical CPU Cores: 56
2025-02-02 21:58:56,142 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-02-02 21:58:56,142 - INFO - No GPUs detected.
2025-02-02 21:58:56,142 - INFO - =================================
2025-02-02 21:58:56,142 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-02-02 21:58:56,142 - INFO - Loading data from: BAT.csv
2025-02-02 21:58:57,362 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-02-02 21:58:57,379 - INFO - Data loaded and sorted successfully.
2025-02-02 21:58:57,379 - INFO - Calculating technical indicators...
2025-02-02 21:58:57,420 - INFO - Technical indicators calculated successfully.
2025-02-02 21:58:57,430 - INFO - Starting parallel feature engineering with 54 workers...
2025-02-02 21:59:06,743 - INFO - Parallel feature engineering completed.
2025-02-02 21:59:06,834 - INFO - Scaled training features shape: (14134, 15, 17)
2025-02-02 21:59:06,834 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-02-02 21:59:06,834 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-02-02 21:59:06,834 - INFO - Scaled training target shape: (14134,)
2025-02-02 21:59:06,834 - INFO - Scaled validation target shape: (3028,)
2025-02-02 21:59:06,834 - INFO - Scaled testing target shape: (3030,)
2025-02-02 21:59:06,835 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
2025-02-02 23:57:25,471 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.14233127439900994, 'learning_rate': 0.007677933084361605, 'optimizer': 'Nadam', 'decay': 3.588780523236025e-05}
2025-02-02 23:57:25,814 - INFO - Training best LSTM model with optimized hyperparameters...
2025-02-03 00:16:38,876 - INFO - Evaluating final LSTM model...
2025-02-03 00:16:40,253 - INFO - Test MSE: 0.0719
2025-02-03 00:16:40,253 - INFO - Test RMSE: 0.2681
2025-02-03 00:16:40,253 - INFO - Test MAE: 0.1711
2025-02-03 00:16:40,253 - INFO - Test R2 Score: 0.9940
2025-02-03 00:16:40,253 - INFO - Directional Accuracy: 0.4807
2025-02-03 00:16:40,618 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
2025-02-03 00:16:40,651 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
2025-02-03 00:16:40,651 - INFO - Training DQN agent: Attempt 1 with hyperparameters: {'lr': 0.001, 'gamma': 0.95, 'exploration_fraction': 0.1, 'buffer_size': 10000, 'batch_size': 64}
2025-02-03 01:47:36,204 - INFO - Agent achieved final net worth: $10724.84
2025-02-03 01:47:36,205 - INFO - Agent meets performance criteria!
2025-02-03 01:47:36,213 - INFO - Final DQN agent trained and saved.
2025-02-03 01:47:36,213 - INFO - Running final inference with the trained DQN model...
2025-02-03 02:13:55,188 - INFO - Final inference completed. Results logged and displayed.
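Every run above emits the legacy-HDF5 warning when writing `best_lstm_model.h5`; switching to the native Keras format, as the warning suggests, removes it. A minimal sketch (the tiny layer sizes are placeholders, not the tuned model; only the input window shape `(15, 17)` comes from the log):

```python
from tensorflow import keras

# Hypothetical stand-in model; input shape matches the logged (15, 17) windows.
model = keras.Sequential([
    keras.layers.Input(shape=(15, 17)),
    keras.layers.LSTM(8),
    keras.layers.Dense(1),
])
model.compile(optimizer="nadam", loss="mse")

# Native Keras format -- no legacy-HDF5 warning, unlike model.save("....h5").
model.save("best_lstm_model.keras")
restored = keras.models.load_model("best_lstm_model.keras")
```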
2025-02-04 04:30:46,502 - INFO - ===== Resource Statistics =====
2025-02-04 04:30:46,502 - INFO - Physical CPU Cores: 28
2025-02-04 04:30:46,503 - INFO - Logical CPU Cores: 56
2025-02-04 04:30:46,503 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-02-04 04:30:46,503 - INFO - No GPUs detected.
2025-02-04 04:30:46,503 - INFO - =================================
2025-02-04 04:30:46,503 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-02-04 04:30:46,504 - INFO - Loading data from: BAT.csv
2025-02-04 04:30:48,161 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-02-04 04:30:48,178 - INFO - Data loaded and sorted successfully.
2025-02-04 04:30:48,179 - INFO - Calculating technical indicators...
2025-02-04 04:30:48,218 - INFO - Technical indicators calculated successfully.
2025-02-04 04:30:48,228 - INFO - Starting parallel feature engineering with 54 workers...
2025-02-04 04:30:57,129 - INFO - Parallel feature engineering completed.
2025-02-04 04:30:57,219 - INFO - Scaled training features shape: (14134, 15, 17)
2025-02-04 04:30:57,219 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-02-04 04:30:57,220 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-02-04 04:30:57,220 - INFO - Scaled training target shape: (14134,)
2025-02-04 04:30:57,220 - INFO - Scaled validation target shape: (3028,)
2025-02-04 04:30:57,220 - INFO - Scaled testing target shape: (3030,)
2025-02-04 04:30:57,220 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
2025-02-04 05:45:32,152 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.10091819145228072, 'learning_rate': 0.005676435080613348, 'optimizer': 'Nadam', 'decay': 4.6972868473041675e-05}
2025-02-04 05:45:32,531 - INFO - Training best LSTM model with optimized hyperparameters...
2025-02-04 06:16:04,560 - INFO - Evaluating final LSTM model...
2025-02-04 06:16:06,066 - INFO - Test MSE: 0.0722
2025-02-04 06:16:06,066 - INFO - Test RMSE: 0.2687
2025-02-04 06:16:06,066 - INFO - Test MAE: 0.1702
2025-02-04 06:16:06,066 - INFO - Test R2 Score: 0.9940
2025-02-04 06:16:06,066 - INFO - Directional Accuracy: 0.4790
2025-02-04 06:16:06,418 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
2025-02-04 06:16:06,449 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
2025-02-04 06:16:06,450 - INFO - Training DQN agent: Attempt 1 with hyperparameters: {'lr': 0.001, 'gamma': 0.95, 'exploration_fraction': 0.1, 'buffer_size': 10000, 'batch_size': 64}
2025-02-04 07:48:46,777 - INFO - Agent achieved final net worth: $14604.73
2025-02-04 07:48:46,777 - INFO - Agent meets performance criteria!
2025-02-04 07:48:46,785 - INFO - Final DQN agent trained and saved.
2025-02-04 07:48:46,786 - INFO - Running final inference with the trained DQN model...
2025-02-04 08:15:18,087 - INFO - Final inference completed. Results logged and displayed.
2025-02-04 18:22:47,361 - INFO - ===== Resource Statistics =====
2025-02-04 18:22:47,361 - INFO - Physical CPU Cores: 28
2025-02-04 18:22:47,361 - INFO - Logical CPU Cores: 56
2025-02-04 18:22:47,361 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 100.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-02-04 18:22:47,361 - INFO - No GPUs detected.
2025-02-04 18:22:47,362 - INFO - =================================
2025-02-04 18:22:47,362 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-02-04 18:22:47,362 - INFO - Loading data from: BAT.csv
2025-02-04 18:22:48,287 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-02-04 18:22:48,299 - INFO - Data loaded and sorted successfully.
2025-02-04 18:22:48,299 - INFO - Calculating technical indicators...
2025-02-04 18:22:48,329 - INFO - Technical indicators calculated successfully.
2025-02-04 18:22:48,337 - INFO - Starting parallel feature engineering with 54 workers...
2025-02-04 18:22:56,609 - INFO - Parallel feature engineering completed.
2025-02-04 18:22:56,700 - INFO - Scaled training features shape: (14134, 15, 17)
2025-02-04 18:22:56,701 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-02-04 18:22:56,701 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-02-04 18:22:56,701 - INFO - Scaled training target shape: (14134,)
2025-02-04 18:22:56,701 - INFO - Scaled validation target shape: (3028,)
2025-02-04 18:22:56,701 - INFO - Scaled testing target shape: (3030,)
2025-02-04 18:22:56,701 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
2025-02-04 19:14:27,217 - INFO - ===== Resource Statistics =====
2025-02-04 19:14:27,217 - INFO - Physical CPU Cores: 28
2025-02-04 19:14:27,218 - INFO - Logical CPU Cores: 56
2025-02-04 19:14:27,218 - INFO - CPU Usage per Core: [1.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-02-04 19:14:27,218 - INFO - No GPUs detected.
2025-02-04 19:14:27,218 - INFO - =================================
2025-02-04 19:14:27,218 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-02-04 19:14:27,218 - INFO - Loading data from: BAT.csv
2025-02-04 19:14:28,324 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-02-04 19:14:28,341 - INFO - Data loaded and sorted successfully.
2025-02-04 19:14:28,341 - INFO - Calculating technical indicators...
2025-02-04 19:14:28,379 - INFO - Technical indicators calculated successfully.
2025-02-04 19:14:28,389 - INFO - Starting parallel feature engineering with 54 workers...
2025-02-04 19:14:37,203 - INFO - Parallel feature engineering completed.
2025-02-04 19:14:37,302 - INFO - Scaled training features shape: (14134, 15, 17)
2025-02-04 19:14:37,303 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-02-04 19:14:37,303 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-02-04 19:14:37,303 - INFO - Scaled training target shape: (14134,)
2025-02-04 19:14:37,303 - INFO - Scaled validation target shape: (3028,)
2025-02-04 19:14:37,303 - INFO - Scaled testing target shape: (3030,)
2025-02-04 19:14:37,303 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
2025-02-04 20:42:04,298 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.11645207977952371, 'learning_rate': 0.00192817732348378, 'optimizer': 'Adam', 'decay': 8.867078075911043e-05}
2025-02-04 20:42:04,733 - INFO - Training best LSTM model with optimized hyperparameters...
2025-02-04 21:32:58,961 - INFO - Evaluating final LSTM model...
2025-02-04 21:33:00,167 - INFO - Test MSE: 0.0819
2025-02-04 21:33:00,167 - INFO - Test RMSE: 0.2862
2025-02-04 21:33:00,167 - INFO - Test MAE: 0.1825
2025-02-04 21:33:00,167 - INFO - Test R2 Score: 0.9932
2025-02-04 21:33:00,167 - INFO - Directional Accuracy: 0.4658
2025-02-04 21:33:00,492 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
2025-02-04 21:33:00,523 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
2025-02-04 21:33:00,523 - INFO - Training DQN agent: Attempt 1 with hyperparameters: {'lr': 0.001, 'gamma': 0.95, 'exploration_fraction': 0.1, 'buffer_size': 10000, 'batch_size': 64}
2025-02-04 23:04:50,885 - INFO - Agent achieved final net worth: $12002.15
2025-02-04 23:04:50,886 - INFO - Agent meets performance criteria!
2025-02-04 23:04:50,893 - INFO - Final DQN agent trained and saved.
2025-02-04 23:04:50,893 - INFO - Running final inference with the trained DQN model...
2025-02-04 23:31:28,534 - INFO - Final inference completed. Results logged and displayed.
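The Directional Accuracy figures above hover around 0.47–0.48, i.e. roughly coin-flip, despite R² above 0.99. The metric is presumably the fraction of steps where the predicted price change has the same sign as the actual change; a hypothetical reconstruction:

```python
def directional_accuracy(y_true, y_pred):
    """Fraction of consecutive steps where predicted and actual moves agree in sign."""
    hits = 0
    total = len(y_true) - 1
    for i in range(1, len(y_true)):
        actual_up = y_true[i] - y_true[i - 1] > 0
        predicted_up = y_pred[i] - y_pred[i - 1] > 0
        hits += actual_up == predicted_up
    return hits / total

# Toy series: the prediction tracks three of the four moves.
actual = [10.0, 10.5, 10.2, 10.8, 10.6]
predicted = [10.1, 10.4, 10.3, 10.7, 10.9]
```

A high R² with near-0.5 directional accuracy is a common symptom of a model that closely tracks the previous price level while saying little about the next move's direction.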


@@ -1,324 +0,0 @@
(venv) kleinpanic@kleinpanic:~/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src$ py LSTMDQN.py BAT.csv
2025-01-31 00:33:45.402617: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1738283625.423731 635164 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1738283625.430264 635164 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-31 00:33:45.451539: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-31 00:33:51,246 - INFO - ===== Resource Statistics =====
2025-01-31 00:33:51,246 - INFO - Physical CPU Cores: 28
2025-01-31 00:33:51,246 - INFO - Logical CPU Cores: 56
2025-01-31 00:33:51,246 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-01-31 00:33:51,246 - INFO - No GPUs detected.
2025-01-31 00:33:51,247 - INFO - =================================
2025-01-31 00:33:51,247 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-01-31 00:33:51,247 - INFO - Loading data from: BAT.csv
2025-01-31 00:33:52,623 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-01-31 00:33:52,640 - INFO - Data loaded and sorted successfully.
2025-01-31 00:33:52,640 - INFO - Calculating technical indicators...
2025-01-31 00:33:52,680 - INFO - Technical indicators calculated successfully.
2025-01-31 00:33:52,690 - INFO - Starting parallel feature engineering with 54 workers...
2025-01-31 00:34:02,440 - INFO - Parallel feature engineering completed.
2025-01-31 00:34:02,527 - INFO - Scaled training features shape: (14134, 15, 17)
2025-01-31 00:34:02,527 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-01-31 00:34:02,527 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-01-31 00:34:02,527 - INFO - Scaled training target shape: (14134,)
2025-01-31 00:34:02,527 - INFO - Scaled validation target shape: (3028,)
2025-01-31 00:34:02,527 - INFO - Scaled testing target shape: (3030,)
2025-01-31 00:34:02,527 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
[I 2025-01-31 00:34:02,528] A new study created in memory with name: no-name-30abc2af-0d5d-4afc-9e51-0e6ab5344277
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py:487: FutureWarning: suggest_loguniform has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use suggest_float(..., log=True) instead.
learning_rate = trial.suggest_loguniform('learning_rate', 1e-5, 1e-2)
2025-01-31 00:34:02.545693: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/keras/src/layers/rnn/bidirectional.py:107: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
super().__init__(**kwargs)
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/keras/src/optimizers/base_optimizer.py:86: UserWarning: Argument `decay` is no longer supported and will be ignored.
warnings.warn(
[I 2025-01-31 01:47:50,865] Trial 25 finished with value: 0.0044469027779996395 and parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.41552068050266755, 'learning_rate': 0.0020464230384217887, 'optimizer': 'Nadam', 'decay': 4.152362979808315e-05}. Best is trial 25 with value: 0.0044469027779996395.
[I 2025-01-31 01:51:53,458] Trial 1 finished with value: 0.004896007943898439 and parameters: {'num_lstm_layers': 2, 'lstm_units': 32, 'dropout_rate': 0.26347160232211786, 'learning_rate': 0.005618445438864423, 'optimizer': 'Adam', 'decay': 9.002232128681866e-05}. Best is trial 25 with value: 0.0044469027779996395.
[I 2025-01-31 01:52:07,955] Trial 13 finished with value: 0.004379551392048597 and parameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.1879612031755749, 'learning_rate': 0.00045486151574373985, 'optimizer': 'Adam', 'decay': 7.841076864183645e-05}. Best is trial 13 with value: 0.004379551392048597.
[I 2025-01-31 01:56:35,039] Trial 2 finished with value: 0.0035048779100179672 and parameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.18042015532719258, 'learning_rate': 0.008263668593877975, 'optimizer': 'Nadam', 'decay': 7.065697336348234e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 01:59:49,276] Trial 8 finished with value: 0.004185597877949476 and parameters: {'num_lstm_layers': 2, 'lstm_units': 96, 'dropout_rate': 0.1225129824590411, 'learning_rate': 0.0032993925521966573, 'optimizer': 'Adam', 'decay': 7.453500347854662e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 01:59:49,666] Trial 12 pruned. Trial was pruned at epoch 61.
[I 2025-01-31 01:59:57,670] Trial 7 pruned. Trial was pruned at epoch 60.
[I 2025-01-31 02:00:00,145] Trial 6 pruned. Trial was pruned at epoch 48.
[I 2025-01-31 02:00:02,845] Trial 0 pruned. Trial was pruned at epoch 53.
[I 2025-01-31 02:00:03,464] Trial 14 pruned. Trial was pruned at epoch 52.
[I 2025-01-31 02:00:08,618] Trial 20 pruned. Trial was pruned at epoch 41.
[I 2025-01-31 02:00:09,918] Trial 18 pruned. Trial was pruned at epoch 57.
[I 2025-01-31 02:00:18,111] Trial 11 pruned. Trial was pruned at epoch 48.
[I 2025-01-31 02:00:18,175] Trial 24 pruned. Trial was pruned at epoch 70.
[I 2025-01-31 02:00:24,035] Trial 19 pruned. Trial was pruned at epoch 71.
[I 2025-01-31 02:00:25,349] Trial 15 pruned. Trial was pruned at epoch 61.
[I 2025-01-31 02:00:28,094] Trial 21 pruned. Trial was pruned at epoch 53.
[I 2025-01-31 02:00:30,582] Trial 27 pruned. Trial was pruned at epoch 70.
[I 2025-01-31 02:00:34,584] Trial 16 pruned. Trial was pruned at epoch 54.
[I 2025-01-31 02:00:36,311] Trial 4 pruned. Trial was pruned at epoch 41.
[I 2025-01-31 02:00:36,943] Trial 10 pruned. Trial was pruned at epoch 58.
[I 2025-01-31 02:00:41,876] Trial 26 pruned. Trial was pruned at epoch 54.
[I 2025-01-31 02:00:42,253] Trial 5 pruned. Trial was pruned at epoch 54.
[I 2025-01-31 02:00:42,354] Trial 22 pruned. Trial was pruned at epoch 54.
[I 2025-01-31 02:01:21,394] Trial 17 pruned. Trial was pruned at epoch 63.
[I 2025-01-31 02:02:27,396] Trial 28 finished with value: 0.005718659609556198 and parameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.256096112829434, 'learning_rate': 1.7863513392726302e-05, 'optimizer': 'Nadam', 'decay': 4.8981982638899195e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 02:04:43,158] Trial 9 finished with value: 0.004240941721946001 and parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.13786769624978978, 'learning_rate': 0.00038368722697235065, 'optimizer': 'Nadam', 'decay': 5.219728457137628e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 02:04:47,356] Trial 29 pruned. Trial was pruned at epoch 89.
[I 2025-01-31 02:04:58,802] Trial 23 finished with value: 0.004438518546521664 and parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.10170042323024542, 'learning_rate': 2.1295423006302236e-05, 'optimizer': 'Nadam', 'decay': 1.9256711241510017e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 02:07:22,581] Trial 3 finished with value: 0.004468627739697695 and parameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.2941741845859971, 'learning_rate': 0.00015534552759452507, 'optimizer': 'Adam', 'decay': 3.964121547616277e-05}. Best is trial 2 with value: 0.0035048779100179672.
2025-01-31 02:07:22,583 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.18042015532719258, 'learning_rate': 0.008263668593877975, 'optimizer': 'Nadam', 'decay': 7.065697336348234e-05}
2025-01-31 02:07:22,887 - INFO - Training best LSTM model with optimized hyperparameters...
Epoch 1/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 22s 21ms/step - loss: 0.0176 - mae: 0.0468 - val_loss: 3.7775e-04 - val_mae: 0.0096 - learning_rate: 0.0083
Epoch 2/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 5.1259e-04 - mae: 0.0169 - val_loss: 5.0930e-04 - val_mae: 0.0269 - learning_rate: 0.0083
Epoch 3/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.5887e-04 - mae: 0.0161 - val_loss: 1.2987e-04 - val_mae: 0.0054 - learning_rate: 0.0083
Epoch 4/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.4157e-04 - mae: 0.0157 - val_loss: 1.4855e-04 - val_mae: 0.0068 - learning_rate: 0.0083
Epoch 5/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.3388e-04 - mae: 0.0151 - val_loss: 1.2859e-04 - val_mae: 0.0064 - learning_rate: 0.0083
Epoch 6/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.3468e-04 - mae: 0.0153 - val_loss: 1.3908e-04 - val_mae: 0.0086 - learning_rate: 0.0083
Epoch 7/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.0007e-04 - mae: 0.0139 - val_loss: 1.4985e-04 - val_mae: 0.0053 - learning_rate: 0.0083
Epoch 8/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 2.8960e-04 - mae: 0.0145 - val_loss: 1.0344e-04 - val_mae: 0.0059 - learning_rate: 0.0083
Epoch 9/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.8612e-04 - mae: 0.0107 - val_loss: 1.2100e-04 - val_mae: 0.0089 - learning_rate: 0.0041
Epoch 10/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.8470e-04 - mae: 0.0113 - val_loss: 1.4217e-04 - val_mae: 0.0115 - learning_rate: 0.0041
Epoch 11/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.8239e-04 - mae: 0.0113 - val_loss: 7.3773e-05 - val_mae: 0.0052 - learning_rate: 0.0041
Epoch 12/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.8133e-04 - mae: 0.0113 - val_loss: 8.1284e-05 - val_mae: 0.0063 - learning_rate: 0.0041
Epoch 13/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.6633e-04 - mae: 0.0107 - val_loss: 1.0878e-04 - val_mae: 0.0099 - learning_rate: 0.0041
Epoch 14/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.2703e-04 - mae: 0.0095 - val_loss: 8.1036e-05 - val_mae: 0.0085 - learning_rate: 0.0021
Epoch 15/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.2520e-04 - mae: 0.0093 - val_loss: 6.9320e-05 - val_mae: 0.0073 - learning_rate: 0.0021
Epoch 16/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.2067e-04 - mae: 0.0092 - val_loss: 5.2056e-05 - val_mae: 0.0046 - learning_rate: 0.0021
Epoch 17/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.1662e-04 - mae: 0.0092 - val_loss: 8.4469e-05 - val_mae: 0.0092 - learning_rate: 0.0021
Epoch 18/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.1402e-04 - mae: 0.0092 - val_loss: 5.3823e-05 - val_mae: 0.0040 - learning_rate: 0.0021
Epoch 19/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 9.8560e-05 - mae: 0.0083 - val_loss: 4.5592e-05 - val_mae: 0.0051 - learning_rate: 0.0010
Epoch 20/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 9.8293e-05 - mae: 0.0082 - val_loss: 4.5364e-05 - val_mae: 0.0049 - learning_rate: 0.0010
Epoch 21/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 9.5821e-05 - mae: 0.0083 - val_loss: 4.0955e-05 - val_mae: 0.0042 - learning_rate: 0.0010
Epoch 22/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.5071e-05 - mae: 0.0079 - val_loss: 3.6926e-05 - val_mae: 0.0038 - learning_rate: 0.0010
Epoch 23/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 9.3654e-05 - mae: 0.0081 - val_loss: 4.7498e-05 - val_mae: 0.0061 - learning_rate: 0.0010
Epoch 24/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.7295e-05 - mae: 0.0076 - val_loss: 3.5652e-05 - val_mae: 0.0039 - learning_rate: 5.1648e-04
Epoch 25/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.1205e-05 - mae: 0.0077 - val_loss: 3.5340e-05 - val_mae: 0.0040 - learning_rate: 5.1648e-04
Epoch 26/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.9519e-05 - mae: 0.0076 - val_loss: 3.3783e-05 - val_mae: 0.0038 - learning_rate: 5.1648e-04
Epoch 27/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.3218e-05 - mae: 0.0078 - val_loss: 3.3893e-05 - val_mae: 0.0039 - learning_rate: 5.1648e-04
Epoch 28/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.2856e-05 - mae: 0.0078 - val_loss: 3.7778e-05 - val_mae: 0.0045 - learning_rate: 5.1648e-04
Epoch 29/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.1744e-05 - mae: 0.0076 - val_loss: 3.1605e-05 - val_mae: 0.0038 - learning_rate: 2.5824e-04
Epoch 30/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.3165e-05 - mae: 0.0072 - val_loss: 3.1850e-05 - val_mae: 0.0038 - learning_rate: 2.5824e-04
Epoch 31/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.4117e-05 - mae: 0.0073 - val_loss: 3.1598e-05 - val_mae: 0.0038 - learning_rate: 2.5824e-04
Epoch 32/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.8020e-05 - mae: 0.0076 - val_loss: 3.8364e-05 - val_mae: 0.0048 - learning_rate: 2.5824e-04
Epoch 33/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.2452e-05 - mae: 0.0073 - val_loss: 4.1319e-05 - val_mae: 0.0053 - learning_rate: 2.5824e-04
Epoch 34/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.9039e-05 - mae: 0.0071 - val_loss: 3.2345e-05 - val_mae: 0.0041 - learning_rate: 1.2912e-04
Epoch 35/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0146e-05 - mae: 0.0072 - val_loss: 3.3009e-05 - val_mae: 0.0042 - learning_rate: 1.2912e-04
Epoch 36/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0245e-05 - mae: 0.0071 - val_loss: 3.1106e-05 - val_mae: 0.0041 - learning_rate: 1.2912e-04
Epoch 37/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0257e-05 - mae: 0.0072 - val_loss: 3.1513e-05 - val_mae: 0.0040 - learning_rate: 1.2912e-04
Epoch 38/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.8350e-05 - mae: 0.0070 - val_loss: 3.0209e-05 - val_mae: 0.0039 - learning_rate: 1.2912e-04
Epoch 39/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.3547e-05 - mae: 0.0072 - val_loss: 3.0854e-05 - val_mae: 0.0040 - learning_rate: 6.4560e-05
Epoch 40/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.2400e-05 - mae: 0.0071 - val_loss: 2.9529e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 41/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4073e-05 - mae: 0.0069 - val_loss: 2.9258e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 42/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.5838e-05 - mae: 0.0070 - val_loss: 2.9054e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 43/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0313e-05 - mae: 0.0070 - val_loss: 2.9163e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 44/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0101e-05 - mae: 0.0071 - val_loss: 2.8841e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 45/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.8816e-05 - mae: 0.0071 - val_loss: 2.8675e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 46/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4251e-05 - mae: 0.0069 - val_loss: 2.8767e-05 - val_mae: 0.0037 - learning_rate: 3.2280e-05
Epoch 47/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.3158e-05 - mae: 0.0069 - val_loss: 2.9648e-05 - val_mae: 0.0038 - learning_rate: 3.2280e-05
Epoch 48/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4270e-05 - mae: 0.0069 - val_loss: 2.8902e-05 - val_mae: 0.0037 - learning_rate: 3.2280e-05
Epoch 49/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2356e-05 - mae: 0.0068 - val_loss: 2.9181e-05 - val_mae: 0.0038 - learning_rate: 3.2280e-05
Epoch 50/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.6547e-05 - mae: 0.0069 - val_loss: 2.8695e-05 - val_mae: 0.0037 - learning_rate: 3.2280e-05
Epoch 51/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.0234e-05 - mae: 0.0067 - val_loss: 2.9130e-05 - val_mae: 0.0038 - learning_rate: 1.6140e-05
Epoch 52/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.3895e-05 - mae: 0.0069 - val_loss: 2.8748e-05 - val_mae: 0.0037 - learning_rate: 1.6140e-05
Epoch 53/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2657e-05 - mae: 0.0068 - val_loss: 2.9734e-05 - val_mae: 0.0039 - learning_rate: 1.6140e-05
Epoch 54/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.9419e-05 - mae: 0.0068 - val_loss: 2.8744e-05 - val_mae: 0.0037 - learning_rate: 1.6140e-05
Epoch 55/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.0539e-05 - mae: 0.0068 - val_loss: 2.8263e-05 - val_mae: 0.0037 - learning_rate: 1.6140e-05
Epoch 56/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0298e-05 - mae: 0.0068 - val_loss: 2.9675e-05 - val_mae: 0.0039 - learning_rate: 8.0700e-06
Epoch 57/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4799e-05 - mae: 0.0067 - val_loss: 2.9589e-05 - val_mae: 0.0039 - learning_rate: 8.0700e-06
Epoch 58/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.7056e-05 - mae: 0.0069 - val_loss: 2.8803e-05 - val_mae: 0.0037 - learning_rate: 8.0700e-06
Epoch 59/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.3120e-05 - mae: 0.0068 - val_loss: 2.9058e-05 - val_mae: 0.0038 - learning_rate: 8.0700e-06
Epoch 60/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.5512e-05 - mae: 0.0069 - val_loss: 2.9056e-05 - val_mae: 0.0038 - learning_rate: 8.0700e-06
Epoch 61/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.6107e-05 - mae: 0.0068 - val_loss: 2.9655e-05 - val_mae: 0.0039 - learning_rate: 4.0350e-06
Epoch 62/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.1988e-05 - mae: 0.0068 - val_loss: 2.9478e-05 - val_mae: 0.0039 - learning_rate: 4.0350e-06
Epoch 63/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4365e-05 - mae: 0.0068 - val_loss: 2.9044e-05 - val_mae: 0.0038 - learning_rate: 4.0350e-06
Epoch 64/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.8052e-05 - mae: 0.0068 - val_loss: 2.9246e-05 - val_mae: 0.0038 - learning_rate: 4.0350e-06
Epoch 65/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2301e-05 - mae: 0.0068 - val_loss: 2.8845e-05 - val_mae: 0.0038 - learning_rate: 4.0350e-06
Epoch 66/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.3675e-05 - mae: 0.0069 - val_loss: 2.9359e-05 - val_mae: 0.0038 - learning_rate: 2.0175e-06
Epoch 67/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2341e-05 - mae: 0.0068 - val_loss: 2.8623e-05 - val_mae: 0.0037 - learning_rate: 2.0175e-06
Epoch 68/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.3251e-05 - mae: 0.0069 - val_loss: 2.9224e-05 - val_mae: 0.0038 - learning_rate: 2.0175e-06
Epoch 69/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.5100e-05 - mae: 0.0069 - val_loss: 2.8827e-05 - val_mae: 0.0038 - learning_rate: 2.0175e-06
Epoch 70/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.6786e-05 - mae: 0.0068 - val_loss: 2.8537e-05 - val_mae: 0.0037 - learning_rate: 2.0175e-06
Epoch 71/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.9285e-05 - mae: 0.0069 - val_loss: 2.8691e-05 - val_mae: 0.0037 - learning_rate: 1.0087e-06
Epoch 72/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2495e-05 - mae: 0.0068 - val_loss: 2.8928e-05 - val_mae: 0.0038 - learning_rate: 1.0087e-06
Epoch 73/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.3063e-05 - mae: 0.0068 - val_loss: 2.8745e-05 - val_mae: 0.0038 - learning_rate: 1.0087e-06
Epoch 74/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0096e-05 - mae: 0.0069 - val_loss: 2.8655e-05 - val_mae: 0.0037 - learning_rate: 1.0087e-06
Epoch 75/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.6190e-05 - mae: 0.0069 - val_loss: 2.9064e-05 - val_mae: 0.0038 - learning_rate: 1.0087e-06
2025-01-31 02:29:45,755 - INFO - Evaluating final LSTM model...
95/95 ━━━━━━━━━━━━━━━━━━━━ 2s 13ms/step
2025-01-31 02:29:47,478 - INFO - Test MSE: 0.0765
2025-01-31 02:29:47,479 - INFO - Test RMSE: 0.2765
2025-01-31 02:29:47,479 - INFO - Test MAE: 0.1770
2025-01-31 02:29:47,479 - INFO - Test R2 Score: 0.9937
2025-01-31 02:29:47,479 - INFO - Directional Accuracy: 0.4823
First 40 Actual vs. Predicted Prices:
+-------+--------------+-------------------+
| Index | Actual Price | Predicted Price |
+-------+--------------+-------------------+
| 0 | 65.26 | 64.37000274658203 |
| 1 | 65.12 | 64.76000213623047 |
| 2 | 65.32 | 64.98999786376953 |
| 3 | 65.29 | 65.0999984741211 |
| 4 | 65.26 | 65.04000091552734 |
| 5 | 65.29 | 65.16000366210938 |
| 6 | 65.26 | 65.19999694824219 |
| 7 | 65.48 | 65.06999969482422 |
| 8 | 65.29 | 65.08999633789062 |
| 9 | 65.25 | 65.04000091552734 |
| 10 | 65.35 | 65.0999984741211 |
| 11 | 65.14 | 65.05000305175781 |
| 12 | 65.2 | 65.0199966430664 |
| 13 | 65.21 | 65.01000213623047 |
| 14 | 65.1 | 64.94000244140625 |
| 15 | 65.45 | 64.87000274658203 |
| 16 | 65.26 | 65.13999938964844 |
| 17 | 65.24 | 65.08999633789062 |
| 18 | 65.43 | 65.12000274658203 |
| 19 | 65.22 | 65.18000030517578 |
| 20 | 65.34 | 65.16999816894531 |
| 21 | 65.13 | 65.20999908447266 |
| 22 | 65.05 | 65.01000213623047 |
| 23 | 64.94 | 65.05000305175781 |
| 24 | 64.94 | 64.91000366210938 |
| 25 | 64.85 | 64.83000183105469 |
| 26 | 64.98 | 64.83000183105469 |
| 27 | 64.93 | 64.80999755859375 |
| 28 | 64.86 | 64.80999755859375 |
| 29 | 64.71 | 64.81999969482422 |
| 30 | 64.89 | 64.56999969482422 |
| 31 | 64.89 | 64.7699966430664 |
| 32 | 64.97 | 64.83000183105469 |
| 33 | 65.03 | 64.79000091552734 |
| 34 | 64.99 | 64.95999908447266 |
| 35 | 64.95 | 64.8499984741211 |
| 36 | 64.89 | 64.88999938964844 |
| 37 | 64.87 | 64.8499984741211 |
| 38 | 64.72 | 64.87000274658203 |
| 39 | 64.63 | 64.70999908447266 |
+-------+--------------+-------------------+
2025-01-31 02:30:07,570 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
2025-01-31 02:30:07,639 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
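The HDF5 warning above is straightforward to act on: Keras now prefers the native `.keras` format over legacy `.h5`. A minimal sketch of the change (the `save_model_native` helper and its extension-rewriting behavior are assumptions for illustration, not code from this repo):

```python
# Sketch: prefer the native Keras format over legacy HDF5.
# `model` is assumed to be a trained tf.keras model (not constructed here).
def save_model_native(model, path="best_lstm_model.keras"):
    # ".keras" selects the native format; ".h5" triggers the legacy warning
    if not path.endswith(".keras"):
        path = path.rsplit(".", 1)[0] + ".keras"
    model.save(path)
    return path
```

Calling `save_model_native(model, "best_lstm_model.h5")` would redirect the save to `best_lstm_model.keras` and silence the warning.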
2025-01-31 02:30:07,640 - INFO - Starting DQN hyperparameter tuning with Optuna using 54 parallel trials...
[I 2025-01-31 02:30:07,640] A new study created in memory with name: no-name-7f6e13ed-f0e1-4c91-bfa6-ff8fbfdd7d46
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py:753: FutureWarning: suggest_loguniform has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use suggest_float(..., log=True) instead.
lr = trial.suggest_loguniform("lr", 1e-5, 1e-2)
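The deprecated `suggest_loguniform` call maps one-for-one onto `trial.suggest_float("lr", 1e-5, 1e-2, log=True)`. As a stdlib-only sketch of what `log=True` means — sample uniformly in log space, then exponentiate (the `log_uniform` helper is hypothetical, not Optuna code):

```python
import math
import random

# Drop-in migration for the deprecated Optuna call:
#   lr = trial.suggest_loguniform("lr", 1e-5, 1e-2)        # Optuna < 6.0
#   lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)   # Optuna >= 3.0
#
# What log=True does under the hood, roughly:
def log_uniform(low, high, rng=random):
    # uniform draw in log space, mapped back with exp
    return math.exp(rng.uniform(math.log(low), math.log(high)))

sample = log_uniform(1e-5, 1e-2)
```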
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
warnings.warn(
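The Stable-Baselines3 warning above asks for Gymnasium-style environments. The substantive API differences are that `reset` returns `(obs, info)` and `step` returns a 5-tuple with `done` split into `terminated` and `truncated`. A minimal pure-Python sketch of that interface (`TradingEnvSketch` is hypothetical and omits the real trading logic; a real port would subclass `gymnasium.Env`):

```python
# Minimal sketch of the Gymnasium-style API the warning recommends.
class TradingEnvSketch:
    def __init__(self, n_steps=5):
        self.n_steps = n_steps
        self.t = 0

    def reset(self, *, seed=None, options=None):
        # Gymnasium reset returns (observation, info), not just observation.
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        # Gymnasium step returns a 5-tuple: the old `done` flag is split
        # into `terminated` (natural end) and `truncated` (e.g. time limit).
        self.t += 1
        terminated = self.t >= self.n_steps
        truncated = False
        return self._obs(), 0.0, terminated, truncated, {}

    def _obs(self):
        return [float(self.t)]
```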
Exception ignored in: <function Variable.__del__ at 0x79927f66a8e0>
Traceback (most recent call last):
File "/home/kleinpanic/.pyenv/versions/3.11.4/lib/python3.11/tkinter/__init__.py", line 410, in __del__
if self._tk.getboolean(self._tk.call("info", "exists", self._name)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: main thread is not in main loop
Exception ignored in: <function Image.__del__ at 0x79920e1fc9a0>
Traceback (most recent call last):
File "/home/kleinpanic/.pyenv/versions/3.11.4/lib/python3.11/tkinter/__init__.py", line 4082, in __del__
self.tk.call('image', 'delete', self.name)
RuntimeError: main thread is not in main loop
Tcl_AsyncDelete: async handler deleted by the wrong thread
zsh: IOT instruction (core dumped) python3 LSTMDQN.py BAT.csv
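The `Tcl_AsyncDelete` abort and the repeated `main thread is not in main loop` errors above are the classic symptom of a Tk-based matplotlib backend being touched from worker threads. Assuming the script plots via matplotlib (the plotting code is not shown in this log), one common fix is forcing a non-interactive backend before pyplot is ever imported:

```python
# Assumed fix: select the non-interactive Agg backend before any pyplot
# import, so parallel workers never create Tk objects. This avoids
# "RuntimeError: main thread is not in main loop" and the Tcl_AsyncDelete
# core dump when figures are garbage-collected off the main thread.
import matplotlib
matplotlib.use("Agg")  # must happen before `import matplotlib.pyplot`
import matplotlib.pyplot as plt
```

Figures are then written with `plt.savefig(...)` instead of shown interactively.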

Binary file not shown.

@@ -1,24 +0,0 @@
2025-02-01 15:31:11,610 - INFO - Agent achieved final net worth: $10000.00
2025-02-01 15:31:11,611 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-01 15:31:11,611 - INFO - Training DQN agent: Attempt 9 with hyperparameters: {'lr': 0.00043046721, 'gamma': 0.95, 'exploration_fraction': 0.25999999999999995, 'buffer_size': 10000, 'batch_size': 64}
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
warnings.warn(
Using cpu device
2025-02-01 17:06:19,852 - INFO - Agent achieved final net worth: $10000.00
2025-02-01 17:06:19,853 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-01 17:06:19,853 - INFO - Training DQN agent: Attempt 10 with hyperparameters: {'lr': 0.000387420489, 'gamma': 0.95, 'exploration_fraction': 0.27999999999999997, 'buffer_size': 10000, 'batch_size': 64}
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
warnings.warn(
Using cpu device
2025-02-01 18:41:36,874 - INFO - Agent achieved final net worth: $10000.00
2025-02-01 18:41:36,874 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-01 18:41:36,875 - WARNING - Failed to train a satisfactory DQN agent after multiple attempts.
2025-02-01 18:41:36,875 - INFO - Running final inference with the trained DQN model...
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 869, in <module>
main()
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 795, in main
action, _ = best_agent.predict(obs, deterministic=True)
^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'predict'
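The traceback shows `best_agent` is `None` after every training attempt missed the performance threshold, yet inference still ran. A small guard fails fast with a clear message instead (a sketch; the `run_inference` wrapper is an assumption based on the call in the traceback):

```python
# Sketch of a guard around the failing call in main(); `best_agent` can be
# None when all DQN training attempts fall below the performance threshold.
def run_inference(best_agent, obs):
    if best_agent is None:
        # Fail fast instead of hitting AttributeError deep in inference.
        raise RuntimeError(
            "No satisfactory DQN agent was trained; skipping final inference."
        )
    action, _ = best_agent.predict(obs, deterministic=True)
    return action
```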

File diff suppressed because one or more lines are too long


@@ -1,347 +0,0 @@
(venv) kleinpanic@kleinpanic:~/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src$ py LSTMDQN.py BAT.csv
2025-01-31 22:41:37.524313: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1738363297.545380 3148462 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1738363297.551750 3148462 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-31 22:41:37.573675: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-31 22:41:43,005 - INFO - ===== Resource Statistics =====
2025-01-31 22:41:43,005 - INFO - Physical CPU Cores: 28
2025-01-31 22:41:43,005 - INFO - Logical CPU Cores: 56
2025-01-31 22:41:43,005 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-01-31 22:41:43,005 - INFO - No GPUs detected.
2025-01-31 22:41:43,005 - INFO - =================================
2025-01-31 22:41:43,006 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-01-31 22:41:43,006 - INFO - Loading data from: BAT.csv
2025-01-31 22:41:44,326 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-01-31 22:41:44,339 - INFO - Data loaded and sorted successfully.
2025-01-31 22:41:44,339 - INFO - Calculating technical indicators...
2025-01-31 22:41:44,370 - INFO - Technical indicators calculated successfully.
2025-01-31 22:41:44,379 - INFO - Starting parallel feature engineering with 54 workers...
2025-01-31 22:41:53,902 - INFO - Parallel feature engineering completed.
2025-01-31 22:41:54,028 - INFO - Scaled training features shape: (14134, 15, 17)
2025-01-31 22:41:54,028 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-01-31 22:41:54,028 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-01-31 22:41:54,028 - INFO - Scaled training target shape: (14134,)
2025-01-31 22:41:54,028 - INFO - Scaled validation target shape: (3028,)
2025-01-31 22:41:54,029 - INFO - Scaled testing target shape: (3030,)
2025-01-31 22:41:54,029 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
[I 2025-01-31 22:41:54,029] A new study created in memory with name: no-name-58aeb7f7-b8be-4643-9d01-0d7bcf35db2e
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py:370: FutureWarning: suggest_loguniform has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use suggest_float(..., log=True) instead.
learning_rate = trial.suggest_loguniform('learning_rate', 1e-5, 1e-2)
[W 2025-01-31 22:41:54,037] Trial 0 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.3458004047482393, 'learning_rate': 0.00032571516657639116, 'optimizer': 'Adam', 'decay': 5.1271378208025266e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
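Every trial here fails with `NameError: name 'X_train' is not defined`: the objective reads a name that is not in scope when Optuna invokes it. A common fix is a factory that closes over the data so each worker sees it (a sketch; `build_lstm` and the actual model training are omitted):

```python
# Sketch: bind the training arrays to the objective via a closure instead of
# relying on a module-level X_train that parallel workers cannot see.
def make_lstm_objective(X_train, y_train):
    def lstm_objective(trial):
        learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
        # X_train is now resolvable inside the objective:
        input_shape = (X_train.shape[1], X_train.shape[2])
        # build_lstm(input_shape, ...) and model fitting would go here;
        # this sketch returns a placeholder objective value.
        assert input_shape == X_train.shape[1:]
        return learning_rate
    return lstm_objective

# Usage: study.optimize(make_lstm_objective(X_train, y_train), n_trials=...)
```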
[W 2025-01-31 22:41:54,040] Trial 1 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.41366725075244426, 'learning_rate': 1.4215518116455374e-05, 'optimizer': 'Adam', 'decay': 2.4425472693131955e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,041] Trial 0 failed with value None.
[W 2025-01-31 22:41:54,044] Trial 2 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 64, 'dropout_rate': 0.4338960746358078, 'learning_rate': 0.0008904040106011442, 'optimizer': 'Nadam', 'decay': 5.346913345250019e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,045] Trial 1 failed with value None.
[W 2025-01-31 22:41:54,048] Trial 3 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.12636442800548273, 'learning_rate': 0.00021216094172774624, 'optimizer': 'Adam', 'decay': 6.289573710217091e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,051] Trial 4 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 96, 'dropout_rate': 0.4118163224442708, 'learning_rate': 0.0001753425558060621, 'optimizer': 'Nadam', 'decay': 1.0106893106530013e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,053] Trial 5 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 96, 'dropout_rate': 0.22600776619683294, 'learning_rate': 4.6020052773101484e-05, 'optimizer': 'Nadam', 'decay': 1.401502701741485e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,054] Trial 2 failed with value None.
[W 2025-01-31 22:41:54,059] Trial 6 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.49745444543788064, 'learning_rate': 0.004560559624417403, 'optimizer': 'Adam', 'decay': 9.80562105055051e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,060] Trial 3 failed with value None.
[W 2025-01-31 22:41:54,064] Trial 7 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.11175568582439271, 'learning_rate': 0.000970072556392495, 'optimizer': 'Adam', 'decay': 5.792236253956584e-06} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,065] Trial 4 failed with value None.
[W 2025-01-31 22:41:54,069] Trial 8 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.4128314285072633, 'learning_rate': 0.000545928656752339, 'optimizer': 'Adam', 'decay': 8.349182110406793e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,095] Trial 8 failed with value None.
[W 2025-01-31 22:41:54,073] Trial 5 failed with value None.
[W 2025-01-31 22:41:54,076] Trial 10 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.312090359026424, 'learning_rate': 0.004334434878981849, 'optimizer': 'Nadam', 'decay': 8.946685227991797e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,078] Trial 11 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 32, 'dropout_rate': 0.3176109191721788, 'learning_rate': 0.0010138486071155559, 'optimizer': 'Nadam', 'decay': 2.864596673239629e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,084] Trial 6 failed with value None.
[W 2025-01-31 22:41:54,084] Trial 12 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.23624224169024638, 'learning_rate': 0.0007065434808473306, 'optimizer': 'Adam', 'decay': 1.6045047417478787e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
^^^^^^^
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,088] Trial 7 failed with value None.
[W 2025-01-31 22:41:54,072] Trial 9 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 64, 'dropout_rate': 0.32982534569008337, 'learning_rate': 0.00044815992336546054, 'optimizer': 'Nadam', 'decay': 1.2045464023339681e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,097] Trial 10 failed with value None.
[W 2025-01-31 22:41:54,101] Trial 11 failed with value None.
[W 2025-01-31 22:41:54,104] Trial 12 failed with value None.
[W 2025-01-31 22:41:54,108] Trial 9 failed with value None.
[W 2025-01-31 22:41:54,126] Trial 13 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 32, 'dropout_rate': 0.4314674109696518, 'learning_rate': 0.00020500811974021594, 'optimizer': 'Nadam', 'decay': 9.329438318207097e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,126] Trial 13 failed with value None.
[W 2025-01-31 22:41:54,137] Trial 14 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 64, 'dropout_rate': 0.45933740233556053, 'learning_rate': 0.0016981825407295947, 'optimizer': 'Nadam', 'decay': 3.7526439477629106e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,138] Trial 14 failed with value None.
[W 2025-01-31 22:41:54,139] Trial 15 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.13179726561423677, 'learning_rate': 0.009702870830616994, 'optimizer': 'Nadam', 'decay': 1.5717160470745384e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,140] Trial 15 failed with value None.
[W 2025-01-31 22:41:54,142] Trial 16 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.1184952725205303, 'learning_rate': 0.0002901212127436873, 'optimizer': 'Adam', 'decay': 1.2671796687995818e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,143] Trial 16 failed with value None.
[W 2025-01-31 22:41:54,145] Trial 17 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.3911357548507932, 'learning_rate': 2.1174519659994443e-05, 'optimizer': 'Adam', 'decay': 7.113124525281298e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,146] Trial 18 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.194308829860494, 'learning_rate': 2.3684641389781485e-05, 'optimizer': 'Nadam', 'decay': 2.1823222065039084e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,146] Trial 17 failed with value None.
[W 2025-01-31 22:41:54,147] Trial 19 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 64, 'dropout_rate': 0.34952903992289974, 'learning_rate': 0.0001649975428188158, 'optimizer': 'Nadam', 'decay': 8.961070238582916e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,148] Trial 18 failed with value None.
[W 2025-01-31 22:41:54,150] Trial 19 failed with value None.
[W 2025-01-31 22:41:54,151] Trial 20 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 32, 'dropout_rate': 0.24862299600787863, 'learning_rate': 3.160302043940613e-05, 'optimizer': 'Nadam', 'decay': 4.432627646713297e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,152] Trial 22 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.24247452680935244, 'learning_rate': 0.009143026717679506, 'optimizer': 'Nadam', 'decay': 3.8695560131185495e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,154] Trial 23 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.27974565379013505, 'learning_rate': 0.0005552121580002416, 'optimizer': 'Adam', 'decay': 6.460942114176827e-06} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,155] Trial 20 failed with value None.
[W 2025-01-31 22:41:54,155] Trial 21 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 64, 'dropout_rate': 0.31566223075768207, 'learning_rate': 0.00013277190404539305, 'optimizer': 'Nadam', 'decay': 5.448184988496794e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,156] Trial 22 failed with value None.
[W 2025-01-31 22:41:54,157] Trial 24 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.20684570701871122, 'learning_rate': 2.02919005955524e-05, 'optimizer': 'Nadam', 'decay': 6.367297091468678e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,158] Trial 23 failed with value None.
[W 2025-01-31 22:41:54,158] Trial 25 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 64, 'dropout_rate': 0.14749229469818195, 'learning_rate': 1.6074589705354466e-05, 'optimizer': 'Nadam', 'decay': 2.9293835054420393e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,161] Trial 26 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.38879633341946584, 'learning_rate': 2.5036537142341482e-05, 'optimizer': 'Nadam', 'decay': 4.8346386929100394e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,161] Trial 21 failed with value None.
[W 2025-01-31 22:41:54,161] Trial 27 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.4311830196294676, 'learning_rate': 6.15743775325322e-05, 'optimizer': 'Adam', 'decay': 2.5290071255921133e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,162] Trial 28 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.14813081091496075, 'learning_rate': 0.0017948222377220397, 'optimizer': 'Adam', 'decay': 9.679895886200194e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,163] Trial 24 failed with value None.
[W 2025-01-31 22:41:54,164] Trial 29 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 64, 'dropout_rate': 0.4827525644514289, 'learning_rate': 0.000583829520138558, 'optimizer': 'Adam', 'decay': 3.9540551700479366e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,165] Trial 25 failed with value None.
[W 2025-01-31 22:41:54,166] Trial 26 failed with value None.
[W 2025-01-31 22:41:54,167] Trial 27 failed with value None.
[W 2025-01-31 22:41:54,168] Trial 28 failed with value None.
[W 2025-01-31 22:41:54,169] Trial 29 failed with value None.
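Every trial above fails the same way: `lstm_objective` (LSTMDQN.py, line 383) reads `X_train` as a global that is not defined when Optuna invokes the objective. One common fix is to bind the data to the objective with a closure so nothing depends on module-level state. A minimal sketch of that pattern, assuming the hyperparameter names seen in the log; the `build_lstm` stub and the placeholder return value are illustrative stand-ins for the real Keras code, not code from the repository:

```python
import numpy as np

def build_lstm(input_shape, hyperparams):
    # Hypothetical stand-in for the real Keras builder in LSTMDQN.py;
    # here it only records what it was called with.
    return {"input_shape": input_shape, "hyperparams": hyperparams}

def make_lstm_objective(X_train, y_train):
    """Capture X_train/y_train in a closure so the objective never
    depends on a global that may not exist in the caller's scope."""
    def lstm_objective(trial):
        hyperparams = {
            "num_lstm_layers": trial.suggest_int("num_lstm_layers", 1, 3),
            "lstm_units": trial.suggest_categorical("lstm_units", [32, 64, 96, 128]),
            "dropout_rate": trial.suggest_float("dropout_rate", 0.1, 0.5),
        }
        # X_train now resolves from the enclosing scope, not a global.
        model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
        return 0.0  # placeholder for the real validation loss
    return lstm_objective

# Usage (hypothetical): study.optimize(make_lstm_objective(X_train, y_train), n_trials=30)
```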
Traceback (most recent call last):
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 897, in <module>
main()
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 685, in main
best_lstm_params = study_lstm.best_params
^^^^^^^^^^^^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/study.py", line 119, in best_params
return self.best_trial.params
^^^^^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/study.py", line 162, in best_trial
best_trial = self._storage.get_best_trial(self._study_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/storages/_in_memory.py", line 249, in get_best_trial
raise ValueError("No trials are completed yet.")
ValueError: No trials are completed yet.
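The final crash follows directly from the failures above: when no trial completes, Optuna's `study.best_params` raises `ValueError("No trials are completed yet.")`. A small guard (the helper name is hypothetical) turns that into an explicit failure path instead of an unhandled traceback in `main()`:

```python
def best_params_or_none(study):
    """Return study.best_params, or None when no trial has completed.

    Optuna raises ValueError("No trials are completed yet.") in that case.
    """
    try:
        return study.best_params
    except ValueError:
        return None

# Hypothetical use in main():
# best_lstm_params = best_params_or_none(study_lstm)
# if best_lstm_params is None:
#     raise SystemExit("All LSTM trials failed; fix the objective before tuning.")
```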
@@ -242,7 +242,7 @@ def main():
     vec_env = DummyVecEnv([lambda: raw_env])
     # 4) Load your DQN model
-    model = DQN.load("dqn_stock_trading.zip", env=vec_env)
+    model = DQN.load("best_dqn_model_lstm.zip", env=vec_env)
     # 5) Run inference
     obs = vec_env.reset()