Merge branch 'main' of github.com:MidasTechnologiesLLC/MidasTechnologies
Adding commit
@@ -40,3 +40,255 @@
 2025-01-30 00:56:56,285 - INFO - Scaled validation target shape: (3028,)
 2025-01-30 00:56:56,285 - INFO - Scaled testing target shape: (3030,)
 2025-01-30 00:56:56,285 - INFO - Starting LSTM hyperparameter optimization with Optuna using 10 parallel trials...
+2025-01-30 05:59:42,913 - INFO - ===== Resource Statistics =====
+2025-01-30 05:59:42,913 - INFO - Physical CPU Cores: 28
+2025-01-30 05:59:42,913 - INFO - Logical CPU Cores: 56
+2025-01-30 05:59:42,913 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
+2025-01-30 05:59:42,913 - INFO - No GPUs detected.
+2025-01-30 05:59:42,913 - INFO - =================================
+2025-01-30 05:59:42,917 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
+2025-01-30 05:59:42,917 - INFO - Loading data from: BAT.csv
+2025-01-30 05:59:44,354 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
+2025-01-30 05:59:44,373 - INFO - Data loaded and sorted successfully.
+2025-01-30 05:59:44,373 - INFO - Calculating technical indicators...
+2025-01-30 05:59:44,437 - INFO - Technical indicators calculated successfully.
+2025-01-30 05:59:44,447 - INFO - Starting parallel feature engineering with 54 workers...
+2025-01-30 05:59:53,981 - INFO - Parallel feature engineering completed.
+2025-01-30 05:59:54,103 - INFO - Scaled training features shape: (14134, 15, 17)
+2025-01-30 05:59:54,103 - INFO - Scaled validation features shape: (3028, 15, 17)
+2025-01-30 05:59:54,103 - INFO - Scaled testing features shape: (3030, 15, 17)
+2025-01-30 05:59:54,103 - INFO - Scaled training target shape: (14134,)
+2025-01-30 05:59:54,103 - INFO - Scaled validation target shape: (3028,)
+2025-01-30 05:59:54,103 - INFO - Scaled testing target shape: (3030,)
+2025-01-30 05:59:54,103 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
+2025-01-30 07:24:16,261 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.2601904776023419, 'learning_rate': 0.0029898637649075103, 'optimizer': 'Nadam', 'decay': 6.765206579726146e-06}
+2025-01-30 07:24:16,587 - INFO - Training best LSTM model with optimized hyperparameters...
+2025-01-30 07:54:53,021 - INFO - Evaluating final LSTM model...
+2025-01-30 07:54:54,677 - INFO - Test MSE: 0.0892
+2025-01-30 07:54:54,678 - INFO - Test RMSE: 0.2986
+2025-01-30 07:54:54,678 - INFO - Test MAE: 0.1908
+2025-01-30 07:54:54,678 - INFO - Test R2 Score: 0.9926
+2025-01-30 07:54:54,678 - INFO - Directional Accuracy: 0.4675
+2025-01-30 07:55:15,118 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
+2025-01-30 07:55:15,226 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
+2025-01-30 07:55:15,226 - INFO - Starting DQN hyperparameter tuning with Optuna using 54 parallel trials...
+2025-01-31 00:33:51,246 - INFO - ===== Resource Statistics =====
+2025-01-31 00:33:51,246 - INFO - Physical CPU Cores: 28
+2025-01-31 00:33:51,246 - INFO - Logical CPU Cores: 56
+2025-01-31 00:33:51,246 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
+2025-01-31 00:33:51,246 - INFO - No GPUs detected.
+2025-01-31 00:33:51,247 - INFO - =================================
+2025-01-31 00:33:51,247 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
+2025-01-31 00:33:51,247 - INFO - Loading data from: BAT.csv
+2025-01-31 00:33:52,623 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
+2025-01-31 00:33:52,640 - INFO - Data loaded and sorted successfully.
+2025-01-31 00:33:52,640 - INFO - Calculating technical indicators...
+2025-01-31 00:33:52,680 - INFO - Technical indicators calculated successfully.
+2025-01-31 00:33:52,690 - INFO - Starting parallel feature engineering with 54 workers...
+2025-01-31 00:34:02,440 - INFO - Parallel feature engineering completed.
+2025-01-31 00:34:02,527 - INFO - Scaled training features shape: (14134, 15, 17)
+2025-01-31 00:34:02,527 - INFO - Scaled validation features shape: (3028, 15, 17)
+2025-01-31 00:34:02,527 - INFO - Scaled testing features shape: (3030, 15, 17)
+2025-01-31 00:34:02,527 - INFO - Scaled training target shape: (14134,)
+2025-01-31 00:34:02,527 - INFO - Scaled validation target shape: (3028,)
+2025-01-31 00:34:02,527 - INFO - Scaled testing target shape: (3030,)
+2025-01-31 00:34:02,527 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
+2025-01-31 02:07:22,583 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.18042015532719258, 'learning_rate': 0.008263668593877975, 'optimizer': 'Nadam', 'decay': 7.065697336348234e-05}
+2025-01-31 02:07:22,887 - INFO - Training best LSTM model with optimized hyperparameters...
+2025-01-31 02:29:45,755 - INFO - Evaluating final LSTM model...
+2025-01-31 02:29:47,478 - INFO - Test MSE: 0.0765
+2025-01-31 02:29:47,479 - INFO - Test RMSE: 0.2765
+2025-01-31 02:29:47,479 - INFO - Test MAE: 0.1770
+2025-01-31 02:29:47,479 - INFO - Test R2 Score: 0.9937
+2025-01-31 02:29:47,479 - INFO - Directional Accuracy: 0.4823
+2025-01-31 02:30:07,570 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
+2025-01-31 02:30:07,639 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
+2025-01-31 02:30:07,640 - INFO - Starting DQN hyperparameter tuning with Optuna using 54 parallel trials...
+2025-01-31 02:49:01,996 - INFO - ===== Resource Statistics =====
+2025-01-31 02:49:01,996 - INFO - Physical CPU Cores: 28
+2025-01-31 02:49:01,996 - INFO - Logical CPU Cores: 56
+2025-01-31 02:49:01,996 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
+2025-01-31 02:49:01,996 - INFO - No GPUs detected.
+2025-01-31 02:49:01,996 - INFO - =================================
+2025-01-31 02:49:01,997 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
+2025-01-31 02:49:01,997 - INFO - Loading data from: BAT.csv
+2025-01-31 02:49:03,423 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
+2025-01-31 02:49:03,440 - INFO - Data loaded and sorted successfully.
+2025-01-31 02:49:03,440 - INFO - Calculating technical indicators...
+2025-01-31 02:49:03,479 - INFO - Technical indicators calculated successfully.
+2025-01-31 02:49:03,489 - INFO - Starting parallel feature engineering with 54 workers...
+2025-01-31 02:49:12,566 - INFO - Parallel feature engineering completed.
+2025-01-31 02:49:12,661 - INFO - Scaled training features shape: (14134, 15, 17)
+2025-01-31 02:49:12,661 - INFO - Scaled validation features shape: (3028, 15, 17)
+2025-01-31 02:49:12,661 - INFO - Scaled testing features shape: (3030, 15, 17)
+2025-01-31 02:49:12,661 - INFO - Scaled training target shape: (14134,)
+2025-01-31 02:49:12,661 - INFO - Scaled validation target shape: (3028,)
+2025-01-31 02:49:12,661 - INFO - Scaled testing target shape: (3030,)
+2025-01-31 02:49:12,662 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
+2025-01-31 04:26:41,571 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.22440715941937378, 'learning_rate': 0.003707877783335322, 'optimizer': 'Adam', 'decay': 8.301665110041122e-05}
+2025-01-31 04:26:41,960 - INFO - Training best LSTM model with optimized hyperparameters...
+2025-01-31 04:51:19,731 - INFO - Evaluating final LSTM model...
+2025-01-31 04:51:21,389 - INFO - Test MSE: 0.0877
+2025-01-31 04:51:21,389 - INFO - Test RMSE: 0.2962
+2025-01-31 04:51:21,390 - INFO - Test MAE: 0.1893
+2025-01-31 04:51:21,390 - INFO - Test R2 Score: 0.9927
+2025-01-31 04:51:21,390 - INFO - Directional Accuracy: 0.4764
+2025-01-31 04:51:21,734 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
+2025-01-31 04:51:21,768 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
+2025-01-31 04:51:21,768 - INFO - Starting DQN hyperparameter tuning with Optuna using 54 parallel trials...
+2025-01-31 22:41:43,005 - INFO - ===== Resource Statistics =====
+2025-01-31 22:41:43,005 - INFO - Physical CPU Cores: 28
+2025-01-31 22:41:43,005 - INFO - Logical CPU Cores: 56
+2025-01-31 22:41:43,005 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
+2025-01-31 22:41:43,005 - INFO - No GPUs detected.
+2025-01-31 22:41:43,005 - INFO - =================================
+2025-01-31 22:41:43,006 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
+2025-01-31 22:41:43,006 - INFO - Loading data from: BAT.csv
+2025-01-31 22:41:44,326 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
+2025-01-31 22:41:44,339 - INFO - Data loaded and sorted successfully.
+2025-01-31 22:41:44,339 - INFO - Calculating technical indicators...
+2025-01-31 22:41:44,370 - INFO - Technical indicators calculated successfully.
+2025-01-31 22:41:44,379 - INFO - Starting parallel feature engineering with 54 workers...
+2025-01-31 22:41:53,902 - INFO - Parallel feature engineering completed.
+2025-01-31 22:41:54,028 - INFO - Scaled training features shape: (14134, 15, 17)
+2025-01-31 22:41:54,028 - INFO - Scaled validation features shape: (3028, 15, 17)
+2025-01-31 22:41:54,028 - INFO - Scaled testing features shape: (3030, 15, 17)
+2025-01-31 22:41:54,028 - INFO - Scaled training target shape: (14134,)
+2025-01-31 22:41:54,028 - INFO - Scaled validation target shape: (3028,)
+2025-01-31 22:41:54,029 - INFO - Scaled testing target shape: (3030,)
+2025-01-31 22:41:54,029 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
+2025-01-31 22:50:02,369 - INFO - ===== Resource Statistics =====
+2025-01-31 22:50:02,369 - INFO - Physical CPU Cores: 28
+2025-01-31 22:50:02,369 - INFO - Logical CPU Cores: 56
+2025-01-31 22:50:02,369 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
+2025-01-31 22:50:02,369 - INFO - No GPUs detected.
+2025-01-31 22:50:02,369 - INFO - =================================
+2025-01-31 22:50:02,370 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
+2025-01-31 22:50:02,370 - INFO - Loading data from: BAT.csv
+2025-01-31 22:50:03,713 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
+2025-01-31 22:50:03,730 - INFO - Data loaded and sorted successfully.
+2025-01-31 22:50:03,730 - INFO - Calculating technical indicators...
+2025-01-31 22:50:03,769 - INFO - Technical indicators calculated successfully.
+2025-01-31 22:50:03,779 - INFO - Starting parallel feature engineering with 54 workers...
+2025-01-31 22:50:13,311 - INFO - Parallel feature engineering completed.
+2025-01-31 22:50:13,420 - INFO - Scaled training features shape: (14134, 15, 17)
+2025-01-31 22:50:13,421 - INFO - Scaled validation features shape: (3028, 15, 17)
+2025-01-31 22:50:13,421 - INFO - Scaled testing features shape: (3030, 15, 17)
+2025-01-31 22:50:13,421 - INFO - Scaled training target shape: (14134,)
+2025-01-31 22:50:13,421 - INFO - Scaled validation target shape: (3028,)
+2025-01-31 22:50:13,421 - INFO - Scaled testing target shape: (3030,)
+2025-01-31 22:50:13,421 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
+2025-02-01 00:14:29,571 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.22253761543698394, 'learning_rate': 0.008212670428960327, 'optimizer': 'Adam', 'decay': 7.743037774398402e-05}
+2025-02-01 00:14:29,996 - INFO - Training best LSTM model with optimized hyperparameters...
+2025-02-01 00:30:08,485 - INFO - Evaluating final LSTM model...
+2025-02-01 00:30:09,683 - INFO - Test MSE: 0.1002
+2025-02-01 00:30:09,683 - INFO - Test RMSE: 0.3165
+2025-02-01 00:30:09,683 - INFO - Test MAE: 0.2066
+2025-02-01 00:30:09,683 - INFO - Test R2 Score: 0.9917
+2025-02-01 00:30:09,683 - INFO - Directional Accuracy: 0.4784
+2025-02-01 00:30:09,986 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
+2025-02-01 00:30:10,009 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
+2025-02-01 00:30:10,009 - INFO - Training DQN agent: Attempt 1 with hyperparameters: {'lr': 0.001, 'gamma': 0.95, 'exploration_fraction': 0.1, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 01:19:25,680 - INFO - ===== Resource Statistics =====
+2025-02-01 01:19:25,680 - INFO - Physical CPU Cores: 28
+2025-02-01 01:19:25,680 - INFO - Logical CPU Cores: 56
+2025-02-01 01:19:25,681 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
+2025-02-01 01:19:25,681 - INFO - No GPUs detected.
+2025-02-01 01:19:25,681 - INFO - =================================
+2025-02-01 01:19:25,681 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
+2025-02-01 01:19:25,681 - INFO - Loading data from: BAT.csv
+2025-02-01 01:19:26,678 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
+2025-02-01 01:19:26,695 - INFO - Data loaded and sorted successfully.
+2025-02-01 01:19:26,695 - INFO - Calculating technical indicators...
+2025-02-01 01:19:26,732 - INFO - Technical indicators calculated successfully.
+2025-02-01 01:19:26,742 - INFO - Starting parallel feature engineering with 54 workers...
+2025-02-01 01:19:35,239 - INFO - Parallel feature engineering completed.
+2025-02-01 01:19:35,331 - INFO - Scaled training features shape: (14134, 15, 17)
+2025-02-01 01:19:35,331 - INFO - Scaled validation features shape: (3028, 15, 17)
+2025-02-01 01:19:35,331 - INFO - Scaled testing features shape: (3030, 15, 17)
+2025-02-01 01:19:35,331 - INFO - Scaled training target shape: (14134,)
+2025-02-01 01:19:35,332 - INFO - Scaled validation target shape: (3028,)
+2025-02-01 01:19:35,332 - INFO - Scaled testing target shape: (3030,)
+2025-02-01 01:19:35,332 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
+2025-02-01 02:37:58,098 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.1281900339578652, 'learning_rate': 0.008792091963204256, 'optimizer': 'Nadam', 'decay': 3.5015323637201504e-06}
+2025-02-01 02:37:58,462 - INFO - Training best LSTM model with optimized hyperparameters...
+2025-02-01 02:54:38,646 - INFO - Evaluating final LSTM model...
+2025-02-01 02:54:39,805 - INFO - Test MSE: 0.0718
+2025-02-01 02:54:39,805 - INFO - Test RMSE: 0.2680
+2025-02-01 02:54:39,805 - INFO - Test MAE: 0.1710
+2025-02-01 02:54:39,805 - INFO - Test R2 Score: 0.9941
+2025-02-01 02:54:39,805 - INFO - Directional Accuracy: 0.4810
+2025-02-01 02:54:40,118 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
+2025-02-01 02:54:40,145 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
+2025-02-01 02:54:40,145 - INFO - Training DQN agent: Attempt 1 with hyperparameters: {'lr': 0.001, 'gamma': 0.95, 'exploration_fraction': 0.1, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 04:26:48,913 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 04:26:48,913 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 04:26:48,913 - INFO - Training DQN agent: Attempt 2 with hyperparameters: {'lr': 0.0009000000000000001, 'gamma': 0.95, 'exploration_fraction': 0.12000000000000001, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 06:01:15,175 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 06:01:15,176 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 06:01:15,176 - INFO - Training DQN agent: Attempt 3 with hyperparameters: {'lr': 0.0008100000000000001, 'gamma': 0.95, 'exploration_fraction': 0.14, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 07:35:48,244 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 07:35:48,244 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 07:35:48,244 - INFO - Training DQN agent: Attempt 4 with hyperparameters: {'lr': 0.000729, 'gamma': 0.95, 'exploration_fraction': 0.16, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 09:10:48,457 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 09:10:48,458 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 09:10:48,458 - INFO - Training DQN agent: Attempt 5 with hyperparameters: {'lr': 0.0006561000000000001, 'gamma': 0.95, 'exploration_fraction': 0.18, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 10:45:37,862 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 10:45:37,862 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 10:45:37,862 - INFO - Training DQN agent: Attempt 6 with hyperparameters: {'lr': 0.00059049, 'gamma': 0.95, 'exploration_fraction': 0.19999999999999998, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 12:20:51,667 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 12:20:51,668 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 12:20:51,668 - INFO - Training DQN agent: Attempt 7 with hyperparameters: {'lr': 0.000531441, 'gamma': 0.95, 'exploration_fraction': 0.21999999999999997, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 13:55:52,555 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 13:55:52,556 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 13:55:52,556 - INFO - Training DQN agent: Attempt 8 with hyperparameters: {'lr': 0.0004782969, 'gamma': 0.95, 'exploration_fraction': 0.23999999999999996, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 15:31:11,610 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 15:31:11,611 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 15:31:11,611 - INFO - Training DQN agent: Attempt 9 with hyperparameters: {'lr': 0.00043046721, 'gamma': 0.95, 'exploration_fraction': 0.25999999999999995, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 17:06:19,852 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 17:06:19,853 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 17:06:19,853 - INFO - Training DQN agent: Attempt 10 with hyperparameters: {'lr': 0.000387420489, 'gamma': 0.95, 'exploration_fraction': 0.27999999999999997, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 18:41:36,874 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 18:41:36,874 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 18:41:36,875 - WARNING - Failed to train a satisfactory DQN agent after multiple attempts.
+2025-02-01 18:41:36,875 - INFO - Running final inference with the trained DQN model...
+2025-02-01 20:30:14,186 - INFO - ===== Resource Statistics =====
+2025-02-01 20:30:14,186 - INFO - Physical CPU Cores: 28
+2025-02-01 20:30:14,186 - INFO - Logical CPU Cores: 56
+2025-02-01 20:30:14,187 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
+2025-02-01 20:30:14,187 - INFO - No GPUs detected.
+2025-02-01 20:30:14,187 - INFO - =================================
+2025-02-01 20:30:14,187 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
+2025-02-01 20:30:14,187 - INFO - Loading data from: BAT.csv
+2025-02-01 20:30:15,845 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
+2025-02-01 20:30:15,862 - INFO - Data loaded and sorted successfully.
+2025-02-01 20:30:15,862 - INFO - Calculating technical indicators...
+2025-02-01 20:30:15,901 - INFO - Technical indicators calculated successfully.
+2025-02-01 20:30:15,912 - INFO - Starting parallel feature engineering with 54 workers...
+2025-02-01 20:30:25,135 - INFO - Parallel feature engineering completed.
+2025-02-01 20:30:25,228 - INFO - Scaled training features shape: (14134, 15, 17)
+2025-02-01 20:30:25,228 - INFO - Scaled validation features shape: (3028, 15, 17)
+2025-02-01 20:30:25,228 - INFO - Scaled testing features shape: (3030, 15, 17)
+2025-02-01 20:30:25,228 - INFO - Scaled training target shape: (14134,)
+2025-02-01 20:30:25,228 - INFO - Scaled validation target shape: (3028,)
+2025-02-01 20:30:25,228 - INFO - Scaled testing target shape: (3030,)
+2025-02-01 20:30:25,228 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
+2025-02-01 21:56:20,956 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.18495715637312699, 'learning_rate': 0.0033267819284254043, 'optimizer': 'Nadam', 'decay': 8.705868913987463e-05}
+2025-02-01 21:56:21,377 - INFO - Training best LSTM model with optimized hyperparameters...
+2025-02-01 22:27:53,128 - INFO - Evaluating final LSTM model...
+2025-02-01 22:27:55,211 - INFO - Test MSE: 0.0798
+2025-02-01 22:27:55,211 - INFO - Test RMSE: 0.2824
+2025-02-01 22:27:55,211 - INFO - Test MAE: 0.1797
+2025-02-01 22:27:55,211 - INFO - Test R2 Score: 0.9934
+2025-02-01 22:27:55,211 - INFO - Directional Accuracy: 0.4721
+2025-02-01 22:27:55,573 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
+2025-02-01 22:27:55,606 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
+2025-02-01 22:27:55,606 - INFO - Training DQN agent: Attempt 1 with hyperparameters: {'lr': 0.001, 'gamma': 0.95, 'exploration_fraction': 0.1, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-01 23:59:48,100 - INFO - Agent achieved final net worth: $10000.00
+2025-02-01 23:59:48,100 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
+2025-02-01 23:59:48,100 - INFO - Training DQN agent: Attempt 2 with hyperparameters: {'lr': 0.0009000000000000001, 'gamma': 0.95, 'exploration_fraction': 0.12000000000000001, 'buffer_size': 10000, 'batch_size': 64}
+2025-02-02 01:34:10,564 - INFO - Agent achieved final net worth: $10999.26
+2025-02-02 01:34:10,564 - INFO - Agent meets performance criteria!
+2025-02-02 01:34:10,572 - INFO - Final DQN agent trained and saved.
+2025-02-02 01:34:10,572 - INFO - Running final inference with the trained DQN model...
+2025-02-02 02:00:51,337 - INFO - Final inference completed. Results logged and displayed.
@@ -5,6 +5,8 @@ import numpy as np
 import pandas as pd
 import logging
 from tabulate import tabulate
+import matplotlib
+matplotlib.use("Agg")
 import matplotlib.pyplot as plt
 import seaborn as sns
 import psutil
@@ -37,16 +39,15 @@ import time
 # Suppress TensorFlow logs beyond errors
 os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
 
+# =============================================================================
+# GLOBAL LOCK FOR DQN TRAINING (to force one-at-a-time usage of the shared LSTM)
+# =============================================================================
+dqn_lock = threading.Lock()
+
 # ============================
 # Resource Detection Functions
 # ============================
 def get_cpu_info():
-    """
-    Retrieves CPU information including physical and logical cores and current usage per core.
-
-    Returns:
-        dict: Dictionary containing physical cores, logical cores, and CPU usage per core.
-    """
     cpu_count = psutil.cpu_count(logical=False)  # Physical cores
     cpu_count_logical = psutil.cpu_count(logical=True)  # Logical cores
     cpu_percent = psutil.cpu_percent(interval=1, percpu=True)
@@ -57,12 +58,6 @@ def get_cpu_info():
     }
 
 def get_gpu_info():
-    """
-    Retrieves GPU information including load, memory usage, and temperature.
-
-    Returns:
-        list: List of dictionaries containing GPU stats.
-    """
     gpus = GPUtil.getGPUs()
     gpu_info = []
     for gpu in gpus:
@@ -78,13 +73,6 @@ def get_gpu_info():
     return gpu_info
 
 def configure_tensorflow(cpu_stats, gpu_stats):
-    """
-    Configures TensorFlow to utilize available CPU and GPU resources efficiently.
-
-    Args:
-        cpu_stats (dict): Dictionary containing CPU statistics.
-        gpu_stats (list): List of dictionaries containing GPU statistics.
-    """
     logical_cores = cpu_stats['logical_cores']
     os.environ["OMP_NUM_THREADS"] = str(logical_cores)
     os.environ["TF_NUM_INTRAOP_THREADS"] = str(logical_cores)
@@ -108,12 +96,6 @@ def configure_tensorflow(cpu_stats, gpu_stats):
 # Resource Monitoring Function (Optional)
 # ============================
 def monitor_resources(interval=60):
-    """
-    Continuously monitors and logs CPU and GPU usage at specified intervals.
-
-    Args:
-        interval (int): Time in seconds between each monitoring snapshot.
-    """
     while True:
         cpu = psutil.cpu_percent(interval=1, percpu=True)
         gpu = get_gpu_info()
@@ -153,7 +135,6 @@ def load_data(file_path):
         'close': 'Close'
     }
     df.rename(columns=rename_mapping, inplace=True)
-
     logging.info(f"Data columns after renaming: {df.columns.tolist()}")
     df.sort_values('Date', inplace=True)
     df.reset_index(drop=True, inplace=True)
@@ -245,10 +226,11 @@ def parse_arguments():
                         help='Number of episodes to evaluate DQN in the tuning step. Default=1 (entire dataset once).')
     parser.add_argument('--n_trials_lstm', type=int, default=30,
                         help='Number of Optuna trials for LSTM. Default=30.')
+    # The following arguments are no longer used in sequential DQN training:
     parser.add_argument('--n_trials_dqn', type=int, default=20,
-                        help='Number of Optuna trials for DQN. Default=20.')
+                        help='(Unused in sequential DQN training)')
     parser.add_argument('--max_parallel_trials', type=int, default=None,
-                        help='Maximum number of parallel Optuna trials. Defaults to (logical cores - 2).')
+                        help='(Unused in sequential DQN training)')
     parser.add_argument('--preprocess_workers', type=int, default=None,
                         help='Number of worker processes for data preprocessing. Defaults to (logical cores - 2).')
     parser.add_argument('--monitor_resources', action='store_true',
@@ -274,7 +256,6 @@ class ActionLoggingCallback(BaseCallback):
         self.reward_buffer = []
 
     def _on_step(self):
-        # For Stable Baselines3, access actions and rewards via self.locals
         action = self.locals.get('action', None)
         reward = self.locals.get('reward', None)
         if action is not None:
@@ -305,10 +286,8 @@ class ActionLoggingCallback(BaseCallback):
 def parallel_feature_engineering(row):
     """
     Placeholder function for feature engineering. Modify as needed.
-
     Args:
         row (pd.Series): A row from the DataFrame.
-
     Returns:
         pd.Series: Processed row.
     """
@@ -318,11 +297,9 @@ def parallel_feature_engineering(row):
 def feature_engineering_parallel(df, num_workers):
     """
     Applies feature engineering in parallel using multiprocessing.
-
     Args:
         df (pd.DataFrame): DataFrame to process.
         num_workers (int): Number of worker processes.
-
     Returns:
         pd.DataFrame: Processed DataFrame.
     """
@@ -334,7 +311,208 @@ def feature_engineering_parallel(df, num_workers):
     return df_processed
 
 # ============================
-# Main Function with Enhanced Optimizations
+# LSTM Model Construction & Training (Including Optuna Tuning)
+# ============================
+def build_lstm(input_shape, hyperparams):
+    model = Sequential()
+    num_layers = hyperparams['num_lstm_layers']
+    units = hyperparams['lstm_units']
+    drop = hyperparams['dropout_rate']
+    for i in range(num_layers):
+        return_seqs = (i < num_layers - 1)
+        model.add(Bidirectional(
+            LSTM(units, return_sequences=return_seqs, kernel_regularizer=l2(1e-4)),
+            input_shape=input_shape if i == 0 else None
+        ))
+        model.add(Dropout(drop))
+    model.add(Dense(1, activation='linear'))
+
+    opt_name = hyperparams['optimizer']
+    lr = hyperparams['learning_rate']
+    decay = hyperparams['decay']
+    if opt_name == 'Adam':
+        opt = Adam(learning_rate=lr, decay=decay)
+    elif opt_name == 'Nadam':
+        opt = Nadam(learning_rate=lr)
+    else:
+        opt = Adam(learning_rate=lr)
+
+    model.compile(loss=Huber(), optimizer=opt, metrics=['mae'])
+    return model
+
+# NOTE: The following lstm_objective is now defined as an inner function in main,
+# so that it can access X_train, y_train, X_val, y_val.
+
+# ============================
+# Custom Gym Environment with LSTM Predictions
+# ============================
+class StockTradingEnvWithLSTM(gym.Env):
+    """
+    A custom OpenAI Gym environment for stock trading that integrates LSTM model predictions.
+    Observation includes technical indicators, account information, and predicted next close price.
+    """
+    metadata = {'render.modes': ['human']}
+
+    def __init__(self, df, feature_columns, lstm_model, scaler_features, scaler_target,
+                 window_size=15, initial_balance=10000, transaction_cost=0.001):
+        super(StockTradingEnvWithLSTM, self).__init__()
+        self.df = df.reset_index(drop=True)
+        self.feature_columns = feature_columns
+        self.lstm_model = lstm_model
+        self.scaler_features = scaler_features
+        self.scaler_target = scaler_target
+        self.window_size = window_size
+
+        self.initial_balance = initial_balance
+        self.balance = initial_balance
+        self.net_worth = initial_balance
+        self.transaction_cost = transaction_cost
+
+        self.max_steps = len(df)
+        self.current_step = 0
+        self.shares_held = 0
+        self.cost_basis = 0
+
+        # Raw array of features
+        self.raw_features = df[feature_columns].values
+
+        # Action space: 0=Sell, 1=Hold, 2=Buy
+        self.action_space = spaces.Discrete(3)
+
+        # Observation space: [technical indicators, balance, shares, cost_basis, predicted_next_close]
+        self.observation_space = spaces.Box(
+            low=0, high=1,
+            shape=(len(feature_columns) + 3 + 1,),
+            dtype=np.float32
+        )
+        # Forced lock for LSTM predictions
+        self.lstm_lock = threading.Lock()
+
+    def reset(self):
+        self.balance = self.initial_balance
+        self.net_worth = self.initial_balance
+        self.current_step = 0
+        self.shares_held = 0
+        self.cost_basis = 0
+        return self._get_obs()
+
+    def _get_obs(self):
+        row = self.raw_features[self.current_step]
+        row_max = np.max(row) if np.max(row) != 0 else 1.0
+        row_norm = row / row_max
+
+        # Account info
+        additional = np.array([
+            self.balance / self.initial_balance,
+            self.shares_held / 100.0,  # Assuming max 100 shares for normalization
+            self.cost_basis / (self.initial_balance + 1e-9)
+        ], dtype=np.float32)
+
+        # LSTM prediction
+        if self.current_step < self.window_size:
+            predicted_close = 0.0
+        else:
+            seq = self.raw_features[self.current_step - self.window_size: self.current_step]
+            seq_scaled = self.scaler_features.transform(seq)
+            seq_scaled = np.expand_dims(seq_scaled, axis=0)  # shape (1, window_size, #features)
+            with self.lstm_lock:
+                pred_scaled = self.lstm_model.predict(seq_scaled, verbose=0).flatten()[0]
+            pred_scaled = np.clip(pred_scaled, 0, 1)
+            unscaled = self.scaler_target.inverse_transform([[pred_scaled]])[0, 0]
+            predicted_close = unscaled / 1000.0  # Adjust normalization as needed
+
+        obs = np.concatenate([row_norm, additional, [predicted_close]]).astype(np.float32)
+        return obs
+
+    def step(self, action):
+        prev_net_worth = self.net_worth
+        current_price = self.df.loc[self.current_step, 'Close']
+
+        if action == 2:  # BUY
+            shares_bought = int(self.balance // current_price)
+            if shares_bought > 0:
+                cost = shares_bought * current_price
+                fee = cost * self.transaction_cost
+                self.balance -= (cost + fee)
+                old_shares = self.shares_held
+                self.shares_held += shares_bought
+                self.cost_basis = ((self.cost_basis * old_shares) + (shares_bought * current_price)) / self.shares_held
+
+        elif action == 0:  # SELL
+            if self.shares_held > 0:
+                revenue = self.shares_held * current_price
+                fee = revenue * self.transaction_cost
+                self.balance += (revenue - fee)
+                self.shares_held = 0
+                self.cost_basis = 0
+
+        self.net_worth = self.balance + self.shares_held * current_price
+        self.current_step += 1
+        done = (self.current_step >= self.max_steps - 1)
+        reward = self.net_worth - prev_net_worth
+        obs = self._get_obs()
+        return obs, reward, done, {}
+
+    def render(self, mode='human'):
+        profit = self.net_worth - self.initial_balance
+        print(f"Step: {self.current_step}, Balance={self.balance:.2f}, Shares={self.shares_held}, NetWorth={self.net_worth:.2f}, Profit={profit:.2f}")
+
+# ============================
+# DQN Training & Evaluation Functions (Sequential Loop)
+# ============================
+def evaluate_dqn_networth(model, env, n_episodes=1):
+    """
+    Evaluates the trained DQN model by simulating trading over a specified number of episodes.
+    Args:
+        model (DQN): Trained DQN model.
+        env (gym.Env): Trading environment instance.
+        n_episodes (int): Number of episodes to run for evaluation.
+    Returns:
+        float: Average final net worth across episodes.
+    """
+    final_net_worths = []
+    for _ in range(n_episodes):
+        obs = env.reset()
+        done = False
+        while not done:
+            action, _ = model.predict(obs, deterministic=True)
+            obs, reward, done, info = env.step(action)
+        final_net_worths.append(env.net_worth)
+    return np.mean(final_net_worths)
+
+def train_and_evaluate_dqn(hyperparams, env_params, total_timesteps, eval_episodes):
+    """
+    Trains a single DQN agent on an environment (using the frozen LSTM) with given hyperparameters,
+    then evaluates its final net worth.
+    Args:
+        hyperparams (dict): Hyperparameters for the DQN model.
+        env_params (dict): Parameters to create the StockTradingEnvWithLSTM.
+        total_timesteps (int): Total timesteps for training.
+        eval_episodes (int): Number of episodes for evaluation.
+    Returns:
+        agent, final_net_worth
+    """
+    env = StockTradingEnvWithLSTM(**env_params)
+    vec_env = DummyVecEnv([lambda: env])
+    with dqn_lock:
+        agent = DQN(
+            'MlpPolicy',
+            vec_env,
+            verbose=1,
+            learning_rate=hyperparams['lr'],
+            gamma=hyperparams['gamma'],
+            exploration_fraction=hyperparams['exploration_fraction'],
+            buffer_size=hyperparams['buffer_size'],
+            batch_size=hyperparams['batch_size'],
+            train_freq=4,
+            target_update_interval=1000
+        )
+        agent.learn(total_timesteps=total_timesteps, callback=ActionLoggingCallback(verbose=0))
+    final_net_worth = evaluate_dqn_networth(agent, env, n_episodes=eval_episodes)
+    return agent, final_net_worth
+
+# ============================
+# MAIN FUNCTION WITH ENHANCED OPTIMIZATIONS
 # ============================
 def main():
     args = parse_arguments()
@@ -343,24 +521,19 @@ def main():
     dqn_total_timesteps = args.dqn_total_timesteps
     dqn_eval_episodes = args.dqn_eval_episodes
     n_trials_lstm = args.n_trials_lstm
-    n_trials_dqn = args.n_trials_dqn
-    max_parallel_trials = args.max_parallel_trials
     preprocess_workers = args.preprocess_workers
     enable_resource_monitor = args.monitor_resources
 
-    # =============================
+    # -----------------------------
     # Setup Logging
-    # =============================
+    # -----------------------------
     logging.basicConfig(level=logging.INFO,
                         format='%(asctime)s - %(levelname)s - %(message)s',
-                        handlers=[
-                            logging.FileHandler("LSTMDQN.log"),
-                            logging.StreamHandler(sys.stdout)
-                        ])
+                        handlers=[logging.FileHandler("LSTMDQN.log"), logging.StreamHandler(sys.stdout)])
 
-    # =============================
+    # -----------------------------
     # Resource Detection & Logging
-    # =============================
+    # -----------------------------
     cpu_stats = get_cpu_info()
     gpu_stats = get_gpu_info()
 
@@ -368,25 +541,22 @@ def main():
     logging.info(f"Physical CPU Cores: {cpu_stats['physical_cores']}")
     logging.info(f"Logical CPU Cores: {cpu_stats['logical_cores']}")
     logging.info(f"CPU Usage per Core: {cpu_stats['cpu_percent']}%")
 
     if gpu_stats:
         logging.info("GPU Statistics:")
         for gpu in gpu_stats:
-            logging.info(f"GPU {gpu['id']} - {gpu['name']}: Load: {gpu['load']}%, "
-                         f"Memory Used: {gpu['memory_used']}MB / {gpu['memory_total']}MB, "
-                         f"Temperature: {gpu['temperature']}°C")
+            logging.info(f"GPU {gpu['id']} - {gpu['name']}: Load: {gpu['load']}%, Memory Used: {gpu['memory_used']}MB / {gpu['memory_total']}MB, Temperature: {gpu['temperature']}°C")
     else:
         logging.info("No GPUs detected.")
     logging.info("=================================")
 
-    # =============================
+    # -----------------------------
     # Configure TensorFlow
-    # =============================
+    # -----------------------------
     configure_tensorflow(cpu_stats, gpu_stats)
 
-    # =============================
+    # -----------------------------
     # Start Resource Monitoring (Optional)
-    # =============================
+    # -----------------------------
     if enable_resource_monitor:
         logging.info("Starting real-time resource monitoring...")
         resource_monitor_thread = threading.Thread(target=monitor_resources, args=(60,), daemon=True)
@@ -409,11 +579,9 @@ def main():
 
     # 2) Controlled Parallel Data Preprocessing
     if preprocess_workers is None:
-        # Default to logical cores minus 2 to prevent overloading
         preprocess_workers = max(1, cpu_stats['logical_cores'] - 2)
     else:
         preprocess_workers = min(preprocess_workers, cpu_stats['logical_cores'])
 
     df = feature_engineering_parallel(df, num_workers=preprocess_workers)
 
     scaler_features = MinMaxScaler()
@@ -425,7 +593,7 @@ def main():
     X_scaled = scaler_features.fit_transform(X_all)
     y_scaled = scaler_target.fit_transform(y_all).flatten()
 
-    # 3) Create sequences
+    # 3) Create sequences for LSTM
     def create_sequences(features, target, window_size):
         X_seq, y_seq = [], []
         for i in range(len(features) - window_size):
@@ -451,40 +619,12 @@ def main():
     logging.info(f"Scaled validation target shape: {y_val.shape}")
     logging.info(f"Scaled testing target shape: {y_test.shape}")
 
-    # 5) Build and compile LSTM model
-    def build_lstm(input_shape, hyperparams):
-        model = Sequential()
-        num_layers = hyperparams['num_lstm_layers']
-        units = hyperparams['lstm_units']
-        drop = hyperparams['dropout_rate']
-        for i in range(num_layers):
-            return_seqs = (i < num_layers - 1)
-            model.add(Bidirectional(
-                LSTM(units, return_sequences=return_seqs, kernel_regularizer=l2(1e-4)),
-                input_shape=input_shape if i == 0 else None
-            ))
-            model.add(Dropout(drop))
-        model.add(Dense(1, activation='linear'))
-
-        opt_name = hyperparams['optimizer']
-        lr = hyperparams['learning_rate']
-        decay = hyperparams['decay']
-        if opt_name == 'Adam':
-            opt = Adam(learning_rate=lr, decay=decay)
-        elif opt_name == 'Nadam':
-            opt = Nadam(learning_rate=lr)
-        else:
-            opt = Adam(learning_rate=lr)
-
-        model.compile(loss=Huber(), optimizer=opt, metrics=['mae'])
-        return model
-
-    # 6) Optuna objective for LSTM
+    # 5) Define the LSTM objective function here (so it has access to X_train, y_train, X_val, y_val)
     def lstm_objective(trial):
         num_lstm_layers = trial.suggest_int('num_lstm_layers', 1, 3)
         lstm_units = trial.suggest_categorical('lstm_units', [32, 64, 96, 128])
         dropout_rate = trial.suggest_float('dropout_rate', 0.1, 0.5)
-        learning_rate = trial.suggest_loguniform('learning_rate', 1e-5, 1e-2)
+        learning_rate = trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True)
         optimizer_name = trial.suggest_categorical('optimizer', ['Adam', 'Nadam'])
         decay = trial.suggest_float('decay', 0.0, 1e-4)
 
@@ -513,24 +653,17 @@ def main():
         val_mae = min(history.history['val_mae'])
         return val_mae
 
-    # 7) Hyperparameter Optimization with Optuna (Parallelized)
-    if max_parallel_trials is None:
-        # Default to logical cores minus 2 to prevent overloading
-        max_parallel_trials = max(1, cpu_stats['logical_cores'] - 2)
-    else:
-        max_parallel_trials = min(max_parallel_trials, cpu_stats['logical_cores'])
-
-    logging.info(f"Starting LSTM hyperparameter optimization with Optuna using {max_parallel_trials} parallel trials...")
+    # 6) Hyperparameter Optimization with Optuna for the LSTM
+    logging.info(f"Starting LSTM hyperparameter optimization with Optuna using {cpu_stats['logical_cores']-2} parallel trials...")
     study_lstm = optuna.create_study(direction='minimize')
-    study_lstm.optimize(lstm_objective, n_trials=n_trials_lstm, n_jobs=max_parallel_trials)
+    study_lstm.optimize(lstm_objective, n_trials=n_trials_lstm, n_jobs=cpu_stats['logical_cores']-2)
     best_lstm_params = study_lstm.best_params
     logging.info(f"Best LSTM Hyperparameters: {best_lstm_params}")
 
-    # 8) Train final LSTM
+    # 7) Train final LSTM with best hyperparameters
     final_lstm = build_lstm((X_train.shape[1], X_train.shape[2]), best_lstm_params)
     early_stop_final = EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)
     lr_reduce_final = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6)
 
     logging.info("Training best LSTM model with optimized hyperparameters...")
     hist = final_lstm.fit(
         X_train, y_train,
@@ -541,8 +674,8 @@ def main():
         verbose=1
     )
 
-    # 9) Evaluate LSTM
-    def evaluate_lstm(model, X_test, y_test):
+    # 8) Evaluate LSTM
+    def evaluate_final_lstm(model, X_test, y_test):
         logging.info("Evaluating final LSTM model...")
         y_pred_scaled = model.predict(X_test).flatten()
         y_pred_scaled = np.clip(y_pred_scaled, 0, 1)
@@ -564,7 +697,6 @@ def main():
         logging.info(f"Test R2 Score: {r2_:.4f}")
         logging.info(f"Directional Accuracy: {directional_accuracy:.4f}")

-        # Plot Actual vs Predicted
         plt.figure(figsize=(14, 7))
         plt.plot(y_test_actual, label='Actual Price')
         plt.plot(y_pred, label='Predicted Price')
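For reference, the Directional Accuracy metric logged above is the fraction of steps where the predicted series moves in the same direction as the actual series. A self-contained sketch of that computation (the toy arrays are illustrative):

    import numpy as np

    y_actual = np.array([100.0, 101.0, 100.5, 102.0])
    y_pred = np.array([100.2, 100.8, 100.9, 101.5])

    direction_actual = np.sign(np.diff(y_actual))  # [ 1., -1.,  1.]
    direction_pred = np.sign(np.diff(y_pred))      # [ 1.,  1.,  1.]
    print(np.mean(direction_actual == direction_pred))  # 2 of 3 moves agree -> ~0.667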
@@ -574,7 +706,6 @@ def main():
         plt.savefig('lstm_actual_vs_pred.png')
         plt.close()

-        # Tabulate first 40 results
         table = []
         limit = min(40, len(y_test_actual))
         for i in range(limit):
@@ -584,271 +715,76 @@ def main():
         print(tabulate(table, headers=headers, tablefmt="pretty"))
         return r2_, directional_accuracy

-    _r2, _diracc = evaluate_lstm(final_lstm, X_test, y_test)
+    _r2, _diracc = evaluate_final_lstm(final_lstm, X_test, y_test)

-    # 10) Save LSTM and Scalers
+    # 9) Save final LSTM model and scalers
     final_lstm.save('best_lstm_model.h5')
     joblib.dump(scaler_features, 'scaler_features.pkl')
     joblib.dump(scaler_target, 'scaler_target.pkl')
     logging.info("Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).")

     ############################################################
-    # B) DQN PART: BUILD ENV THAT USES THE LSTM + FORECAST
+    # B) DQN PART: BUILD ENV THAT USES THE FROZEN LSTM + FORECAST
     ############################################################
-    class StockTradingEnvWithLSTM(gym.Env):
-        """
-        A custom OpenAI Gym environment for stock trading that integrates LSTM model predictions.
-        Observation includes technical indicators, account information, and predicted next close price.
-        """
-        metadata = {'render.modes': ['human']}
-
-        def __init__(self, df, feature_columns, lstm_model, scaler_features, scaler_target,
-                     window_size=15, initial_balance=10000, transaction_cost=0.001):
-            super(StockTradingEnvWithLSTM, self).__init__()
-            self.df = df.reset_index(drop=True)
-            self.feature_columns = feature_columns
-            self.lstm_model = lstm_model
-            self.scaler_features = scaler_features
-            self.scaler_target = scaler_target
-            self.window_size = window_size
-
-            self.initial_balance = initial_balance
-            self.balance = initial_balance
-            self.net_worth = initial_balance
-            self.transaction_cost = transaction_cost
-
-            self.max_steps = len(df)
-            self.current_step = 0
-            self.shares_held = 0
-            self.cost_basis = 0
-
-            # Raw array of features
-            self.raw_features = df[feature_columns].values
-
-            # Action space: 0=Sell, 1=Hold, 2=Buy
-            self.action_space = spaces.Discrete(3)
-
-            # Observation space: [technical indicators, balance, shares, cost_basis, predicted_next_close]
-            self.observation_space = spaces.Box(
-                low=0, high=1,
-                shape=(len(feature_columns) + 3 + 1,),
-                dtype=np.float32
-            )
-
-        def reset(self):
-            self.balance = self.initial_balance
-            self.net_worth = self.initial_balance
-            self.current_step = 0
-            self.shares_held = 0
-            self.cost_basis = 0
-            return self._get_obs()
-
-        def _get_obs(self):
-            row = self.raw_features[self.current_step]
-            row_max = np.max(row) if np.max(row) != 0 else 1.0
-            row_norm = row / row_max
-
-            # Account info
-            additional = np.array([
-                self.balance / self.initial_balance,
-                self.shares_held / 100.0,  # Assuming max 100 shares for normalization
-                self.cost_basis / (self.initial_balance + 1e-9)
-            ], dtype=np.float32)
-
-            # LSTM prediction
-            if self.current_step < self.window_size:
-                # Not enough history => no forecast
-                predicted_close = 0.0
-            else:
-                seq = self.raw_features[self.current_step - self.window_size: self.current_step]
-                seq_scaled = self.scaler_features.transform(seq)
-                seq_scaled = np.expand_dims(seq_scaled, axis=0)  # shape (1, window_size, #features)
-                pred_scaled = self.lstm_model.predict(seq_scaled, verbose=0).flatten()[0]
-                pred_scaled = np.clip(pred_scaled, 0, 1)
-                unscaled = self.scaler_target.inverse_transform([[pred_scaled]])[0, 0]
-                # Normalize predicted close price (assuming a typical price range)
-                predicted_close = unscaled / 1000.0
-
-            obs = np.concatenate([row_norm, additional, [predicted_close]]).astype(np.float32)
-            return obs
-
-        def step(self, action):
-            prev_net_worth = self.net_worth
-            current_price = self.df.loc[self.current_step, 'Close']
-
-            if action == 2:  # BUY
-                shares_bought = int(self.balance // current_price)
-                if shares_bought > 0:
-                    cost = shares_bought * current_price
-                    fee = cost * self.transaction_cost
-                    self.balance -= (cost + fee)
-                    old_shares = self.shares_held
-                    self.shares_held += shares_bought
-                    self.cost_basis = (
-                        (self.cost_basis * old_shares) + (shares_bought * current_price)
-                    ) / self.shares_held
-
-            elif action == 0:  # SELL
-                if self.shares_held > 0:
-                    revenue = self.shares_held * current_price
-                    fee = revenue * self.transaction_cost
-                    self.balance += (revenue - fee)
-                    self.shares_held = 0
-                    self.cost_basis = 0
-
-            self.net_worth = self.balance + self.shares_held * current_price
-            self.current_step += 1
-            done = (self.current_step >= self.max_steps - 1)
-
-            reward = self.net_worth - self.initial_balance
-            obs = self._get_obs()
-            return obs, reward, done, {}
-
-        def render(self, mode='human'):
-            profit = self.net_worth - self.initial_balance
-            print(f"Step: {self.current_step}, "
-                  f"Balance={self.balance:.2f}, "
-                  f"Shares={self.shares_held}, "
-                  f"NetWorth={self.net_worth:.2f}, "
-                  f"Profit={profit:.2f}")
+    # (StockTradingEnvWithLSTM is defined above)

     ###################################
-    # C) DQN HYPERPARAMETER TUNING WITH LSTM
+    # C) SEQUENTIAL DQN TRAINING WITH LSTM INTEGRATION
     ###################################
-    from stable_baselines3.common.evaluation import evaluate_policy
-
-    def evaluate_dqn_networth(model, env, n_episodes=1):
-        """
-        Evaluates the trained DQN model by simulating trading over a specified number of episodes.
-
-        Args:
-            model (stable_baselines3.DQN): Trained DQN model.
-            env (gym.Env): Trading environment instance.
-            n_episodes (int): Number of episodes to run for evaluation.
-
-        Returns:
-            float: Average final net worth across episodes.
-        """
-        final_net_worths = []
-        for _ in range(n_episodes):
-            obs = env.reset()
-            done = False
-            while not done:
-                action, _ = model.predict(obs, deterministic=True)
-                obs, reward, done, info = env.step(action)
-            final_net_worths.append(env.net_worth)
-        return np.mean(final_net_worths)
-
-    def dqn_objective(trial):
-        """
-        Objective function for Optuna to optimize DQN hyperparameters.
-        Minimizes the negative of the final net worth achieved by the DQN agent.
-
-        Args:
-            trial (optuna.trial.Trial): Optuna trial object.
-
-        Returns:
-            float: Negative of the final net worth.
-        """
-        lr = trial.suggest_loguniform("lr", 1e-5, 1e-2)
-        gamma = trial.suggest_float("gamma", 0.8, 0.9999)
-        exploration_fraction = trial.suggest_float("exploration_fraction", 0.01, 0.3)
-        buffer_size = trial.suggest_categorical("buffer_size", [5000, 10000, 20000])
-        batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])
-
-        # Initialize environment
-        env = StockTradingEnvWithLSTM(
-            df=df,
-            feature_columns=feature_columns,
-            lstm_model=final_lstm,  # Use the trained LSTM model
-            scaler_features=scaler_features,
-            scaler_target=scaler_target,
-            window_size=lstm_window_size
-        )
-        vec_env = DummyVecEnv([lambda: env])
-
-        # Initialize DQN model
-        dqn_action_logger = ActionLoggingCallback(verbose=0)
-        model = DQN(
-            'MlpPolicy',
-            vec_env,
-            verbose=0,
-            learning_rate=lr,
-            gamma=gamma,
-            exploration_fraction=exploration_fraction,
-            buffer_size=buffer_size,
-            batch_size=batch_size,
-            train_freq=4,
-            target_update_interval=1000
-        )
-
-        # Train DQN model
-        model.learn(total_timesteps=dqn_total_timesteps, callback=dqn_action_logger)
-
-        # Evaluate final net worth
-        final_net_worth = evaluate_dqn_networth(model, env, n_episodes=dqn_eval_episodes)
-        # Objective is to maximize net worth, so return negative
-        return -final_net_worth
-
-    # 11) Hyperparameter Optimization with Optuna (Parallelized)
-    if max_parallel_trials is None:
-        # Default to logical cores minus 2 to prevent overloading
-        max_parallel_trials = max(1, cpu_stats['logical_cores'] - 2)
-    else:
-        max_parallel_trials = min(max_parallel_trials, cpu_stats['logical_cores'])
-
-    logging.info(f"Starting DQN hyperparameter tuning with Optuna using {max_parallel_trials} parallel trials...")
-    study_dqn = optuna.create_study(direction='minimize')
-    study_dqn.optimize(dqn_objective, n_trials=n_trials_dqn, n_jobs=max_parallel_trials)
-    best_dqn_params = study_dqn.best_params
-    logging.info(f"Best DQN Hyperparameters: {best_dqn_params}")
-
-    ###################################
-    # D) TRAIN FINAL DQN WITH BEST PARAMETERS
-    ###################################
-    logging.info("Training final DQN model with best hyperparameters...")
-    env_final = StockTradingEnvWithLSTM(
-        df=df,
-        feature_columns=feature_columns,
-        lstm_model=final_lstm,
-        scaler_features=scaler_features,
-        scaler_target=scaler_target,
-        window_size=lstm_window_size
-    )
-    vec_env_final = DummyVecEnv([lambda: env_final])
-
-    final_dqn_logger = ActionLoggingCallback(verbose=1)  # Enable detailed logging
-
-    final_model = DQN(
-        'MlpPolicy',
-        vec_env_final,
-        verbose=1,
-        learning_rate=best_dqn_params['lr'],
-        gamma=best_dqn_params['gamma'],
-        exploration_fraction=best_dqn_params['exploration_fraction'],
-        buffer_size=best_dqn_params['buffer_size'],
-        batch_size=best_dqn_params['batch_size'],
-        train_freq=4,
-        target_update_interval=1000
-    )
-    final_model.learn(total_timesteps=dqn_total_timesteps, callback=final_dqn_logger)
-    final_model.save("best_dqn_model_lstm.zip")
-    logging.info("Final DQN model trained and saved as 'best_dqn_model_lstm.zip'.")
-
-    ###################################
-    # E) FINAL INFERENCE & LOG RESULTS
-    ###################################
+    env_params = {
+        'df': df,
+        'feature_columns': feature_columns,
+        'lstm_model': final_lstm,  # Use the frozen, best LSTM model
+        'scaler_features': scaler_features,
+        'scaler_target': scaler_target,
+        'window_size': lstm_window_size,
+        'initial_balance': 10000,
+        'transaction_cost': 0.001
+    }
+
+    # Base DQN hyperparameters (adjust as needed)
+    base_hyperparams = {
+        'lr': 1e-3,
+        'gamma': 0.95,
+        'exploration_fraction': 0.1,
+        'buffer_size': 10000,
+        'batch_size': 64
+    }
+
+    # Define performance threshold (final net worth must be above this)
+    PERFORMANCE_THRESHOLD = 10500.0
+    current_hyperparams = base_hyperparams.copy()
+    max_attempts = 10
+    best_agent = None
+
+    for attempt in range(max_attempts):
+        logging.info(f"Training DQN agent: Attempt {attempt+1} with hyperparameters: {current_hyperparams}")
+        agent, net_worth = train_and_evaluate_dqn(current_hyperparams, env_params,
+                                                  total_timesteps=dqn_total_timesteps,
+                                                  eval_episodes=dqn_eval_episodes)
+        logging.info(f"Agent achieved final net worth: ${net_worth:.2f}")
+        if net_worth >= PERFORMANCE_THRESHOLD:
+            logging.info("Agent meets performance criteria!")
+            best_agent = agent
+            best_agent.save("best_dqn_model_lstm.zip")
+            break
+        else:
+            logging.info("Performance below threshold. Adjusting hyperparameters and retrying...")
+            current_hyperparams['lr'] *= 0.9  # decrease learning rate by 10%
+            current_hyperparams['exploration_fraction'] = min(current_hyperparams['exploration_fraction'] + 0.02, 0.3)
+
+    if best_agent is None:
+        logging.warning("Failed to train a satisfactory DQN agent after multiple attempts. Using last trained model")
+        best_agent = agent
+    else:
+        logging.info("Final DQN agent trained and saved.")
+
+    ###################################
+    # D) FINAL INFERENCE & LOG RESULTS
+    ###################################
     logging.info("Running final inference with the trained DQN model...")

-    env_test = StockTradingEnvWithLSTM(
-        df=df,
-        feature_columns=feature_columns,
-        lstm_model=final_lstm,
-        scaler_features=scaler_features,
-        scaler_target=scaler_target,
-        window_size=lstm_window_size
-    )
+    env_test = StockTradingEnvWithLSTM(**env_params)
     obs = env_test.reset()
     done = False
     total_reward = 0.0
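The sequential loop introduced above retrains from scratch with a slightly gentler learning rate and wider exploration after each failed attempt. This standalone snippet only prints that adjustment schedule over the ten allowed attempts (illustrative, no training involved):

    lr, expl = 1e-3, 0.1
    for attempt in range(1, 11):
        print(f"attempt {attempt}: lr={lr:.6f}, exploration_fraction={expl:.2f}")
        lr *= 0.9                      # cut learning rate by 10% per failed attempt
        expl = min(expl + 0.02, 0.3)   # widen exploration, capped at 0.3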
@@ -857,7 +793,7 @@ def main():

     while not done:
         step_count += 1
-        action, _ = final_model.predict(obs, deterministic=True)
+        action, _ = best_agent.predict(obs, deterministic=True)
         obs, reward, done, info = env_test.step(action)
         total_reward += reward
         step_data.append({
@@ -902,31 +838,24 @@ def main():
     logging.info("Final inference completed. Results logged and displayed.")

     ###################################
-    # F) OPTIONAL: RETRY LOOP IF NET WORTH < THRESHOLD
+    # E) OPTIONAL: RETRY LOOP IF NET WORTH < THRESHOLD
     ###################################
-    NET_WORTH_THRESHOLD = 10500.0  # example threshold
-
-    if final_net_worth < NET_WORTH_THRESHOLD:
-        logging.warning(f"Final net worth (${final_net_worth:.2f}) is below ${NET_WORTH_THRESHOLD:.2f}. Retraining the same DQN model to learn from mistakes...")
-
-        # We continue training the SAME final_model without resetting its replay buffer.
-        # By setting `reset_num_timesteps=False`, we keep the replay buffer and learned weights.
+    if final_net_worth < PERFORMANCE_THRESHOLD:
+        logging.warning(f"Final net worth (${final_net_worth:.2f}) is below ${PERFORMANCE_THRESHOLD:.2f}. Retraining the same DQN model to learn from mistakes...")
+
         additional_timesteps = 50000
         logging.info(f"Retraining the existing DQN model for an additional {additional_timesteps} timesteps (keeping old experiences).")
-        # If you want to see action distributions again, you can keep the same callback or define a new one:
-        final_model.learn(
+        best_agent.learn(
             total_timesteps=additional_timesteps,
             reset_num_timesteps=False,  # Keep replay buffer + internal step counter
-            callback=final_dqn_logger  # Optional: to log actions again
+            callback=ActionLoggingCallback(verbose=1)
         )

-        # Evaluate again
         obs = env_test.reset()
         done = False
         second_total_reward = 0.0
         while not done:
-            action, _ = final_model.predict(obs, deterministic=True)
+            action, _ = best_agent.predict(obs, deterministic=True)
             obs, reward, done, info = env_test.step(action)
             second_total_reward += reward
@@ -934,9 +863,9 @@ def main():
         second_profit = second_net_worth - env_test.initial_balance
         logging.info(f"After additional training, new final net worth=${second_net_worth:.2f}, profit=${second_profit:.2f}")

-        if second_net_worth < NET_WORTH_THRESHOLD:
+        if second_net_worth < PERFORMANCE_THRESHOLD:
             logging.warning("Even after continued training, net worth is still below threshold. Consider a deeper hyperparameter search or analyzing the environment settings.")

 if __name__ == "__main__":
     main()
871  src/Machine-Learning/LSTM-python/src/backups/LSTMDQNbadreward.py  Normal file
@@ -0,0 +1,871 @@
import os
import sys
import argparse
import numpy as np
import pandas as pd
import logging
from tabulate import tabulate
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import seaborn as sns
import psutil
import GPUtil
import tensorflow as tf
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.losses import Huber
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam, Nadam

from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import joblib

import optuna
from optuna.integration import KerasPruningCallback

import gym
from gym import spaces
from stable_baselines3 import DQN
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.callbacks import BaseCallback

from multiprocessing import Pool, cpu_count
import threading
import time

# Suppress TensorFlow logs beyond errors
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# =============================================================================
# GLOBAL LOCK FOR DQN TRAINING (to force one-at-a-time usage of the shared LSTM)
# =============================================================================
dqn_lock = threading.Lock()

# ============================
# Resource Detection Functions
# ============================
def get_cpu_info():
    cpu_count = psutil.cpu_count(logical=False)          # Physical cores
    cpu_count_logical = psutil.cpu_count(logical=True)   # Logical cores
    cpu_percent = psutil.cpu_percent(interval=1, percpu=True)
    return {
        'physical_cores': cpu_count,
        'logical_cores': cpu_count_logical,
        'cpu_percent': cpu_percent
    }

def get_gpu_info():
    gpus = GPUtil.getGPUs()
    gpu_info = []
    for gpu in gpus:
        gpu_info.append({
            'id': gpu.id,
            'name': gpu.name,
            'load': gpu.load * 100,  # Convert to percentage
            'memory_total': gpu.memoryTotal,
            'memory_used': gpu.memoryUsed,
            'memory_free': gpu.memoryFree,
            'temperature': gpu.temperature
        })
    return gpu_info

def configure_tensorflow(cpu_stats, gpu_stats):
    logical_cores = cpu_stats['logical_cores']
    os.environ["OMP_NUM_THREADS"] = str(logical_cores)
    os.environ["TF_NUM_INTRAOP_THREADS"] = str(logical_cores)
    os.environ["TF_NUM_INTEROP_THREADS"] = str(logical_cores)

    if gpu_stats:
        gpus = tf.config.list_physical_devices('GPU')
        if gpus:
            try:
                for gpu in gpus:
                    tf.config.experimental.set_memory_growth(gpu, True)
                logging.info(f"Enabled memory growth for {len(gpus)} GPU(s).")
            except RuntimeError as e:
                logging.error(f"TensorFlow GPU configuration error: {e}")
    else:
        tf.config.threading.set_intra_op_parallelism_threads(logical_cores)
        tf.config.threading.set_inter_op_parallelism_threads(logical_cores)
        logging.info("Configured TensorFlow to use CPU with optimized thread settings.")

# ============================
# Resource Monitoring Function (Optional)
# ============================
def monitor_resources(interval=60):
    while True:
        cpu = psutil.cpu_percent(interval=1, percpu=True)
        gpu = get_gpu_info()
        logging.info(f"CPU Usage per Core: {cpu}%")
        if gpu:
            for gpu_stat in gpu:
                logging.info(f"GPU {gpu_stat['id']} - {gpu_stat['name']}: Load: {gpu_stat['load']}%, "
                             f"Memory Used: {gpu_stat['memory_used']}MB / {gpu_stat['memory_total']}MB, "
                             f"Temperature: {gpu_stat['temperature']}°C")
        else:
            logging.info("No GPUs detected.")
        logging.info("-" * 50)
        time.sleep(interval)

# ============================
# Data Loading & Technical Indicators
# ============================
def load_data(file_path):
    logging.info(f"Loading data from: {file_path}")
    try:
        df = pd.read_csv(file_path, parse_dates=['time'])
    except FileNotFoundError:
        logging.error(f"File not found: {file_path}")
        sys.exit(1)
    except pd.errors.ParserError as e:
        logging.error(f"Error parsing CSV file: {e}")
        sys.exit(1)
    except Exception as e:
        logging.error(f"Unexpected error: {e}")
        sys.exit(1)

    rename_mapping = {
        'time': 'Date',
        'open': 'Open',
        'high': 'High',
        'low': 'Low',
        'close': 'Close'
    }
    df.rename(columns=rename_mapping, inplace=True)
    logging.info(f"Data columns after renaming: {df.columns.tolist()}")
    df.sort_values('Date', inplace=True)
    df.reset_index(drop=True, inplace=True)
    logging.info("Data loaded and sorted successfully.")
    return df

def compute_rsi(series, window=14):
    delta = series.diff()
    gain = delta.where(delta > 0, 0).rolling(window=window).mean()
    loss = -delta.where(delta < 0, 0).rolling(window=window).mean()
    RS = gain / (loss + 1e-9)
    return 100 - (100 / (1 + RS))

def compute_macd(series, span_short=12, span_long=26, span_signal=9):
    ema_short = series.ewm(span=span_short, adjust=False).mean()
    ema_long = series.ewm(span=span_long, adjust=False).mean()
    macd_line = ema_short - ema_long
    signal_line = macd_line.ewm(span=span_signal, adjust=False).mean()
    return macd_line - signal_line  # histogram

def compute_obv(df):
    signed_volume = (np.sign(df['Close'].diff()) * df['Volume']).fillna(0)
    return signed_volume.cumsum()

def compute_adx(df, window=14):
    df['H-L'] = df['High'] - df['Low']
    df['H-Cp'] = (df['High'] - df['Close'].shift(1)).abs()
    df['L-Cp'] = (df['Low'] - df['Close'].shift(1)).abs()
    tr = df[['H-L','H-Cp','L-Cp']].max(axis=1)
    tr_rolling = tr.rolling(window=window).mean()
    adx_placeholder = tr_rolling / (df['Close'] + 1e-9)
    df.drop(['H-L','H-Cp','L-Cp'], axis=1, inplace=True)
    return adx_placeholder

def compute_bollinger_bands(series, window=20, num_std=2):
    sma = series.rolling(window=window).mean()
    std = series.rolling(window=window).std()
    upper = sma + num_std * std
    lower = sma - num_std * std
    bandwidth = (upper - lower) / (sma + 1e-9)
    return upper, lower, bandwidth

def compute_mfi(df, window=14):
    typical_price = (df['High'] + df['Low'] + df['Close']) / 3
    money_flow = typical_price * df['Volume']
    prev_tp = typical_price.shift(1)
    flow_pos = money_flow.where(typical_price > prev_tp, 0)
    flow_neg = money_flow.where(typical_price < prev_tp, 0)
    pos_sum = flow_pos.rolling(window=window).sum()
    neg_sum = flow_neg.rolling(window=window).sum()
    mfi = 100 - (100 / (1 + pos_sum / (neg_sum + 1e-9)))
    return mfi

def calculate_technical_indicators(df):
    logging.info("Calculating technical indicators...")
    df['RSI'] = compute_rsi(df['Close'], 14)
    df['MACD'] = compute_macd(df['Close'])
    df['OBV'] = compute_obv(df)
    df['ADX'] = compute_adx(df)

    up, lo, bw = compute_bollinger_bands(df['Close'], 20, 2)
    df['BB_Upper'] = up
    df['BB_Lower'] = lo
    df['BB_Width'] = bw

    df['MFI'] = compute_mfi(df, 14)
    df['SMA_5'] = df['Close'].rolling(5).mean()
    df['SMA_10'] = df['Close'].rolling(10).mean()
    df['EMA_5'] = df['Close'].ewm(span=5, adjust=False).mean()
    df['EMA_10'] = df['Close'].ewm(span=10, adjust=False).mean()
    df['STDDEV_5'] = df['Close'].rolling(5).std()

    df.dropna(inplace=True)
    logging.info("Technical indicators calculated successfully.")
    return df

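# Usage sketch for the indicator helpers above (kept as a comment so nothing runs
# at import time; assumes a DataFrame with High/Low/Close/Volume columns):
#
#     df['RSI'] = compute_rsi(df['Close'], window=14)
#     df['MACD'] = compute_macd(df['Close'])  # MACD histogram, not the raw MACD line
#     upper, lower, width = compute_bollinger_bands(df['Close'], window=20, num_std=2)
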
# ============================
# Argument Parsing
# ============================
def parse_arguments():
    parser = argparse.ArgumentParser(description='All-in-One: LSTM + DQN (with LSTM predictions) + Tuning.')
    parser.add_argument('csv_path', type=str,
                        help='Path to CSV data with columns [time, open, high, low, close, volume].')
    parser.add_argument('--lstm_window_size', type=int, default=15,
                        help='Sequence window size for LSTM. Default=15.')
    parser.add_argument('--dqn_total_timesteps', type=int, default=50000,
                        help='Total timesteps to train each DQN candidate. Default=50000.')
    parser.add_argument('--dqn_eval_episodes', type=int, default=1,
                        help='Number of episodes to evaluate DQN in the tuning step. Default=1 (entire dataset once).')
    parser.add_argument('--n_trials_lstm', type=int, default=30,
                        help='Number of Optuna trials for LSTM. Default=30.')
    # The following arguments are no longer used in sequential DQN training:
    parser.add_argument('--n_trials_dqn', type=int, default=20,
                        help='(Unused in sequential DQN training)')
    parser.add_argument('--max_parallel_trials', type=int, default=None,
                        help='(Unused in sequential DQN training)')
    parser.add_argument('--preprocess_workers', type=int, default=None,
                        help='Number of worker processes for data preprocessing. Defaults to (logical cores - 2).')
    parser.add_argument('--monitor_resources', action='store_true',
                        help='Enable real-time resource monitoring.')
    return parser.parse_args()

# ============================
# Custom DQN Callback: Log Actions + Rewards
# ============================
class ActionLoggingCallback(BaseCallback):
    """
    Logs distribution of actions and average reward after each rollout.
    For off-policy (DQN), "rollout" can be a bit different than on-policy,
    but stable-baselines3 still calls `_on_rollout_end` periodically.
    """
    def __init__(self, verbose=0):
        super(ActionLoggingCallback, self).__init__(verbose)
        self.action_buffer = []
        self.reward_buffer = []

    def _on_training_start(self):
        self.action_buffer = []
        self.reward_buffer = []

    def _on_step(self):
        action = self.locals.get('action', None)
        reward = self.locals.get('reward', None)
        if action is not None:
            self.action_buffer.append(action)
        if reward is not None:
            self.reward_buffer.append(reward)
        return True

    def _on_rollout_end(self):
        import numpy as np
        actions = np.array(self.action_buffer)
        rewards = np.array(self.reward_buffer)
        if len(actions) > 0:
            unique, counts = np.unique(actions, return_counts=True)
            total = len(actions)
            distr_str = []
            for act, c in zip(unique, counts):
                distr_str.append(f"Action {act}: {c} times ({100 * c / total:.2f}%)")
            logging.info(" -- DQN Rollout End -- ")
            logging.info("  " + ", ".join(distr_str))
            logging.info(f"  Avg Reward this rollout: {rewards.mean():.4f} (min={rewards.min():.4f}, max={rewards.max():.4f})")
        self.action_buffer = []
        self.reward_buffer = []

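# Usage sketch (comment only): the callback plugs straight into stable-baselines3's
# learn() call, e.g.
#
#     model = DQN('MlpPolicy', vec_env, verbose=0)
#     model.learn(total_timesteps=10000, callback=ActionLoggingCallback(verbose=1))
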
# ============================
# Data Preprocessing with Controlled Parallelization
# ============================
def parallel_feature_engineering(row):
    """
    Placeholder function for feature engineering. Modify as needed.
    Args:
        row (pd.Series): A row from the DataFrame.
    Returns:
        pd.Series: Processed row.
    """
    # Implement any additional feature engineering here if necessary
    return row

def feature_engineering_parallel(df, num_workers):
    """
    Applies feature engineering in parallel using multiprocessing.
    Args:
        df (pd.DataFrame): DataFrame to process.
        num_workers (int): Number of worker processes.
    Returns:
        pd.DataFrame: Processed DataFrame.
    """
    logging.info(f"Starting parallel feature engineering with {num_workers} workers...")
    with Pool(processes=num_workers) as pool:
        processed_rows = pool.map(parallel_feature_engineering, [row for _, row in df.iterrows()])
    df_processed = pd.DataFrame(processed_rows)
    logging.info("Parallel feature engineering completed.")
    return df_processed

# ============================
# LSTM Model Construction & Training (Including Optuna Tuning)
# ============================
def build_lstm(input_shape, hyperparams):
    model = Sequential()
    num_layers = hyperparams['num_lstm_layers']
    units = hyperparams['lstm_units']
    drop = hyperparams['dropout_rate']
    for i in range(num_layers):
        return_seqs = (i < num_layers - 1)
        model.add(Bidirectional(
            LSTM(units, return_sequences=return_seqs, kernel_regularizer=l2(1e-4)),
            input_shape=input_shape if i == 0 else None
        ))
        model.add(Dropout(drop))
    model.add(Dense(1, activation='linear'))

    opt_name = hyperparams['optimizer']
    lr = hyperparams['learning_rate']
    decay = hyperparams['decay']
    if opt_name == 'Adam':
        opt = Adam(learning_rate=lr, decay=decay)
    elif opt_name == 'Nadam':
        opt = Nadam(learning_rate=lr)
    else:
        opt = Adam(learning_rate=lr)

    model.compile(loss=Huber(), optimizer=opt, metrics=['mae'])
    return model

# NOTE: The following lstm_objective is now defined as an inner function in main,
# so that it can access X_train, y_train, X_val, y_val.

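# Usage sketch (comment only; the hyperparameter values are illustrative picks from
# the Optuna search space defined in main()):
#
#     sample_hp = {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.2,
#                  'learning_rate': 1e-3, 'optimizer': 'Adam', 'decay': 1e-6}
#     model = build_lstm(input_shape=(15, 17), hyperparams=sample_hp)  # (window, n_features)
#     model.summary()
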
# ============================
# Custom Gym Environment with LSTM Predictions
# ============================
class StockTradingEnvWithLSTM(gym.Env):
    """
    A custom OpenAI Gym environment for stock trading that integrates LSTM model predictions.
    Observation includes technical indicators, account information, and predicted next close price.
    """
    metadata = {'render.modes': ['human']}

    def __init__(self, df, feature_columns, lstm_model, scaler_features, scaler_target,
                 window_size=15, initial_balance=10000, transaction_cost=0.001):
        super(StockTradingEnvWithLSTM, self).__init__()
        self.df = df.reset_index(drop=True)
        self.feature_columns = feature_columns
        self.lstm_model = lstm_model
        self.scaler_features = scaler_features
        self.scaler_target = scaler_target
        self.window_size = window_size

        self.initial_balance = initial_balance
        self.balance = initial_balance
        self.net_worth = initial_balance
        self.transaction_cost = transaction_cost

        self.max_steps = len(df)
        self.current_step = 0
        self.shares_held = 0
        self.cost_basis = 0

        # Raw array of features
        self.raw_features = df[feature_columns].values

        # Action space: 0=Sell, 1=Hold, 2=Buy
        self.action_space = spaces.Discrete(3)

        # Observation space: [technical indicators, balance, shares, cost_basis, predicted_next_close]
        self.observation_space = spaces.Box(
            low=0, high=1,
            shape=(len(feature_columns) + 3 + 1,),
            dtype=np.float32
        )
        # Forced lock for LSTM predictions
        self.lstm_lock = threading.Lock()

    def reset(self):
        self.balance = self.initial_balance
        self.net_worth = self.initial_balance
        self.current_step = 0
        self.shares_held = 0
        self.cost_basis = 0
        return self._get_obs()

    def _get_obs(self):
        row = self.raw_features[self.current_step]
        row_max = np.max(row) if np.max(row) != 0 else 1.0
        row_norm = row / row_max

        # Account info
        additional = np.array([
            self.balance / self.initial_balance,
            self.shares_held / 100.0,  # Assuming max 100 shares for normalization
            self.cost_basis / (self.initial_balance + 1e-9)
        ], dtype=np.float32)

        # LSTM prediction
        if self.current_step < self.window_size:
            predicted_close = 0.0
        else:
            seq = self.raw_features[self.current_step - self.window_size: self.current_step]
            seq_scaled = self.scaler_features.transform(seq)
            seq_scaled = np.expand_dims(seq_scaled, axis=0)  # shape (1, window_size, #features)
            with self.lstm_lock:
                pred_scaled = self.lstm_model.predict(seq_scaled, verbose=0).flatten()[0]
            pred_scaled = np.clip(pred_scaled, 0, 1)
            unscaled = self.scaler_target.inverse_transform([[pred_scaled]])[0, 0]
            predicted_close = unscaled / 1000.0  # Adjust normalization as needed

        obs = np.concatenate([row_norm, additional, [predicted_close]]).astype(np.float32)
        return obs

    def step(self, action):
        prev_net_worth = self.net_worth
        current_price = self.df.loc[self.current_step, 'Close']

        if action == 2:  # BUY
            shares_bought = int(self.balance // current_price)
            if shares_bought > 0:
                cost = shares_bought * current_price
                fee = cost * self.transaction_cost
                self.balance -= (cost + fee)
                old_shares = self.shares_held
                self.shares_held += shares_bought
                self.cost_basis = ((self.cost_basis * old_shares) + (shares_bought * current_price)) / self.shares_held

        elif action == 0:  # SELL
            if self.shares_held > 0:
                revenue = self.shares_held * current_price
                fee = revenue * self.transaction_cost
                self.balance += (revenue - fee)
                self.shares_held = 0
                self.cost_basis = 0

        self.net_worth = self.balance + self.shares_held * current_price
        self.current_step += 1
        done = (self.current_step >= self.max_steps - 1)
        reward = self.net_worth - self.initial_balance
        obs = self._get_obs()
        return obs, reward, done, {}

    def render(self, mode='human'):
        profit = self.net_worth - self.initial_balance
        print(f"Step: {self.current_step}, Balance={self.balance:.2f}, Shares={self.shares_held}, NetWorth={self.net_worth:.2f}, Profit={profit:.2f}")

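# Smoke-test sketch (comment only; assumes df, feature_columns, a trained lstm_model
# and fitted scalers already exist):
#
#     env = StockTradingEnvWithLSTM(df, feature_columns, lstm_model,
#                                   scaler_features, scaler_target)
#     obs = env.reset()
#     done = False
#     while not done:
#         obs, reward, done, info = env.step(env.action_space.sample())
#     env.render()
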
# ============================
# DQN Training & Evaluation Functions (Sequential Loop)
# ============================
def evaluate_dqn_networth(model, env, n_episodes=1):
    """
    Evaluates the trained DQN model by simulating trading over a specified number of episodes.
    Args:
        model (DQN): Trained DQN model.
        env (gym.Env): Trading environment instance.
        n_episodes (int): Number of episodes to run for evaluation.
    Returns:
        float: Average final net worth across episodes.
    """
    final_net_worths = []
    for _ in range(n_episodes):
        obs = env.reset()
        done = False
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, info = env.step(action)
        final_net_worths.append(env.net_worth)
    return np.mean(final_net_worths)

def train_and_evaluate_dqn(hyperparams, env_params, total_timesteps, eval_episodes):
    """
    Trains a single DQN agent on an environment (using the frozen LSTM) with given hyperparameters,
    then evaluates its final net worth.
    Args:
        hyperparams (dict): Hyperparameters for the DQN model.
        env_params (dict): Parameters to create the StockTradingEnvWithLSTM.
        total_timesteps (int): Total timesteps for training.
        eval_episodes (int): Number of episodes for evaluation.
    Returns:
        agent, final_net_worth
    """
    env = StockTradingEnvWithLSTM(**env_params)
    vec_env = DummyVecEnv([lambda: env])
    with dqn_lock:
        agent = DQN(
            'MlpPolicy',
            vec_env,
            verbose=1,
            learning_rate=hyperparams['lr'],
            gamma=hyperparams['gamma'],
            exploration_fraction=hyperparams['exploration_fraction'],
            buffer_size=hyperparams['buffer_size'],
            batch_size=hyperparams['batch_size'],
            train_freq=4,
            target_update_interval=1000
        )
        agent.learn(total_timesteps=total_timesteps, callback=ActionLoggingCallback(verbose=0))
        final_net_worth = evaluate_dqn_networth(agent, env, n_episodes=eval_episodes)
    return agent, final_net_worth

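# Usage sketch (comment only; env_params is assembled inside main() below and the
# hyperparameter values here are illustrative):
#
#     hp = {'lr': 1e-3, 'gamma': 0.95, 'exploration_fraction': 0.1,
#           'buffer_size': 10000, 'batch_size': 64}
#     agent, net_worth = train_and_evaluate_dqn(hp, env_params,
#                                               total_timesteps=50000, eval_episodes=1)
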
# ============================
# MAIN FUNCTION WITH ENHANCED OPTIMIZATIONS
# ============================
def main():
    args = parse_arguments()
    csv_path = args.csv_path
    lstm_window_size = args.lstm_window_size
    dqn_total_timesteps = args.dqn_total_timesteps
    dqn_eval_episodes = args.dqn_eval_episodes
    n_trials_lstm = args.n_trials_lstm
    preprocess_workers = args.preprocess_workers
    enable_resource_monitor = args.monitor_resources

    # -----------------------------
    # Setup Logging
    # -----------------------------
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s - %(levelname)s - %(message)s',
                        handlers=[logging.FileHandler("LSTMDQN.log"), logging.StreamHandler(sys.stdout)])

    # -----------------------------
    # Resource Detection & Logging
    # -----------------------------
    cpu_stats = get_cpu_info()
    gpu_stats = get_gpu_info()

    logging.info("===== Resource Statistics =====")
    logging.info(f"Physical CPU Cores: {cpu_stats['physical_cores']}")
    logging.info(f"Logical CPU Cores: {cpu_stats['logical_cores']}")
    logging.info(f"CPU Usage per Core: {cpu_stats['cpu_percent']}%")
    if gpu_stats:
        logging.info("GPU Statistics:")
        for gpu in gpu_stats:
            logging.info(f"GPU {gpu['id']} - {gpu['name']}: Load: {gpu['load']}%, Memory Used: {gpu['memory_used']}MB / {gpu['memory_total']}MB, Temperature: {gpu['temperature']}°C")
    else:
        logging.info("No GPUs detected.")
    logging.info("=================================")

    # -----------------------------
    # Configure TensorFlow
    # -----------------------------
    configure_tensorflow(cpu_stats, gpu_stats)

    # -----------------------------
    # Start Resource Monitoring (Optional)
    # -----------------------------
    if enable_resource_monitor:
        logging.info("Starting real-time resource monitoring...")
        resource_monitor_thread = threading.Thread(target=monitor_resources, args=(60,), daemon=True)
        resource_monitor_thread.start()

    ##########################################
    # A) LSTM PART: LOAD, PREPROCESS, TUNE
    ##########################################
    # 1) LOAD & preprocess
    df = load_data(csv_path)
    df = calculate_technical_indicators(df)

    feature_columns = [
        'SMA_5','SMA_10','EMA_5','EMA_10','STDDEV_5',
        'RSI','MACD','ADX','OBV','Volume','Open','High','Low',
        'BB_Upper','BB_Lower','BB_Width','MFI'
    ]
    target_column = 'Close'
    df = df[['Date'] + feature_columns + [target_column]].dropna()

    # 2) Controlled Parallel Data Preprocessing
    if preprocess_workers is None:
        preprocess_workers = max(1, cpu_stats['logical_cores'] - 2)
    else:
        preprocess_workers = min(preprocess_workers, cpu_stats['logical_cores'])
    df = feature_engineering_parallel(df, num_workers=preprocess_workers)

    scaler_features = MinMaxScaler()
    scaler_target = MinMaxScaler()

    X_all = df[feature_columns].values
    y_all = df[[target_column]].values

    X_scaled = scaler_features.fit_transform(X_all)
    y_scaled = scaler_target.fit_transform(y_all).flatten()

    # 3) Create sequences for LSTM
    def create_sequences(features, target, window_size):
        X_seq, y_seq = [], []
        for i in range(len(features) - window_size):
            X_seq.append(features[i:i+window_size])
            y_seq.append(target[i+window_size])
        return np.array(X_seq), np.array(y_seq)

    X, y = create_sequences(X_scaled, y_scaled, lstm_window_size)

    # 4) Split into train/val/test
    train_size = int(len(X) * 0.7)
    val_size = int(len(X) * 0.15)
    test_size = len(X) - train_size - val_size

    X_train, y_train = X[:train_size], y[:train_size]
    X_val, y_val = X[train_size: train_size + val_size], y[train_size: train_size + val_size]
    X_test, y_test = X[train_size + val_size:], y[train_size + val_size:]

    logging.info(f"Scaled training features shape: {X_train.shape}")
    logging.info(f"Scaled validation features shape: {X_val.shape}")
    logging.info(f"Scaled testing features shape: {X_test.shape}")
    logging.info(f"Scaled training target shape: {y_train.shape}")
    logging.info(f"Scaled validation target shape: {y_val.shape}")
    logging.info(f"Scaled testing target shape: {y_test.shape}")

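    # Worked example of create_sequences above (illustrative numbers): with
    # window_size=3 and 10 scaled rows, X has shape (7, 3, n_features) and y has
    # shape (7,): each y[i] is the target one step past the i-th window, which is
    # why len(X) == len(features) - window_size.
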
    # 5) Define the LSTM objective function here (so it has access to X_train, y_train, X_val, y_val)
    def lstm_objective(trial):
        num_lstm_layers = trial.suggest_int('num_lstm_layers', 1, 3)
        lstm_units = trial.suggest_categorical('lstm_units', [32, 64, 96, 128])
        dropout_rate = trial.suggest_float('dropout_rate', 0.1, 0.5)
        learning_rate = trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True)
        optimizer_name = trial.suggest_categorical('optimizer', ['Adam', 'Nadam'])
        decay = trial.suggest_float('decay', 0.0, 1e-4)

        hyperparams = {
            'num_lstm_layers': num_lstm_layers,
            'lstm_units': lstm_units,
            'dropout_rate': dropout_rate,
            'learning_rate': learning_rate,
            'optimizer': optimizer_name,
            'decay': decay
        }

        model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
        early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
        lr_reduce = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6)
        cb_prune = KerasPruningCallback(trial, 'val_loss')

        history = model_.fit(
            X_train, y_train,
            epochs=100,
            batch_size=16,
            validation_data=(X_val, y_val),
            callbacks=[early_stop, lr_reduce, cb_prune],
            verbose=0
        )
        val_mae = min(history.history['val_mae'])
        return val_mae

    # 6) Hyperparameter Optimization with Optuna for the LSTM
    logging.info(f"Starting LSTM hyperparameter optimization with Optuna using {cpu_stats['logical_cores']-2} parallel trials...")
    study_lstm = optuna.create_study(direction='minimize')
    study_lstm.optimize(lstm_objective, n_trials=n_trials_lstm, n_jobs=cpu_stats['logical_cores']-2)
    best_lstm_params = study_lstm.best_params
    logging.info(f"Best LSTM Hyperparameters: {best_lstm_params}")

    # 7) Train final LSTM with best hyperparameters
    final_lstm = build_lstm((X_train.shape[1], X_train.shape[2]), best_lstm_params)
    early_stop_final = EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)
    lr_reduce_final = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6)
    logging.info("Training best LSTM model with optimized hyperparameters...")
    hist = final_lstm.fit(
        X_train, y_train,
        epochs=300,
        batch_size=16,
        validation_data=(X_val, y_val),
        callbacks=[early_stop_final, lr_reduce_final],
        verbose=1
    )

    # 8) Evaluate LSTM
    def evaluate_final_lstm(model, X_test, y_test):
        logging.info("Evaluating final LSTM model...")
        y_pred_scaled = model.predict(X_test).flatten()
        y_pred_scaled = np.clip(y_pred_scaled, 0, 1)
        y_pred = scaler_target.inverse_transform(y_pred_scaled.reshape(-1, 1)).flatten()
        y_test_actual = scaler_target.inverse_transform(y_test.reshape(-1, 1)).flatten()

        mse_ = mean_squared_error(y_test_actual, y_pred)
        rmse_ = np.sqrt(mse_)
        mae_ = mean_absolute_error(y_test_actual, y_pred)
        r2_ = r2_score(y_test_actual, y_pred)

        direction_actual = np.sign(np.diff(y_test_actual))
        direction_pred = np.sign(np.diff(y_pred))
        directional_accuracy = np.mean(direction_actual == direction_pred)

        logging.info(f"Test MSE: {mse_:.4f}")
        logging.info(f"Test RMSE: {rmse_:.4f}")
        logging.info(f"Test MAE: {mae_:.4f}")
        logging.info(f"Test R2 Score: {r2_:.4f}")
        logging.info(f"Directional Accuracy: {directional_accuracy:.4f}")

        plt.figure(figsize=(14, 7))
        plt.plot(y_test_actual, label='Actual Price')
        plt.plot(y_pred, label='Predicted Price')
        plt.title('LSTM: Actual vs Predicted Closing Prices')
        plt.legend()
        plt.grid(True)
        plt.savefig('lstm_actual_vs_pred.png')
        plt.close()

        table = []
        limit = min(40, len(y_test_actual))
        for i in range(limit):
            table.append([i, round(y_test_actual[i], 2), round(y_pred[i], 2)])
        headers = ["Index", "Actual Price", "Predicted Price"]
        print("\nFirst 40 Actual vs. Predicted Prices:")
        print(tabulate(table, headers=headers, tablefmt="pretty"))
        return r2_, directional_accuracy

    _r2, _diracc = evaluate_final_lstm(final_lstm, X_test, y_test)

    # 9) Save final LSTM model and scalers
    final_lstm.save('best_lstm_model.h5')
    joblib.dump(scaler_features, 'scaler_features.pkl')
    joblib.dump(scaler_target, 'scaler_target.pkl')
    logging.info("Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).")

    ############################################################
    # B) DQN PART: BUILD ENV THAT USES THE FROZEN LSTM + FORECAST
    ############################################################
    # (StockTradingEnvWithLSTM is defined above)

    ###################################
    # C) SEQUENTIAL DQN TRAINING WITH LSTM INTEGRATION
    ###################################
    env_params = {
        'df': df,
        'feature_columns': feature_columns,
        'lstm_model': final_lstm,  # Use the frozen, best LSTM model
        'scaler_features': scaler_features,
        'scaler_target': scaler_target,
        'window_size': lstm_window_size,
        'initial_balance': 10000,
        'transaction_cost': 0.001
    }

    # Base DQN hyperparameters (adjust as needed)
    base_hyperparams = {
        'lr': 1e-3,
        'gamma': 0.95,
        'exploration_fraction': 0.1,
        'buffer_size': 10000,
        'batch_size': 64
    }

    # Define performance threshold (final net worth must be above this)
    PERFORMANCE_THRESHOLD = 10500.0
    current_hyperparams = base_hyperparams.copy()
    max_attempts = 10
    best_agent = None

    for attempt in range(max_attempts):
        logging.info(f"Training DQN agent: Attempt {attempt+1} with hyperparameters: {current_hyperparams}")
        agent, net_worth = train_and_evaluate_dqn(current_hyperparams, env_params,
                                                  total_timesteps=dqn_total_timesteps,
                                                  eval_episodes=dqn_eval_episodes)
        logging.info(f"Agent achieved final net worth: ${net_worth:.2f}")
        if net_worth >= PERFORMANCE_THRESHOLD:
            logging.info("Agent meets performance criteria!")
            best_agent = agent
            best_agent.save("best_dqn_model_lstm.zip")
            break
        else:
            logging.info("Performance below threshold. Adjusting hyperparameters and retrying...")
            current_hyperparams['lr'] *= 0.9  # decrease learning rate by 10%
            current_hyperparams['exploration_fraction'] = min(current_hyperparams['exploration_fraction'] + 0.02, 0.3)

    if best_agent is None:
        logging.warning("Failed to train a satisfactory DQN agent after multiple attempts. Using last trained model")
        best_agent = agent
    else:
        logging.info("Final DQN agent trained and saved.")

###################################
|
||||||
|
# D) FINAL INFERENCE & LOG RESULTS
|
||||||
|
###################################
|
||||||
|
logging.info("Running final inference with the trained DQN model...")
|
||||||
|
|
||||||
|
env_test = StockTradingEnvWithLSTM(**env_params)
|
||||||
|
obs = env_test.reset()
|
||||||
|
done = False
|
||||||
|
total_reward = 0.0
|
||||||
|
step_data = []
|
||||||
|
step_count = 0
|
||||||
|
|
||||||
|
while not done:
|
||||||
|
step_count += 1
|
||||||
|
action, _ = best_agent.predict(obs, deterministic=True)
|
||||||
|
obs, reward, done, info = env_test.step(action)
|
||||||
|
total_reward += reward
|
||||||
|
step_data.append({
|
||||||
|
"Step": step_count,
|
||||||
|
"Action": int(action),
|
||||||
|
"Reward": reward,
|
||||||
|
"Balance": env_test.balance,
|
||||||
|
"Shares": env_test.shares_held,
|
||||||
|
"NetWorth": env_test.net_worth
|
||||||
|
})
|
||||||
|
|
||||||
|
final_net_worth = env_test.net_worth
|
||||||
|
final_profit = final_net_worth - env_test.initial_balance
|
||||||
|
|
||||||
|
print("\n=== Final DQN Inference ===")
|
||||||
|
print(f"Total Steps: {step_count}")
|
||||||
|
print(f"Final Net Worth: {final_net_worth:.2f}")
|
||||||
|
print(f"Final Profit: {final_profit:.2f}")
|
||||||
|
print(f"Sum of Rewards: {total_reward:.2f}")
|
||||||
|
|
||||||
|
buy_count = sum(1 for x in step_data if x["Action"] == 2)
|
||||||
|
sell_count = sum(1 for x in step_data if x["Action"] == 0)
|
||||||
|
hold_count = sum(1 for x in step_data if x["Action"] == 1)
|
||||||
|
print(f"Actions Taken -> BUY: {buy_count}, SELL: {sell_count}, HOLD: {hold_count}")
|
||||||
|
|
||||||
|
# Show last 15 steps
|
||||||
|
last_n = step_data[-15:] if len(step_data) > 15 else step_data
|
||||||
|
rows = []
|
||||||
|
for d in last_n:
|
||||||
|
rows.append([
|
||||||
|
d["Step"],
|
||||||
|
d["Action"],
|
||||||
|
f"{d['Reward']:.2f}",
|
||||||
|
f"{d['Balance']:.2f}",
|
||||||
|
d["Shares"],
|
||||||
|
f"{d['NetWorth']:.2f}"
|
||||||
|
])
|
||||||
|
headers = ["Step", "Action", "Reward", "Balance", "Shares", "NetWorth"]
|
||||||
|
print(f"\n== Last 15 Steps ==")
|
||||||
|
print(tabulate(rows, headers=headers, tablefmt="pretty"))
|
||||||
|
|
||||||
|
logging.info("Final inference completed. Results logged and displayed.")
|
    ###################################
    # E) OPTIONAL: RETRY LOOP IF NET WORTH < THRESHOLD
    ###################################
    if final_net_worth < PERFORMANCE_THRESHOLD:
        logging.warning(f"Final net worth (${final_net_worth:.2f}) is below ${PERFORMANCE_THRESHOLD:.2f}. Retraining the same DQN model to learn from mistakes...")

        additional_timesteps = 50000
        logging.info(f"Retraining the existing DQN model for an additional {additional_timesteps} timesteps (keeping old experiences).")
        best_agent.learn(
            total_timesteps=additional_timesteps,
            reset_num_timesteps=False,  # keep the internal timestep counter; the replay buffer already persists on the model object
            callback=ActionLoggingCallback(verbose=1)
        )

        obs = env_test.reset()
        done = False
        second_total_reward = 0.0
        while not done:
            action, _ = best_agent.predict(obs, deterministic=True)
            obs, reward, done, info = env_test.step(action)
            second_total_reward += reward

        second_net_worth = env_test.net_worth
        second_profit = second_net_worth - env_test.initial_balance
        logging.info(f"After additional training, new final net worth=${second_net_worth:.2f}, profit=${second_profit:.2f}")

        if second_net_worth < PERFORMANCE_THRESHOLD:
            logging.warning("Even after continued training, net worth is still below threshold. Consider a deeper hyperparameter search or analyzing the environment settings.")


if __name__ == "__main__":
    main()
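For readers of this diff: the retry loop above calls a train_and_evaluate_dqn helper defined earlier in LSTMDQN.py and not shown here. A minimal sketch of a helper consistent with the call site, assuming Stable-Baselines3's DQN and the old Gym step API used elsewhere in this script; the body below is illustrative, not the committed implementation:

from stable_baselines3 import DQN

def train_and_evaluate_dqn(hyperparams, env_params, total_timesteps, eval_episodes):
    """Train a DQN on the LSTM-augmented trading env; return (model, avg final net worth)."""
    env = StockTradingEnvWithLSTM(**env_params)
    model = DQN(
        "MlpPolicy",
        env,
        learning_rate=hyperparams['lr'],
        gamma=hyperparams['gamma'],
        exploration_fraction=hyperparams['exploration_fraction'],
        buffer_size=hyperparams['buffer_size'],
        batch_size=hyperparams['batch_size'],
        verbose=0,
    )
    model.learn(total_timesteps=total_timesteps)

    # Average the final net worth over a few deterministic evaluation episodes.
    worths = []
    for _ in range(eval_episodes):
        obs = env.reset()
        done = False
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, info = env.step(action)
        worths.append(env.net_worth)
    return model, sum(worths) / len(worths)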
BIN  src/Machine-Learning/LSTM-python/src/best_dqn_model_lstm.zip  (new file)
Binary file not shown.
324  src/Machine-Learning/LSTM-python/src/file.txt  (new file)
@@ -0,0 +1,324 @@
(venv) kleinpanic@kleinpanic:~/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src$ py LSTMDQN.py BAT.csv
2025-01-31 00:33:45.402617: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1738283625.423731  635164 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1738283625.430264  635164 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-31 00:33:45.451539: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-31 00:33:51,246 - INFO - ===== Resource Statistics =====
2025-01-31 00:33:51,246 - INFO - Physical CPU Cores: 28
2025-01-31 00:33:51,246 - INFO - Logical CPU Cores: 56
2025-01-31 00:33:51,246 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-01-31 00:33:51,246 - INFO - No GPUs detected.
2025-01-31 00:33:51,247 - INFO - =================================
2025-01-31 00:33:51,247 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-01-31 00:33:51,247 - INFO - Loading data from: BAT.csv
2025-01-31 00:33:52,623 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-01-31 00:33:52,640 - INFO - Data loaded and sorted successfully.
2025-01-31 00:33:52,640 - INFO - Calculating technical indicators...
2025-01-31 00:33:52,680 - INFO - Technical indicators calculated successfully.
2025-01-31 00:33:52,690 - INFO - Starting parallel feature engineering with 54 workers...
2025-01-31 00:34:02,440 - INFO - Parallel feature engineering completed.
2025-01-31 00:34:02,527 - INFO - Scaled training features shape: (14134, 15, 17)
2025-01-31 00:34:02,527 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-01-31 00:34:02,527 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-01-31 00:34:02,527 - INFO - Scaled training target shape: (14134,)
2025-01-31 00:34:02,527 - INFO - Scaled validation target shape: (3028,)
2025-01-31 00:34:02,527 - INFO - Scaled testing target shape: (3030,)
2025-01-31 00:34:02,527 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
[I 2025-01-31 00:34:02,528] A new study created in memory with name: no-name-30abc2af-0d5d-4afc-9e51-0e6ab5344277
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py:487: FutureWarning: suggest_loguniform has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use suggest_float(..., log=True) instead.
  learning_rate = trial.suggest_loguniform('learning_rate', 1e-5, 1e-2)
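The FutureWarning above names its own fix. A drop-in replacement for the deprecated call at LSTMDQN.py:487, per the Optuna 3.x release notes the warning links to (the search space is unchanged):

# Deprecated since Optuna 3.0, removed in 6.0:
learning_rate = trial.suggest_loguniform('learning_rate', 1e-5, 1e-2)
# Equivalent log-uniform suggestion with the current API:
learning_rate = trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True)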
2025-01-31 00:34:02.545693: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/keras/src/layers/rnn/bidirectional.py:107: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(**kwargs)
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/keras/src/optimizers/base_optimizer.py:86: UserWarning: Argument `decay` is no longer supported and will be ignored.
  warnings.warn(
[I 2025-01-31 01:47:50,865] Trial 25 finished with value: 0.0044469027779996395 and parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.41552068050266755, 'learning_rate': 0.0020464230384217887, 'optimizer': 'Nadam', 'decay': 4.152362979808315e-05}. Best is trial 25 with value: 0.0044469027779996395.
[I 2025-01-31 01:51:53,458] Trial 1 finished with value: 0.004896007943898439 and parameters: {'num_lstm_layers': 2, 'lstm_units': 32, 'dropout_rate': 0.26347160232211786, 'learning_rate': 0.005618445438864423, 'optimizer': 'Adam', 'decay': 9.002232128681866e-05}. Best is trial 25 with value: 0.0044469027779996395.
[I 2025-01-31 01:52:07,955] Trial 13 finished with value: 0.004379551392048597 and parameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.1879612031755749, 'learning_rate': 0.00045486151574373985, 'optimizer': 'Adam', 'decay': 7.841076864183645e-05}. Best is trial 13 with value: 0.004379551392048597.
[I 2025-01-31 01:56:35,039] Trial 2 finished with value: 0.0035048779100179672 and parameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.18042015532719258, 'learning_rate': 0.008263668593877975, 'optimizer': 'Nadam', 'decay': 7.065697336348234e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 01:59:49,276] Trial 8 finished with value: 0.004185597877949476 and parameters: {'num_lstm_layers': 2, 'lstm_units': 96, 'dropout_rate': 0.1225129824590411, 'learning_rate': 0.0032993925521966573, 'optimizer': 'Adam', 'decay': 7.453500347854662e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 01:59:49,666] Trial 12 pruned. Trial was pruned at epoch 61.
[I 2025-01-31 01:59:57,670] Trial 7 pruned. Trial was pruned at epoch 60.
[I 2025-01-31 02:00:00,145] Trial 6 pruned. Trial was pruned at epoch 48.
[I 2025-01-31 02:00:02,845] Trial 0 pruned. Trial was pruned at epoch 53.
[I 2025-01-31 02:00:03,464] Trial 14 pruned. Trial was pruned at epoch 52.
[I 2025-01-31 02:00:08,618] Trial 20 pruned. Trial was pruned at epoch 41.
[I 2025-01-31 02:00:09,918] Trial 18 pruned. Trial was pruned at epoch 57.
[I 2025-01-31 02:00:18,111] Trial 11 pruned. Trial was pruned at epoch 48.
[I 2025-01-31 02:00:18,175] Trial 24 pruned. Trial was pruned at epoch 70.
[I 2025-01-31 02:00:24,035] Trial 19 pruned. Trial was pruned at epoch 71.
[I 2025-01-31 02:00:25,349] Trial 15 pruned. Trial was pruned at epoch 61.
[I 2025-01-31 02:00:28,094] Trial 21 pruned. Trial was pruned at epoch 53.
[I 2025-01-31 02:00:30,582] Trial 27 pruned. Trial was pruned at epoch 70.
[I 2025-01-31 02:00:34,584] Trial 16 pruned. Trial was pruned at epoch 54.
[I 2025-01-31 02:00:36,311] Trial 4 pruned. Trial was pruned at epoch 41.
[I 2025-01-31 02:00:36,943] Trial 10 pruned. Trial was pruned at epoch 58.
[I 2025-01-31 02:00:41,876] Trial 26 pruned. Trial was pruned at epoch 54.
[I 2025-01-31 02:00:42,253] Trial 5 pruned. Trial was pruned at epoch 54.
[I 2025-01-31 02:00:42,354] Trial 22 pruned. Trial was pruned at epoch 54.
[I 2025-01-31 02:01:21,394] Trial 17 pruned. Trial was pruned at epoch 63.
[I 2025-01-31 02:02:27,396] Trial 28 finished with value: 0.005718659609556198 and parameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.256096112829434, 'learning_rate': 1.7863513392726302e-05, 'optimizer': 'Nadam', 'decay': 4.8981982638899195e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 02:04:43,158] Trial 9 finished with value: 0.004240941721946001 and parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.13786769624978978, 'learning_rate': 0.00038368722697235065, 'optimizer': 'Nadam', 'decay': 5.219728457137628e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 02:04:47,356] Trial 29 pruned. Trial was pruned at epoch 89.
[I 2025-01-31 02:04:58,802] Trial 23 finished with value: 0.004438518546521664 and parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.10170042323024542, 'learning_rate': 2.1295423006302236e-05, 'optimizer': 'Nadam', 'decay': 1.9256711241510017e-05}. Best is trial 2 with value: 0.0035048779100179672.
[I 2025-01-31 02:07:22,581] Trial 3 finished with value: 0.004468627739697695 and parameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.2941741845859971, 'learning_rate': 0.00015534552759452507, 'optimizer': 'Adam', 'decay': 3.964121547616277e-05}. Best is trial 2 with value: 0.0035048779100179672.
2025-01-31 02:07:22,583 - INFO - Best LSTM Hyperparameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.18042015532719258, 'learning_rate': 0.008263668593877975, 'optimizer': 'Nadam', 'decay': 7.065697336348234e-05}
2025-01-31 02:07:22,887 - INFO - Training best LSTM model with optimized hyperparameters...
Epoch 1/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 22s 21ms/step - loss: 0.0176 - mae: 0.0468 - val_loss: 3.7775e-04 - val_mae: 0.0096 - learning_rate: 0.0083
Epoch 2/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 5.1259e-04 - mae: 0.0169 - val_loss: 5.0930e-04 - val_mae: 0.0269 - learning_rate: 0.0083
Epoch 3/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.5887e-04 - mae: 0.0161 - val_loss: 1.2987e-04 - val_mae: 0.0054 - learning_rate: 0.0083
Epoch 4/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.4157e-04 - mae: 0.0157 - val_loss: 1.4855e-04 - val_mae: 0.0068 - learning_rate: 0.0083
Epoch 5/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.3388e-04 - mae: 0.0151 - val_loss: 1.2859e-04 - val_mae: 0.0064 - learning_rate: 0.0083
Epoch 6/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.3468e-04 - mae: 0.0153 - val_loss: 1.3908e-04 - val_mae: 0.0086 - learning_rate: 0.0083
Epoch 7/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 3.0007e-04 - mae: 0.0139 - val_loss: 1.4985e-04 - val_mae: 0.0053 - learning_rate: 0.0083
Epoch 8/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 2.8960e-04 - mae: 0.0145 - val_loss: 1.0344e-04 - val_mae: 0.0059 - learning_rate: 0.0083
Epoch 9/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.8612e-04 - mae: 0.0107 - val_loss: 1.2100e-04 - val_mae: 0.0089 - learning_rate: 0.0041
Epoch 10/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.8470e-04 - mae: 0.0113 - val_loss: 1.4217e-04 - val_mae: 0.0115 - learning_rate: 0.0041
Epoch 11/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.8239e-04 - mae: 0.0113 - val_loss: 7.3773e-05 - val_mae: 0.0052 - learning_rate: 0.0041
Epoch 12/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.8133e-04 - mae: 0.0113 - val_loss: 8.1284e-05 - val_mae: 0.0063 - learning_rate: 0.0041
Epoch 13/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.6633e-04 - mae: 0.0107 - val_loss: 1.0878e-04 - val_mae: 0.0099 - learning_rate: 0.0041
Epoch 14/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.2703e-04 - mae: 0.0095 - val_loss: 8.1036e-05 - val_mae: 0.0085 - learning_rate: 0.0021
Epoch 15/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.2520e-04 - mae: 0.0093 - val_loss: 6.9320e-05 - val_mae: 0.0073 - learning_rate: 0.0021
Epoch 16/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.2067e-04 - mae: 0.0092 - val_loss: 5.2056e-05 - val_mae: 0.0046 - learning_rate: 0.0021
Epoch 17/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.1662e-04 - mae: 0.0092 - val_loss: 8.4469e-05 - val_mae: 0.0092 - learning_rate: 0.0021
Epoch 18/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 1.1402e-04 - mae: 0.0092 - val_loss: 5.3823e-05 - val_mae: 0.0040 - learning_rate: 0.0021
Epoch 19/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 9.8560e-05 - mae: 0.0083 - val_loss: 4.5592e-05 - val_mae: 0.0051 - learning_rate: 0.0010
Epoch 20/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 9.8293e-05 - mae: 0.0082 - val_loss: 4.5364e-05 - val_mae: 0.0049 - learning_rate: 0.0010
Epoch 21/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 9.5821e-05 - mae: 0.0083 - val_loss: 4.0955e-05 - val_mae: 0.0042 - learning_rate: 0.0010
Epoch 22/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.5071e-05 - mae: 0.0079 - val_loss: 3.6926e-05 - val_mae: 0.0038 - learning_rate: 0.0010
Epoch 23/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 9.3654e-05 - mae: 0.0081 - val_loss: 4.7498e-05 - val_mae: 0.0061 - learning_rate: 0.0010
Epoch 24/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.7295e-05 - mae: 0.0076 - val_loss: 3.5652e-05 - val_mae: 0.0039 - learning_rate: 5.1648e-04
Epoch 25/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.1205e-05 - mae: 0.0077 - val_loss: 3.5340e-05 - val_mae: 0.0040 - learning_rate: 5.1648e-04
Epoch 26/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.9519e-05 - mae: 0.0076 - val_loss: 3.3783e-05 - val_mae: 0.0038 - learning_rate: 5.1648e-04
Epoch 27/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.3218e-05 - mae: 0.0078 - val_loss: 3.3893e-05 - val_mae: 0.0039 - learning_rate: 5.1648e-04
Epoch 28/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.2856e-05 - mae: 0.0078 - val_loss: 3.7778e-05 - val_mae: 0.0045 - learning_rate: 5.1648e-04
Epoch 29/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.1744e-05 - mae: 0.0076 - val_loss: 3.1605e-05 - val_mae: 0.0038 - learning_rate: 2.5824e-04
Epoch 30/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.3165e-05 - mae: 0.0072 - val_loss: 3.1850e-05 - val_mae: 0.0038 - learning_rate: 2.5824e-04
Epoch 31/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.4117e-05 - mae: 0.0073 - val_loss: 3.1598e-05 - val_mae: 0.0038 - learning_rate: 2.5824e-04
Epoch 32/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.8020e-05 - mae: 0.0076 - val_loss: 3.8364e-05 - val_mae: 0.0048 - learning_rate: 2.5824e-04
Epoch 33/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.2452e-05 - mae: 0.0073 - val_loss: 4.1319e-05 - val_mae: 0.0053 - learning_rate: 2.5824e-04
Epoch 34/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.9039e-05 - mae: 0.0071 - val_loss: 3.2345e-05 - val_mae: 0.0041 - learning_rate: 1.2912e-04
Epoch 35/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0146e-05 - mae: 0.0072 - val_loss: 3.3009e-05 - val_mae: 0.0042 - learning_rate: 1.2912e-04
Epoch 36/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0245e-05 - mae: 0.0071 - val_loss: 3.1106e-05 - val_mae: 0.0041 - learning_rate: 1.2912e-04
Epoch 37/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0257e-05 - mae: 0.0072 - val_loss: 3.1513e-05 - val_mae: 0.0040 - learning_rate: 1.2912e-04
Epoch 38/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.8350e-05 - mae: 0.0070 - val_loss: 3.0209e-05 - val_mae: 0.0039 - learning_rate: 1.2912e-04
Epoch 39/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 8.3547e-05 - mae: 0.0072 - val_loss: 3.0854e-05 - val_mae: 0.0040 - learning_rate: 6.4560e-05
Epoch 40/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.2400e-05 - mae: 0.0071 - val_loss: 2.9529e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 41/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4073e-05 - mae: 0.0069 - val_loss: 2.9258e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 42/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.5838e-05 - mae: 0.0070 - val_loss: 2.9054e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 43/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0313e-05 - mae: 0.0070 - val_loss: 2.9163e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 44/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0101e-05 - mae: 0.0071 - val_loss: 2.8841e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 45/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.8816e-05 - mae: 0.0071 - val_loss: 2.8675e-05 - val_mae: 0.0037 - learning_rate: 6.4560e-05
Epoch 46/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4251e-05 - mae: 0.0069 - val_loss: 2.8767e-05 - val_mae: 0.0037 - learning_rate: 3.2280e-05
Epoch 47/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.3158e-05 - mae: 0.0069 - val_loss: 2.9648e-05 - val_mae: 0.0038 - learning_rate: 3.2280e-05
Epoch 48/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4270e-05 - mae: 0.0069 - val_loss: 2.8902e-05 - val_mae: 0.0037 - learning_rate: 3.2280e-05
Epoch 49/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2356e-05 - mae: 0.0068 - val_loss: 2.9181e-05 - val_mae: 0.0038 - learning_rate: 3.2280e-05
Epoch 50/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.6547e-05 - mae: 0.0069 - val_loss: 2.8695e-05 - val_mae: 0.0037 - learning_rate: 3.2280e-05
Epoch 51/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.0234e-05 - mae: 0.0067 - val_loss: 2.9130e-05 - val_mae: 0.0038 - learning_rate: 1.6140e-05
Epoch 52/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.3895e-05 - mae: 0.0069 - val_loss: 2.8748e-05 - val_mae: 0.0037 - learning_rate: 1.6140e-05
Epoch 53/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2657e-05 - mae: 0.0068 - val_loss: 2.9734e-05 - val_mae: 0.0039 - learning_rate: 1.6140e-05
Epoch 54/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.9419e-05 - mae: 0.0068 - val_loss: 2.8744e-05 - val_mae: 0.0037 - learning_rate: 1.6140e-05
Epoch 55/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.0539e-05 - mae: 0.0068 - val_loss: 2.8263e-05 - val_mae: 0.0037 - learning_rate: 1.6140e-05
Epoch 56/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0298e-05 - mae: 0.0068 - val_loss: 2.9675e-05 - val_mae: 0.0039 - learning_rate: 8.0700e-06
Epoch 57/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4799e-05 - mae: 0.0067 - val_loss: 2.9589e-05 - val_mae: 0.0039 - learning_rate: 8.0700e-06
Epoch 58/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.7056e-05 - mae: 0.0069 - val_loss: 2.8803e-05 - val_mae: 0.0037 - learning_rate: 8.0700e-06
Epoch 59/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.3120e-05 - mae: 0.0068 - val_loss: 2.9058e-05 - val_mae: 0.0038 - learning_rate: 8.0700e-06
Epoch 60/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.5512e-05 - mae: 0.0069 - val_loss: 2.9056e-05 - val_mae: 0.0038 - learning_rate: 8.0700e-06
Epoch 61/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.6107e-05 - mae: 0.0068 - val_loss: 2.9655e-05 - val_mae: 0.0039 - learning_rate: 4.0350e-06
Epoch 62/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.1988e-05 - mae: 0.0068 - val_loss: 2.9478e-05 - val_mae: 0.0039 - learning_rate: 4.0350e-06
Epoch 63/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.4365e-05 - mae: 0.0068 - val_loss: 2.9044e-05 - val_mae: 0.0038 - learning_rate: 4.0350e-06
Epoch 64/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.8052e-05 - mae: 0.0068 - val_loss: 2.9246e-05 - val_mae: 0.0038 - learning_rate: 4.0350e-06
Epoch 65/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2301e-05 - mae: 0.0068 - val_loss: 2.8845e-05 - val_mae: 0.0038 - learning_rate: 4.0350e-06
Epoch 66/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.3675e-05 - mae: 0.0069 - val_loss: 2.9359e-05 - val_mae: 0.0038 - learning_rate: 2.0175e-06
Epoch 67/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2341e-05 - mae: 0.0068 - val_loss: 2.8623e-05 - val_mae: 0.0037 - learning_rate: 2.0175e-06
Epoch 68/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.3251e-05 - mae: 0.0069 - val_loss: 2.9224e-05 - val_mae: 0.0038 - learning_rate: 2.0175e-06
Epoch 69/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.5100e-05 - mae: 0.0069 - val_loss: 2.8827e-05 - val_mae: 0.0038 - learning_rate: 2.0175e-06
Epoch 70/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.6786e-05 - mae: 0.0068 - val_loss: 2.8537e-05 - val_mae: 0.0037 - learning_rate: 2.0175e-06
Epoch 71/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.9285e-05 - mae: 0.0069 - val_loss: 2.8691e-05 - val_mae: 0.0037 - learning_rate: 1.0087e-06
Epoch 72/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.2495e-05 - mae: 0.0068 - val_loss: 2.8928e-05 - val_mae: 0.0038 - learning_rate: 1.0087e-06
Epoch 73/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.3063e-05 - mae: 0.0068 - val_loss: 2.8745e-05 - val_mae: 0.0038 - learning_rate: 1.0087e-06
Epoch 74/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 7.0096e-05 - mae: 0.0069 - val_loss: 2.8655e-05 - val_mae: 0.0037 - learning_rate: 1.0087e-06
Epoch 75/300
884/884 ━━━━━━━━━━━━━━━━━━━━ 18s 20ms/step - loss: 6.6190e-05 - mae: 0.0069 - val_loss: 2.9064e-05 - val_mae: 0.0038 - learning_rate: 1.0087e-06
2025-01-31 02:29:45,755 - INFO - Evaluating final LSTM model...
95/95 ━━━━━━━━━━━━━━━━━━━━ 2s 13ms/step
2025-01-31 02:29:47,478 - INFO - Test MSE: 0.0765
2025-01-31 02:29:47,479 - INFO - Test RMSE: 0.2765
2025-01-31 02:29:47,479 - INFO - Test MAE: 0.1770
2025-01-31 02:29:47,479 - INFO - Test R2 Score: 0.9937
2025-01-31 02:29:47,479 - INFO - Directional Accuracy: 0.4823
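A note for readers comparing these two numbers: the R2 score is computed on price levels while directional accuracy compares the sign of successive changes, so 0.9937 and 0.4823 are not in tension; a model that closely tracks the previous level scores a high R2 without predicting direction. The script's exact formula is not shown in this diff; a sketch of directional accuracy as usually defined:

import numpy as np

def directional_accuracy(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Fraction of steps where the predicted and actual price moves share a sign."""
    actual_dir = np.sign(np.diff(actual))
    predicted_dir = np.sign(np.diff(predicted))
    return float(np.mean(actual_dir == predicted_dir))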
First 40 Actual vs. Predicted Prices:
+-------+--------------+-------------------+
| Index | Actual Price | Predicted Price   |
+-------+--------------+-------------------+
| 0     | 65.26        | 64.37000274658203 |
| 1     | 65.12        | 64.76000213623047 |
| 2     | 65.32        | 64.98999786376953 |
| 3     | 65.29        | 65.0999984741211  |
| 4     | 65.26        | 65.04000091552734 |
| 5     | 65.29        | 65.16000366210938 |
| 6     | 65.26        | 65.19999694824219 |
| 7     | 65.48        | 65.06999969482422 |
| 8     | 65.29        | 65.08999633789062 |
| 9     | 65.25        | 65.04000091552734 |
| 10    | 65.35        | 65.0999984741211  |
| 11    | 65.14        | 65.05000305175781 |
| 12    | 65.2         | 65.0199966430664  |
| 13    | 65.21        | 65.01000213623047 |
| 14    | 65.1         | 64.94000244140625 |
| 15    | 65.45        | 64.87000274658203 |
| 16    | 65.26        | 65.13999938964844 |
| 17    | 65.24        | 65.08999633789062 |
| 18    | 65.43        | 65.12000274658203 |
| 19    | 65.22        | 65.18000030517578 |
| 20    | 65.34        | 65.16999816894531 |
| 21    | 65.13        | 65.20999908447266 |
| 22    | 65.05        | 65.01000213623047 |
| 23    | 64.94        | 65.05000305175781 |
| 24    | 64.94        | 64.91000366210938 |
| 25    | 64.85        | 64.83000183105469 |
| 26    | 64.98        | 64.83000183105469 |
| 27    | 64.93        | 64.80999755859375 |
| 28    | 64.86        | 64.80999755859375 |
| 29    | 64.71        | 64.81999969482422 |
| 30    | 64.89        | 64.56999969482422 |
| 31    | 64.89        | 64.7699966430664  |
| 32    | 64.97        | 64.83000183105469 |
| 33    | 65.03        | 64.79000091552734 |
| 34    | 64.99        | 64.95999908447266 |
| 35    | 64.95        | 64.8499984741211  |
| 36    | 64.89        | 64.88999938964844 |
| 37    | 64.87        | 64.8499984741211  |
| 38    | 64.72        | 64.87000274658203 |
| 39    | 64.63        | 64.70999908447266 |
+-------+--------------+-------------------+
2025-01-31 02:30:07,570 - WARNING - You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
2025-01-31 02:30:07,639 - INFO - Saved best LSTM model and scaler objects (best_lstm_model.h5, scaler_features.pkl, scaler_target.pkl).
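As with the Optuna warning, Keras spells out the migration itself. Assuming the save call matches the filename in the log line above, the native-format equivalent is a one-line change (downstream loads would then point at the .keras file):

# Legacy HDF5 format (still works, but flagged as deprecated):
model.save("best_lstm_model.h5")
# Native Keras format recommended by the warning:
model.save("best_lstm_model.keras")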
2025-01-31 02:30:07,640 - INFO - Starting DQN hyperparameter tuning with Optuna using 54 parallel trials...
[I 2025-01-31 02:30:07,640] A new study created in memory with name: no-name-7f6e13ed-f0e1-4c91-bfa6-ff8fbfdd7d46
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py:753: FutureWarning: suggest_loguniform has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use suggest_float(..., log=True) instead.
  lr = trial.suggest_loguniform("lr", 1e-5, 1e-2)
  [the identical FutureWarning is printed three more times, once per additional worker]
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
  warnings.warn(
  [the identical UserWarning is printed four more times, once per additional worker]
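The Stable-Baselines3 warning refers to the old Gym API this script's environment implements (reset() returning only obs, step() returning a 4-tuple). A skeleton of the Gymnasium-shaped contract SB3 recommends; the class name and the 17-feature observation shape are illustrative (the feature count matches the shapes logged above), and the trading logic itself would be unchanged:

import numpy as np
import gymnasium as gym

class GymnasiumShapedTradingEnv(gym.Env):
    """Shows only the return contracts Gymnasium expects; trading logic omitted."""
    observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(17,), dtype=np.float32)
    action_space = gym.spaces.Discrete(3)  # 0=SELL, 1=HOLD, 2=BUY, as in this script

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(17, dtype=np.float32), {}          # (obs, info) instead of obs

    def step(self, action):
        obs = np.zeros(17, dtype=np.float32)
        reward, terminated, truncated = 0.0, True, False   # old `done` splits in two
        return obs, reward, terminated, truncated, {}      # 5-tuple instead of 4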
Exception ignored in: <function Variable.__del__ at 0x79927f66a8e0>
Traceback (most recent call last):
  File "/home/kleinpanic/.pyenv/versions/3.11.4/lib/python3.11/tkinter/__init__.py", line 410, in __del__
    if self._tk.getboolean(self._tk.call("info", "exists", self._name)):
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: main thread is not in main loop
  [the identical Variable.__del__ traceback is printed three more times]
Exception ignored in: <function Image.__del__ at 0x79920e1fc9a0>
Traceback (most recent call last):
  File "/home/kleinpanic/.pyenv/versions/3.11.4/lib/python3.11/tkinter/__init__.py", line 4082, in __del__
    self.tk.call('image', 'delete', self.name)
RuntimeError: main thread is not in main loop
Tcl_AsyncDelete: async handler deleted by the wrong thread
zsh: IOT instruction (core dumped)  python3 LSTMDQN.py BAT.csv
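The run therefore ends in a hard crash shortly after DQN tuning starts: Tk GUI objects are being destroyed from Optuna's worker threads, and Tk only tolerates its main thread. Assuming the Tk handles come from matplotlib figures created inside the parallel trials (TkAgg being matplotlib's default interactive backend in this kind of setup), the usual fix is to force a non-interactive backend before pyplot is first imported:

import matplotlib
matplotlib.use("Agg")  # headless, thread-safe backend; must run before any pyplot import
import matplotlib.pyplot as plt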
Binary file not shown.
Before Width: | Height: | Size: 89 KiB  After Width: | Height: | Size: 91 KiB
24  src/Machine-Learning/LSTM-python/src/lt  (new file)
@@ -0,0 +1,24 @@
2025-02-01 15:31:11,610 - INFO - Agent achieved final net worth: $10000.00
2025-02-01 15:31:11,611 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-01 15:31:11,611 - INFO - Training DQN agent: Attempt 9 with hyperparameters: {'lr': 0.00043046721, 'gamma': 0.95, 'exploration_fraction': 0.25999999999999995, 'buffer_size': 10000, 'batch_size': 64}
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
  warnings.warn(
Using cpu device
2025-02-01 17:06:19,852 - INFO - Agent achieved final net worth: $10000.00
2025-02-01 17:06:19,853 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-01 17:06:19,853 - INFO - Training DQN agent: Attempt 10 with hyperparameters: {'lr': 0.000387420489, 'gamma': 0.95, 'exploration_fraction': 0.27999999999999997, 'buffer_size': 10000, 'batch_size': 64}
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/stable_baselines3/common/vec_env/patch_gym.py:49: UserWarning: You provided an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Stable-Baselines3 is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
  warnings.warn(
Using cpu device
2025-02-01 18:41:36,874 - INFO - Agent achieved final net worth: $10000.00
2025-02-01 18:41:36,874 - INFO - Performance below threshold. Adjusting hyperparameters and retrying...
2025-02-01 18:41:36,875 - WARNING - Failed to train a satisfactory DQN agent after multiple attempts.
2025-02-01 18:41:36,875 - INFO - Running final inference with the trained DQN model...
Traceback (most recent call last):
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 869, in <module>
    main()
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 795, in main
    action, _ = best_agent.predict(obs, deterministic=True)
                ^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'predict'
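Note on this log: every attempt ends at exactly the $10,000.00 initial balance (the agent never trades its way above the threshold), so best_agent was never assigned before inference and the run crashed on None. This is the failure mode the `best_agent is None` fallback added to LSTMDQN.py earlier in this commit now guards against; with it, the run proceeds with the last trained agent instead of raising AttributeError.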
0  src/Machine-Learning/LSTM-python/src/output2.txt  (new file)
347  src/Machine-Learning/LSTM-python/src/t.t  (new file)
@@ -0,0 +1,347 @@
(venv) kleinpanic@kleinpanic:~/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src$ py LSTMDQN.py BAT.csv
2025-01-31 22:41:37.524313: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1738363297.545380 3148462 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1738363297.551750 3148462 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-31 22:41:37.573675: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-31 22:41:43,005 - INFO - ===== Resource Statistics =====
2025-01-31 22:41:43,005 - INFO - Physical CPU Cores: 28
2025-01-31 22:41:43,005 - INFO - Logical CPU Cores: 56
2025-01-31 22:41:43,005 - INFO - CPU Usage per Core: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
2025-01-31 22:41:43,005 - INFO - No GPUs detected.
2025-01-31 22:41:43,005 - INFO - =================================
2025-01-31 22:41:43,006 - INFO - Configured TensorFlow to use CPU with optimized thread settings.
2025-01-31 22:41:43,006 - INFO - Loading data from: BAT.csv
2025-01-31 22:41:44,326 - INFO - Data columns after renaming: ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
2025-01-31 22:41:44,339 - INFO - Data loaded and sorted successfully.
2025-01-31 22:41:44,339 - INFO - Calculating technical indicators...
2025-01-31 22:41:44,370 - INFO - Technical indicators calculated successfully.
2025-01-31 22:41:44,379 - INFO - Starting parallel feature engineering with 54 workers...
2025-01-31 22:41:53,902 - INFO - Parallel feature engineering completed.
2025-01-31 22:41:54,028 - INFO - Scaled training features shape: (14134, 15, 17)
2025-01-31 22:41:54,028 - INFO - Scaled validation features shape: (3028, 15, 17)
2025-01-31 22:41:54,028 - INFO - Scaled testing features shape: (3030, 15, 17)
2025-01-31 22:41:54,028 - INFO - Scaled training target shape: (14134,)
2025-01-31 22:41:54,028 - INFO - Scaled validation target shape: (3028,)
2025-01-31 22:41:54,029 - INFO - Scaled testing target shape: (3030,)
2025-01-31 22:41:54,029 - INFO - Starting LSTM hyperparameter optimization with Optuna using 54 parallel trials...
[I 2025-01-31 22:41:54,029] A new study created in memory with name: no-name-58aeb7f7-b8be-4643-9d01-0d7bcf35db2e
/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py:370: FutureWarning: suggest_loguniform has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use suggest_float(..., log=True) instead.
  learning_rate = trial.suggest_loguniform('learning_rate', 1e-5, 1e-2)
[W 2025-01-31 22:41:54,037] Trial 0 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.3458004047482393, 'learning_rate': 0.00032571516657639116, 'optimizer': 'Adam', 'decay': 5.1271378208025266e-05} because of the following error: NameError("name 'X_train' is not defined").
Traceback (most recent call last):
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
    value_or_values = func(trial)
                      ^^^^^^^^^^^
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
    model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
             ^^^^^^^
NameError: name 'X_train' is not defined
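The same NameError kills every trial in this run, so the root cause is worth stating once here (the identical traceback repeats for each failed trial below and is omitted from the remaining entries): lstm_objective (LSTMDQN.py:383) references X_train, but in this revision nothing binds that name in any scope the objective can see. One conventional repair is to close over the arrays explicitly instead of relying on a module-level global. A sketch, where make_lstm_objective is an illustrative name, the search space is copied from the trial parameters logged in this file, epochs and batch size are placeholders, and the non-deprecated suggest_float API is used:

def make_lstm_objective(X_train, y_train, X_val, y_val):
    """Bind the datasets into the Optuna objective instead of relying on globals."""
    def lstm_objective(trial):
        hyperparams = {
            'num_lstm_layers': trial.suggest_int('num_lstm_layers', 1, 3),
            'lstm_units': trial.suggest_categorical('lstm_units', [32, 64, 96, 128]),
            'dropout_rate': trial.suggest_float('dropout_rate', 0.1, 0.5),
            'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True),
            'optimizer': trial.suggest_categorical('optimizer', ['Adam', 'Nadam']),
            'decay': trial.suggest_float('decay', 1e-6, 1e-4, log=True),
        }
        model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
        history = model_.fit(X_train, y_train, validation_data=(X_val, y_val),
                             epochs=50, batch_size=32, verbose=0)
        return min(history.history['val_loss'])
    return lstm_objective

# study.optimize(make_lstm_objective(X_train, y_train, X_val, y_val), n_trials=30)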
[W 2025-01-31 22:41:54,040] Trial 1 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.41366725075244426, 'learning_rate': 1.4215518116455374e-05, 'optimizer': 'Adam', 'decay': 2.4425472693131955e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,041] Trial 0 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,044] Trial 2 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 64, 'dropout_rate': 0.4338960746358078, 'learning_rate': 0.0008904040106011442, 'optimizer': 'Nadam', 'decay': 5.346913345250019e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,045] Trial 1 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,048] Trial 3 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.12636442800548273, 'learning_rate': 0.00021216094172774624, 'optimizer': 'Adam', 'decay': 6.289573710217091e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,051] Trial 4 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 96, 'dropout_rate': 0.4118163224442708, 'learning_rate': 0.0001753425558060621, 'optimizer': 'Nadam', 'decay': 1.0106893106530013e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,053] Trial 5 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 96, 'dropout_rate': 0.22600776619683294, 'learning_rate': 4.6020052773101484e-05, 'optimizer': 'Nadam', 'decay': 1.401502701741485e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,054] Trial 2 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,059] Trial 6 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.49745444543788064, 'learning_rate': 0.004560559624417403, 'optimizer': 'Adam', 'decay': 9.80562105055051e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,060] Trial 3 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,064] Trial 7 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.11175568582439271, 'learning_rate': 0.000970072556392495, 'optimizer': 'Adam', 'decay': 5.792236253956584e-06} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,065] Trial 4 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,069] Trial 8 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.4128314285072633, 'learning_rate': 0.000545928656752339, 'optimizer': 'Adam', 'decay': 8.349182110406793e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,095] Trial 8 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,073] Trial 5 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,076] Trial 10 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.312090359026424, 'learning_rate': 0.004334434878981849, 'optimizer': 'Nadam', 'decay': 8.946685227991797e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,078] Trial 11 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 32, 'dropout_rate': 0.3176109191721788, 'learning_rate': 0.0010138486071155559, 'optimizer': 'Nadam', 'decay': 2.864596673239629e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,084] Trial 6 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,084] Trial 12 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.23624224169024638, 'learning_rate': 0.0007065434808473306, 'optimizer': 'Adam', 'decay': 1.6045047417478787e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
|
||||||
|
[W 2025-01-31 22:41:54,088] Trial 7 failed with value None.
|
||||||
|
[W 2025-01-31 22:41:54,072] Trial 9 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 64, 'dropout_rate': 0.32982534569008337, 'learning_rate': 0.00044815992336546054, 'optimizer': 'Nadam', 'decay': 1.2045464023339681e-05} because of the following error: NameError("name 'X_train' is not defined").
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/_optimize.py", line 197, in _run_trial
|
||||||
|
value_or_values = func(trial)
|
||||||
|
^^^^^^^^^^^
|
||||||
|
File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 383, in lstm_objective
|
||||||
|
model_ = build_lstm((X_train.shape[1], X_train.shape[2]), hyperparams)
|
||||||
|
^^^^^^^
|
||||||
|
NameError: name 'X_train' is not defined
[W 2025-01-31 22:41:54,097] Trial 10 failed with value None.
[W 2025-01-31 22:41:54,101] Trial 11 failed with value None.
[W 2025-01-31 22:41:54,104] Trial 12 failed with value None.
[W 2025-01-31 22:41:54,108] Trial 9 failed with value None.
[W 2025-01-31 22:41:54,126] Trial 13 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 32, 'dropout_rate': 0.4314674109696518, 'learning_rate': 0.00020500811974021594, 'optimizer': 'Nadam', 'decay': 9.329438318207097e-05} because of the following error: NameError("name 'X_train' is not defined").
[the identical NameError traceback shown above for trial 9 followed each of the "failed with parameters" messages; duplicates omitted]
[W 2025-01-31 22:41:54,126] Trial 13 failed with value None.
[W 2025-01-31 22:41:54,137] Trial 14 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 64, 'dropout_rate': 0.45933740233556053, 'learning_rate': 0.0016981825407295947, 'optimizer': 'Nadam', 'decay': 3.7526439477629106e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,138] Trial 14 failed with value None.
[W 2025-01-31 22:41:54,139] Trial 15 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.13179726561423677, 'learning_rate': 0.009702870830616994, 'optimizer': 'Nadam', 'decay': 1.5717160470745384e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,140] Trial 15 failed with value None.
[W 2025-01-31 22:41:54,142] Trial 16 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.1184952725205303, 'learning_rate': 0.0002901212127436873, 'optimizer': 'Adam', 'decay': 1.2671796687995818e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,143] Trial 16 failed with value None.
[W 2025-01-31 22:41:54,145] Trial 17 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.3911357548507932, 'learning_rate': 2.1174519659994443e-05, 'optimizer': 'Adam', 'decay': 7.113124525281298e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,146] Trial 18 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 128, 'dropout_rate': 0.194308829860494, 'learning_rate': 2.3684641389781485e-05, 'optimizer': 'Nadam', 'decay': 2.1823222065039084e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,146] Trial 17 failed with value None.
[W 2025-01-31 22:41:54,147] Trial 19 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 64, 'dropout_rate': 0.34952903992289974, 'learning_rate': 0.0001649975428188158, 'optimizer': 'Nadam', 'decay': 8.961070238582916e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,148] Trial 18 failed with value None.
[W 2025-01-31 22:41:54,150] Trial 19 failed with value None.
[W 2025-01-31 22:41:54,151] Trial 20 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 32, 'dropout_rate': 0.24862299600787863, 'learning_rate': 3.160302043940613e-05, 'optimizer': 'Nadam', 'decay': 4.432627646713297e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,152] Trial 22 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.24247452680935244, 'learning_rate': 0.009143026717679506, 'optimizer': 'Nadam', 'decay': 3.8695560131185495e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,154] Trial 23 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 96, 'dropout_rate': 0.27974565379013505, 'learning_rate': 0.0005552121580002416, 'optimizer': 'Adam', 'decay': 6.460942114176827e-06} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,155] Trial 20 failed with value None.
[W 2025-01-31 22:41:54,155] Trial 21 failed with parameters: {'num_lstm_layers': 3, 'lstm_units': 64, 'dropout_rate': 0.31566223075768207, 'learning_rate': 0.00013277190404539305, 'optimizer': 'Nadam', 'decay': 5.448184988496794e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,156] Trial 22 failed with value None.
[W 2025-01-31 22:41:54,157] Trial 24 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 64, 'dropout_rate': 0.20684570701871122, 'learning_rate': 2.02919005955524e-05, 'optimizer': 'Nadam', 'decay': 6.367297091468678e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,158] Trial 23 failed with value None.
[W 2025-01-31 22:41:54,158] Trial 25 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 64, 'dropout_rate': 0.14749229469818195, 'learning_rate': 1.6074589705354466e-05, 'optimizer': 'Nadam', 'decay': 2.9293835054420393e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,161] Trial 26 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 128, 'dropout_rate': 0.38879633341946584, 'learning_rate': 2.5036537142341482e-05, 'optimizer': 'Nadam', 'decay': 4.8346386929100394e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,161] Trial 21 failed with value None.
[W 2025-01-31 22:41:54,161] Trial 27 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.4311830196294676, 'learning_rate': 6.15743775325322e-05, 'optimizer': 'Adam', 'decay': 2.5290071255921133e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,162] Trial 28 failed with parameters: {'num_lstm_layers': 1, 'lstm_units': 32, 'dropout_rate': 0.14813081091496075, 'learning_rate': 0.0017948222377220397, 'optimizer': 'Adam', 'decay': 9.679895886200194e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,163] Trial 24 failed with value None.
[W 2025-01-31 22:41:54,164] Trial 29 failed with parameters: {'num_lstm_layers': 2, 'lstm_units': 64, 'dropout_rate': 0.4827525644514289, 'learning_rate': 0.000583829520138558, 'optimizer': 'Adam', 'decay': 3.9540551700479366e-05} because of the following error: NameError("name 'X_train' is not defined").
[W 2025-01-31 22:41:54,165] Trial 25 failed with value None.
[W 2025-01-31 22:41:54,166] Trial 26 failed with value None.
[W 2025-01-31 22:41:54,167] Trial 27 failed with value None.
[W 2025-01-31 22:41:54,168] Trial 28 failed with value None.
[W 2025-01-31 22:41:54,169] Trial 29 failed with value None.
Traceback (most recent call last):
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 897, in <module>
    main()
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/LSTMDQN.py", line 685, in main
    best_lstm_params = study_lstm.best_params
                       ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/study.py", line 119, in best_params
    return self.best_trial.params
           ^^^^^^^^^^^^^^^
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/study/study.py", line 162, in best_trial
    best_trial = self._storage.get_best_trial(self._study_id)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kleinpanic/git-clones/MidasTechnologies/src/Machine-Learning/LSTM-python/src/venv/lib/python3.11/site-packages/optuna/storages/_in_memory.py", line 249, in get_best_trial
    raise ValueError("No trials are completed yet.")
ValueError: No trials are completed yet.
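
This final crash follows directly from the failures above: every trial raised, so the study contains no completed trials, and study_lstm.best_params (which delegates to best_trial) has nothing to return. A small guard before reading the result fails fast with a clear message instead of this traceback; a minimal sketch, assuming the study_lstm name from the traceback (the logging call is illustrative):

import logging

import optuna

# Only read best_params when at least one trial actually completed;
# otherwise exit with an explicit error instead of an opaque ValueError.
completed = [t for t in study_lstm.trials
             if t.state == optuna.trial.TrialState.COMPLETE]
if not completed:
    logging.error("All %d LSTM trials failed; no hyperparameters to select.",
                  len(study_lstm.trials))
    raise SystemExit(1)
best_lstm_params = study_lstm.best_params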