TEAMER: Experimental Validation and Analysis of Deep Reinforcement Learning Control for Wave Energy Converters
Through this TEAMER project, Michigan Technological University (MTU) collaborated with Oregon State University (OSU) to test the performance of a Deep Reinforcement Learning (DRL) control in the wave tank. Unlike model-based controls, DRL control is model-free and can directly maximize the performance of the Wave Energy Converter (WEC) in terms of power production, regardless of system complexity. While DRL control has demonstrated promising performance in previous studies, this project aimed to (1) evaluate the practical performance of DRL control and (2) identify the challenges and limitations associated with its practical implementation.
To investigate the real-world performance of DRL-based control, the controller was trained with the LUPA numerical model using MATLAB/Simulink Deep Learning Toolbox and implemented on the Laboratory Upgrade Point Absorber (LUPA) device developed by the facility at OSU. A series of regular and irregular wave tests were conducted to evaluate the power harvested by the DRL control across different wave conditions, using various observation state selections, and incorporating a reward function that includes a penalty on the PTO force.
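The reward structure mentioned above (power maximization with a penalty on the PTO force) can be illustrated with a minimal sketch. This is not the project's actual reward function (that is defined in the retraining models within the dataset); the sign convention, penalty weight, and force normalization here are assumptions for illustration only:

```python
def drl_reward(f_pto, x_dot, penalty_weight=0.1, f_max=100.0):
    """Illustrative DRL reward: absorbed power minus a PTO-force penalty.

    f_pto:          PTO force [N] (assumed sign convention: opposes motion)
    x_dot:          buoy heave velocity [m/s]
    penalty_weight: weight on the force penalty (assumed value)
    f_max:          normalizing force for the penalty term (assumed value)
    """
    power = -f_pto * x_dot                      # power absorbed when force opposes velocity
    penalty = penalty_weight * (f_pto / f_max) ** 2
    return power - penalty
```

A quadratic penalty of this kind discourages the agent from demanding large actuator forces while still rewarding absorbed power.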
The dataset consists of seven main parts:
(1) the Post Access Report
(2) the test log containing the test ID, description, test data filename, wave data filename, wave condition, and test notes for all tests in the LUPA Testing Data
(3) the tank testing results as described in the DRL Test Log
(4) the model used for retraining the DRL control and associated results
(5) the model used for pre-training the DRL control and associated results
(6) the scripts used for processing the data
(7) a readme file indicating the folder contents and structure within the resources "LUPA Pretraining Data.zip", "LUPA Retraining Data.zip", and "ScriptsForPostProcessing.zip"
This testing was funded by the TEAMER Request for Technical Support (RFTS) 10 program.
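The archives can be fetched directly from the accessURLs listed in the distribution metadata below. A minimal sketch using only the Python standard library (filenames and URLs are copied from the metadata; the destination folder name is arbitrary):

```python
import urllib.request
from pathlib import Path

# Direct download URLs from the dataset's distribution metadata.
FILES = {
    "DRLTestLog.xlsx": "https://mhkdr.openei.org/files/628/DRLTestLog.xlsx",
    "LUPA Pretraining Data.zip": "https://mhkdr.openei.org/files/628/LUPA%20Pretraining%20Data.zip",
    "LUPA Retraining Data.zip": "https://mhkdr.openei.org/files/628/LUPA%20Retraining%20Data.zip",
    "LUPA Testing Data.zip": "https://mhkdr.openei.org/files/628/LUPA%20Testing%20Data.zip",
    "Readme.txt": "https://mhkdr.openei.org/files/628/Readme%20%282%29.txt",
    "ScriptsForPostProcessing.zip": "https://mhkdr.openei.org/files/628/ScriptsForPostProcessing%20%281%29.zip",
}

def download_all(dest="lupa_data"):
    """Download every distribution file, skipping ones already present."""
    out = Path(dest)
    out.mkdir(exist_ok=True)
    for name, url in FILES.items():
        target = out / name
        if not target.exists():
            urllib.request.urlretrieve(url, target)
    return sorted(p.name for p in out.iterdir())
```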
Complete Metadata
| @type | dcat:Dataset |
|---|---|
| accessLevel | public |
| bureauCode |
[
"019:20"
]
|
| contactPoint |
{
"fn": "Shangyan Zou",
"@type": "vcard:Contact",
"hasEmail": "mailto:shangyan@mtu.edu"
}
|
| dataQuality |
true
|
| description | Through this TEAMER project, Michigan Technological University (MTU) collaborated with Oregon State University (OSU) to test the performance of a Deep Reinforcement Learning (DRL) control in the wave tank. Unlike model-based controls, DRL control is model-free and can directly maximize the performance of the Wave Energy Converter (WEC) in terms of power production, regardless of system complexity. While DRL control has demonstrated promising performance in previous studies, this project aimed to (1) evaluate the practical performance of DRL control and (2) identify the challenges and limitations associated with its practical implementation. To investigate the real-world performance of DRL-based control, the controller was trained with the LUPA numerical model using MATLAB/Simulink Deep Learning Toolbox and implemented on the Laboratory Upgrade Point Absorber (LUPA) device developed by the facility at OSU. A series of regular and irregular wave tests were conducted to evaluate the power harvested by the DRL control across different wave conditions, using various observation state selections, and incorporating a reward function that includes a penalty on the PTO force. The dataset consists of seven main parts: (1) the Post Access Report (2) the test log containing the test ID, description, test data filename, wave data filename, wave condition, and test notes for all tests in the LUPA Testing Data (3) the tank testing results as described in the DRL Test Log (4) the model used for retraining the DRL control and associated results (5) the model used for pre-training the DRL control and associated results (6) the scripts used for processing the data (7) a readme file indicating the folder contents and structure within the resources "LUPA Pretraining Data.zip", "LUPA Retraining Data.zip", and "ScriptsForPostProcessing.zip" This testing was funded by the TEAMER Request for Technical Support (RFTS) 10 program. |
| distribution |
[
{
"@type": "dcat:Distribution",
"title": "DRL Test Log.xlsx",
"format": "xlsx",
"accessURL": "https://mhkdr.openei.org/files/628/DRLTestLog.xlsx",
"mediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"description": "The test log that records all the tests conducted in the wave tank and corresponds to the data and files saved within the "LUPA Testing Data.zip" file"
},
{
"@type": "dcat:Distribution",
"title": "LUPA Pretraining Data.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/628/LUPA%20Pretraining%20Data.zip",
"mediaType": "application/zip",
"description": "Model used for pretraining of the DRL control and the associated results"
},
{
"@type": "dcat:Distribution",
"title": "LUPA Retraining Data.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/628/LUPA%20Retraining%20Data.zip",
"mediaType": "application/zip",
"description": "Model used for DRL control training under regular, irregular, and penalized wave tests. The best agents that are trained using these models are also included. Please refer to the provided Readme .txt file for more details."
},
{
"@type": "dcat:Distribution",
"title": "LUPA Testing Data.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/628/LUPA%20Testing%20Data.zip",
"mediaType": "application/zip",
"description": "LUPA testing results saved during the tank access.
Description and naming of each LUPA test can be found within the "DRL Test Log" file"
},
{
"@type": "dcat:Distribution",
"title": "Readme.txt",
"format": "txt",
"accessURL": "https://mhkdr.openei.org/files/628/Readme%20%282%29.txt",
"mediaType": "text/plain",
"description": "A read me file that explains the data saved under "LUPA Pretraining Data.zip", "LUPA Retraining Data.zip", and "ScriptsForPostProcessing.zip""
},
{
"@type": "dcat:Distribution",
"title": "ScriptsForPostProcessing.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/628/ScriptsForPostProcessing%20%281%29.zip",
"mediaType": "application/zip",
"description": "This file contains the source code used to postprocess the testing results"
},
{
"@type": "dcat:Distribution",
"title": "Post Access Report.docx",
"format": "docx",
"accessURL": "https://mhkdr.openei.org/files/628/TEAMER-Test-Plan-MTU-OSU_FullReport_ver2%20%281%29.docx",
"mediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"description": "TEAMER Post Access Report"
}
]
|
| identifier | https://data.openei.org/submissions/8436 |
| issued | 2025-03-07T07:00:00Z |
| keyword |
[
"DRL",
"DRL control",
"Deep Reinforcement Learning",
"LUPA",
"Laboratory Upgrade Point Absorber",
"MHK",
"Marine",
"PTO control",
"RFTS10",
"TEAMER",
"WEC",
"Wave Energy",
"Wave Energy Converter",
"code",
"irregular wave",
"performance",
"pertaining data",
"processed data",
"regular wave",
"retraining data",
"source code",
"validation",
"wave tank"
]
|
| landingPage | https://mhkdr.openei.org/submissions/628 |
| license | https://creativecommons.org/licenses/by/4.0/ |
| modified | 2025-06-16T17:54:44Z |
| programCode |
[
"019:009"
]
|
| projectLead | Lauren Ruedy |
| projectNumber | EE0008895 |
| projectTitle | Testing Expertise and Access for Marine Energy Research |
| publisher |
{
"name": "Michigan Technological University",
"@type": "org:Organization"
}
|
| spatial |
"{"type":"Polygon","coordinates":[[[-180,-83],[180,-83],[180,83],[-180,83],[-180,-83]]]}"
|
| title | TEAMER: Experimental Validation and Analysis of Deep Reinforcement Learning Control for Wave Energy Converters |