Fish Detection AI, Optic and Sonar-trained Object Detection Models
The Fish Detection AI project aims to improve the efficiency of fish monitoring around marine energy facilities in order to comply with regulatory requirements. Despite advances in computer vision, relatively little work has addressed sonar imagery, the detection of small fish with unlabeled data, or underwater fish-monitoring methods for marine energy.
A YOLO (You Only Look Once) computer vision model was developed using the EyeSea dataset (optical) and sonar images from the Alaska Department of Fish and Game to identify fish in underwater environments. Supervised YOLO models were trained on labeled fish data and then applied to unseen datasets, with the aim of reducing the need to label data and train new models for each new location. Hyper-image analysis and various image preprocessing methods were also explored to enhance fish detection.
In this research we achieved:
1. Improved YOLO performance relative to a published baseline (Xu and Matzner, 2018) that used earlier YOLO versions for fish detection. Specifically, we achieved a best mean Average Precision (mAP) of 0.68 on the EyeSea optical dataset with a medium-sized YOLOv8 model, surpassing the YOLOv3 benchmarks reported in that article. We further demonstrated up to 0.65 mAP on unseen sonar domains by using a hyper-image approach that stacks consecutive frames (sketched below), showing promising cross-domain adaptability.
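As a minimal sketch of the hyper-image idea, assuming consecutive grayscale sonar frames have already been extracted to disk; the file names and the three-frame stack depth are illustrative assumptions, not the exact configuration used in the experiments:

```python
# Minimal sketch of a hyper-image: stack consecutive grayscale frames into
# one multi-channel array so a detector can see motion cues across frames.
# Frame file names and the stack depth of 3 are assumptions for illustration.
import cv2
import numpy as np

def build_hyper_image(frame_paths):
    """Stack consecutive single-channel frames along the channel axis."""
    frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in frame_paths]
    if any(f is None for f in frames):
        raise FileNotFoundError("One or more frames could not be read")
    return np.stack(frames, axis=-1)  # shape: (height, width, num_frames)

# Three consecutive sonar frames become one 3-channel image that can be fed
# to a standard YOLO pipeline in place of an RGB input.
hyper = build_hyper_image(["frame_0001.png", "frame_0002.png", "frame_0003.png"])
print(hyper.shape)
```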
This data submission includes:
- The best-performing trained YOLO model weights (PyTorch .pt files), which can be loaded directly for object detection (a short usage sketch follows this list). These are found in the Yolo_models_downloaded zip file
- A documentation file ("Yolo_Object_Detection_How_To_Document.docx") explaining the upload and the goals of each of experiments 1-5
- Code files: five sub-folders of Python, shell, and YAML files used to run experiments 1-5, plus a separate folder for the YOLO models. Each is provided as its own zip file, named after the corresponding experiment
- Sample data structures (sample1 and sample2, each in its own zip file) showing how the raw downloaded data should be organized after running the provided processing code
- A link to the article we were replicating (Xu and Matzner, 2018)
- A link to the YOLO documentation site from the model's original creators (Ultralytics)
- A link to the downloadable EyeSea dataset from PNNL; instructions on downloading and formatting the data to replicate these experiments are in the How To document, and the label-format sketch after this list shows the annotation convention YOLO training expects
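For orientation, here is a minimal sketch of how the provided .pt weights could be applied with the ultralytics Python package; the weight and image file names are placeholders, and the exact commands used in each experiment are documented in the How To document.

```python
# Minimal sketch: load one of the provided .pt weight files with the
# Ultralytics YOLO API and run detection on a single image.
# "best_fish_model.pt" and "fish_frame.png" are placeholder names.
from ultralytics import YOLO

model = YOLO("best_fish_model.pt")            # trained weights from the zip
results = model.predict("fish_frame.png", conf=0.25)

for result in results:
    for box in result.boxes:
        # Pixel-coordinate bounding box and confidence for each detected fish
        print(box.xyxy[0].tolist(), float(box.conf[0]))
```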
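Likewise, a hedged sketch of the normalized bounding-box label convention that Ultralytics YOLO training expects (one line per object: class index, box center, and box size, all scaled to [0, 1]); whether the EyeSea labels ship in exactly this form or are converted by the provided scripts is covered in the How To document, and the numbers below are made up.

```python
# Hedged sketch of the YOLO label convention: "class x_center y_center width height",
# with all coordinates normalized by image width and height. Values are illustrative.
def parse_yolo_label(line):
    cls, xc, yc, w, h = line.split()
    return int(cls), float(xc), float(yc), float(w), float(h)

# A single small fish annotation: class 0, centered near the middle of the frame.
print(parse_yolo_label("0 0.512 0.430 0.058 0.031"))
```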
Complete Metadata
| @type | dcat:Dataset |
|---|---|
| accessLevel | public |
| bureauCode | ["019:20"] |
| contactPoint | {"fn": "Victoria Sabo", "@type": "vcard:Contact", "hasEmail": "mailto:sabo_victoria@bah.com"} |
| dataQuality | true |
| description | The Fish Detection AI project aims to improve the efficiency of fish monitoring around marine energy facilities in order to comply with regulatory requirements. Despite advances in computer vision, relatively little work has addressed sonar imagery, the detection of small fish with unlabeled data, or underwater fish-monitoring methods for marine energy. A YOLO (You Only Look Once) computer vision model was developed using the EyeSea dataset (optical) and sonar images from the Alaska Department of Fish and Game to identify fish in underwater environments. Supervised YOLO models were trained on labeled fish data and then applied to unseen datasets, with the aim of reducing the need to label data and train new models for each new location. Hyper-image analysis and various image preprocessing methods were also explored to enhance fish detection. In this research we achieved: 1. Improved YOLO performance relative to a published baseline (Xu and Matzner, 2018) that used earlier YOLO versions for fish detection. Specifically, we achieved a best mean Average Precision (mAP) of 0.68 on the EyeSea optical dataset with a medium-sized YOLOv8 model, surpassing the YOLOv3 benchmarks reported in that article. We further demonstrated up to 0.65 mAP on unseen sonar domains by using a hyper-image approach (stacking consecutive frames), showing promising cross-domain adaptability. This data submission includes: - The best-performing trained YOLO model weights (PyTorch .pt files), which can be loaded directly for object detection; these are found in the Yolo_models_downloaded zip file - A documentation file ("Yolo_Object_Detection_How_To_Document.docx") explaining the upload and the goals of each of experiments 1-5 - Code files: five sub-folders of Python, shell, and YAML files used to run experiments 1-5, plus a separate folder for the YOLO models, each provided as its own zip file named after the corresponding experiment - Sample data structures (sample1 and sample2, each in its own zip file) showing how the raw downloaded data should be organized after running the provided processing code - A link to the article we were replicating (Xu and Matzner, 2018) - A link to the YOLO documentation site from the model's original creators (Ultralytics) - A link to the downloadable EyeSea dataset from PNNL (instructions on downloading and formatting the data to replicate these experiments are in the How To document) |
| distribution |
[
{
"@type": "dcat:Distribution",
"title": "Underwater Fish Detection Article by Xu and Matzner 2018",
"format": "01494",
"accessURL": "https://arxiv.org/pdf/1811.01494",
"mediaType": "application/octet-stream",
"description": "The article that was used to compare the results from our experimentation, namely an article by Xu and Matzner from 2018, titled Underwater Fish Detection using Deep Learning for Water Power Applications."
},
{
"@type": "dcat:Distribution",
"title": "Pacific Northwest National Laboratory EyeSea Fish Optic Images and Labels",
"format": "HTML",
"accessURL": "https://data.pnnl.gov/group/nodes/dataset/12978",
"mediaType": "text/html",
"description": "PNNL website for the specific EyeSea dataset, with a button to click to download the dataset to a local computer. This data is large, 80GB, and contains both labels and images broken down into training and testing folders and subfolders based on source (i.e., ORPC Igiugig, Voith Hydro, Wells Dam). The date of original creation also varies based on image source, with dates from 6/25/2014, 7/5/2014, 7/19/2015, 7/22/2015, and 6/27/2017"
},
{
"@type": "dcat:Distribution",
"title": "Ultralytics Public Documentation Website on YOLO Model Version 8",
"format": "HTML",
"accessURL": "https://docs.ultralytics.com/models/yolov8/",
"mediaType": "text/html",
"description": "The ultralytics website of documentation for Yolov8, which can be accessed by the public and has downloadable versions of several model version numbers, sizes, and datasets."
},
{
"@type": "dcat:Distribution",
"title": "Caltech Fish Counting Domain Adaptive Object Detection CFC-DAOD Dataset",
"format": "HTML",
"accessURL": "https://github.com/visipedia/caltech-fish-counting/tree/main/CFC-DAOD",
"mediaType": "text/html",
"description": "Caltech Fish Counting Domain Adaptive Object Detection (CFC-DAOD) dataset which is available via their GitHub"
},
{
"@type": "dcat:Distribution",
"title": "Yolo_Object_Detection_How_To_Document.docx",
"format": "docx",
"accessURL": "https://mhkdr.openei.org/files/600/20250310_Yolo_object_detection_how_tos_for_client_upload.docx",
"mediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"description": "The How To document that details what is fully in the upload, what each file does, how to understand the data, what to run to get certain results, and much more."
},
{
"@type": "dcat:Distribution",
"title": "Experiment1_Resources.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/600/Exp1_uploads_for_client.zip",
"mediaType": "application/zip",
"description": "The files (shell, python scripts) that can be run to complete Experiment 1 of the project, namely the object detection done using an out-of-the-box YOLO model not trained on any fish-specific items"
},
{
"@type": "dcat:Distribution",
"title": "Experiment2_Resources.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/600/Exp2_uploads_for_clients.zip",
"mediaType": "application/zip",
"description": "The files (shell, python scripts) that can be run to complete Experiment 2 of the project, namely the training and parameter experimentation for YOLO object detection algorithms done on optic images"
},
{
"@type": "dcat:Distribution",
"title": "Experiment3_and_Experiment4_Resources.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/600/Exp3_and_Exp4_client_deliverable.zip",
"mediaType": "application/zip",
"description": "The files (shell, python scripts) that can be run to complete Experiments 3 and 4 of the project, namely the pre-processing experimentation, and the single-source trained models"
},
{
"@type": "dcat:Distribution",
"title": "Experiment5_Resources.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/600/Exp5_client_deliverables.zip",
"mediaType": "application/zip",
"description": "The files (shell, python scripts) that can be run to complete Experiment 5 of the project, namely the object detection done on sonar images"
},
{
"@type": "dcat:Distribution",
"title": "Yolo_models_downloaded.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/600/Yolo_models_downloaded.zip",
"mediaType": "application/zip",
"description": "All the PyTorch (.pt) YOLO model weights, namely the trained best models from our experimentation, or downloaded models from the Ultralytics website which were the bases for our training"
},
{
"@type": "dcat:Distribution",
"title": "Sample1_data_structure.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/600/data_div_subfolders_sample1.zip",
"mediaType": "application/zip",
"description": "Example file directory structure for the PNNL EyeSea images, which should take effect after the user runs the data processing scripts after they have downloaded the raw data"
},
{
"@type": "dcat:Distribution",
"title": "Sample2_data_structure.zip",
"format": "zip",
"accessURL": "https://mhkdr.openei.org/files/600/data_div_subfolders_sample2.zip",
"mediaType": "application/zip",
"description": "Example file directory structure for the PNNL EyeSea images, which should take effect after the user runs the data processing scripts after they have downloaded the raw data"
}
]
|
| identifier | https://data.openei.org/submissions/8419 |
| issued | 2014-06-25T06:00:00Z |
| keyword |
[
"AI",
"EyeSea dataset",
"Eyesea",
"Eyesea optical dataset",
"Fish Detection AI",
"Hydrokinetic",
"MHK",
"Marine",
"PyTorch",
"PyTorch code",
"Python",
"Shell code",
"Sonar-trained Object Detection Models",
"YOLO model",
"YOLO performance",
"YOLO version 8",
"YOLOv8",
"Yaml code",
"code",
"cross-domain adaptability",
"energy",
"hyper-image approach",
"neural networks",
"object detection",
"power",
"small fish detection",
"you only look once model"
]
|
| landingPage | https://mhkdr.openei.org/submissions/600 |
| license | https://creativecommons.org/licenses/by/4.0/ |
| modified | 2025-05-21T15:42:18Z |
| programCode | ["019:009"] |
| projectLead | Samantha Eaves |
| projectNumber | "32326" |
| projectTitle | Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), Water Power Technologies Office (WPTO) |
| publisher | {"name": "Water Power Technology Office", "@type": "org:Organization"} |
| spatial | {"type":"Polygon","coordinates":[[[-180,-83],[180,-83],[180,83],[-180,83],[-180,-83]]]} |
| title | Fish Detection AI, Optic and Sonar-trained Object Detection Models |