At Ichthion, we are glad to share more about our journey in Portoviejo — only this time from a different perspective: through computer vision. Once we deployed our plastic capture device, the Azure System, it began removing extensive amounts of waste of all forms from the river. With the mechanical device operating smoothly, we wanted to be sure we were collecting robust, accurate data about the waste being removed. Right now, we’re working on improving our initial data collection methods and building a proper dataset for our ultimate goal: an artificial intelligence (AI) object recognition algorithm. This post is all about our computer vision story.
Our computer vision journey began once we started to remove the plastic polluting the Portoviejo River. Our initial plan was to first classify the extracted plastics by type in order to train an AI system. But we didn’t know what we would find. At first glance, the waste we were pulling out of the river looked like random clusters of various items. Upon further inspection and continued extraction, however, we realized there were common items appearing repeatedly. With this insight, we created a basic classification system covering most of the plastic items coming out of the river, which in turn would eventually allow our AI system to recognize items within those classifications.
With the classification system in place, we then needed to train the AI software. To accomplish this, we collected numerous images during the first few months of the project. But to train AI software, we needed more than just images — we also needed to tell the software what to look at in those images. So, we tagged the plastic items in those images based on the categories in the classification system. The next step was to create an initial database to store the tagged images that would be used in the object recognition software.
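To give a flavour of what this tagging step produces, here is a minimal sketch of converting one tagged bounding box into the plain-text annotation format that YOLO-family detectors train on, where each line of a label file reads `<class_id> <x_center> <y_center> <width> <height>` with coordinates normalised to [0, 1]. The category names below are hypothetical placeholders, not our actual taxonomy.

```python
# Hypothetical example categories -- stand-ins for the real classification system.
CATEGORIES = ["pet_bottle", "plastic_bag", "food_wrapper", "styrofoam"]

def to_yolo_line(category: str, box: tuple, img_w: int, img_h: int) -> str:
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into one
    normalised YOLO label line for an image of size img_w x img_h."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w   # normalised box centre
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w          # normalised box size
    h = (y_max - y_min) / img_h
    class_id = CATEGORIES.index(category)
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A 200x100-pixel bag tagged in a 640x480 image:
print(to_yolo_line("plastic_bag", (100, 50, 300, 150), 640, 480))
```

One such `.txt` file sits alongside each image, with one line per tagged item.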
With this initial database, we trained the open-source object recognition algorithm YOLOv4 on some of the most common items found in the river. The results have been quite interesting. The images above and below show examples of waste items identified by the trained object recognition algorithm. As you can see, the system recognizes different types of items on the ground and on the conveyor belt of the Azure System.
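A detector like YOLOv4 typically proposes several overlapping boxes for the same object, so its raw output is filtered with non-maximum suppression before the detections shown above are drawn. The following is an illustrative pure-Python sketch of that standard filtering step, not our production code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_threshold=0.5):
    """Greedy non-maximum suppression.
    detections: list of (box, confidence); returns the kept detections,
    highest-confidence first, dropping boxes that overlap a kept one."""
    kept = []
    for det in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(det[0], k[0]) <= iou_threshold for k in kept):
            kept.append(det)
    return kept

# Two overlapping candidates for one item, plus one separate item:
detections = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]
print(nms(detections))  # the 0.8 box overlaps the 0.9 box and is suppressed
```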
After training the AI system on the initial dataset, we found that we needed more information. The more images we have in our dataset, the more accurate and successful our AI system will be. To build out this improved dataset, we have been using a data augmentation process, in which we take an original image and transform it to create new, useful ones. We implemented this augmentation process, as well as a data management system, using Python scripts.
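As a simple illustration of why augmentation needs scripting rather than just copying files: every transform applied to an image must also be applied to its annotations. The sketch below, which uses a nested list as a stand-in for a real image array and is not our actual pipeline, shows a horizontal flip that updates the bounding boxes alongside the pixels.

```python
def hflip(image, boxes, img_w):
    """Horizontally flip an image (a list of pixel rows) and its boxes.
    Boxes are pixel-space (x_min, y_min, x_max, y_max); after the flip,
    a box's x-range [x_min, x_max] becomes [img_w - x_max, img_w - x_min]."""
    flipped_image = [row[::-1] for row in image]           # mirror each row
    flipped_boxes = [(img_w - x_max, y_min, img_w - x_min, y_max)
                     for (x_min, y_min, x_max, y_max) in boxes]
    return flipped_image, flipped_boxes

# A tiny 3x2 "image" with one box hugging its left edge:
img, boxes = hflip([[1, 2, 3], [4, 5, 6]], [(0, 0, 1, 2)], img_w=3)
print(img)    # rows reversed
print(boxes)  # the box now hugs the right edge
```

Each flipped, rotated, or colour-jittered copy becomes a new tagged image in the dataset at no extra collection cost.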
Although we have amassed an initial dataset of tagged images, there is still a long way to go. To be successful, we will need many more images. The final dataset must be built with careful consideration for all of the details presented in the images, such as the objects, the environment, and more. For this reason, as we improve our dataset we plan to use an accurate, well-established process that gives us as much control as possible over the contents of the images.
As our computer vision journey continues, our next steps are to evaluate the preliminary results of the AI system, continue to improve the database, and keep training the software. Then, we will iterate these steps until our AI system is recognizing items consistently and accurately. Ultimately, our goal for computer vision is to improve local policies by providing sound data for evidence-based policymaking.