Week 9
I spent the majority of my time this week debugging, doing more analysis on my small swarm, and working to improve my neural networks using the small swarm as training data (results shown below).
Modification 17
- Keeping the original number of channels, I used an early stopping patience of 5 (see the early stopping sketch after this list)
- The small swarm was divided into 605 cubes with a 13-voxel overlap to use as training data
- The network stopped training at epoch 40/1000 with a training loss of 0.0170 and a validation loss of 0.0328
- When tested on the unseen cube of data, this network had a loss of 0.621
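Since several of these runs differ only in their patience value, here is a minimal sketch of what patience-based early stopping does: training halts once the validation loss has gone `patience` epochs without improving. This is a generic illustration, not my exact training loop; the class name and structure are my own for this sketch.

```python
# Minimal sketch of patience-based early stopping. The epoch counts and
# losses reported above come from the real runs, not this sketch.

class EarlyStopping:
    """Stop training when validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=5):
        self.patience = patience
        self.best_loss = float("inf")
        self.epochs_without_improvement = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True if training should stop."""
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience


# Usage inside a training loop:
#   stopper = EarlyStopping(patience=5)
#   if stopper.step(val_loss): break
```

A smaller patience stops sooner after the validation loss plateaus, which is consistent with Modification 17 (patience 5) halting earlier than the patience-25 runs below.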
Modification 18
- Keeping the original number of channels, I used an early stopping patience of 25
- The small swarm was divided into 605 cubes with a 13-voxel overlap to use as training data (see the tiling sketch after this list)
- The network stopped training at epoch 50/1000 with a training loss of 0.0123 and a validation loss of 0.0365
- When tested on the unseen cube of data, this network had a loss of 0.556
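Every modification trains on the same 605 overlapping cubes, so for reference, here is a rough sketch of how a 3D volume can be tiled into overlapping cubes. The cube size of 64 is a placeholder assumption; only the 13-voxel overlap matches my setup, and the resulting cube count depends on the volume's actual dimensions.

```python
# Hedged sketch of tiling a 3D volume into overlapping training cubes.
# `cube_size` is an assumed value; the stride is cube_size minus the overlap.
import numpy as np

def extract_cubes(volume, cube_size=64, overlap=13):
    """Slide a cube_size^3 window over `volume` with the given voxel overlap."""
    stride = cube_size - overlap
    cubes = []
    for x in range(0, volume.shape[0] - cube_size + 1, stride):
        for y in range(0, volume.shape[1] - cube_size + 1, stride):
            for z in range(0, volume.shape[2] - cube_size + 1, stride):
                cubes.append(volume[x:x + cube_size,
                                    y:y + cube_size,
                                    z:z + cube_size])
    return np.stack(cubes)
```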
Modification 19
- I doubled the original number of channels, added batch normalization, and used an early stopping patience of 25 (a sketch of these changes follows this list)
- The small swarm was divided into 605 cubes with a 13-voxel overlap to use as training data
- The network stopped training at epoch 90/1000 with a training loss of 0.0046 and a validation loss of 0.0303
- When tested on the unseen cube of data, this network had a loss of 0.696
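Modifications 19 through 22 all vary two things: a multiplier on every channel count and whether batch normalization is present. Below is a hedged sketch of the kind of convolutional block those changes affect, written in PyTorch; the kernel size, base channel count, and block structure are assumptions, not my network's exact architecture.

```python
# Hedged sketch of a UNet-style double-convolution block where channel
# counts are scaled by a multiplier and batch normalization is optional.
import torch.nn as nn

def conv_block(in_ch, out_ch, batch_norm=False):
    """Two 3x3x3 convolutions, each optionally followed by BatchNorm3d, then ReLU."""
    layers = []
    for ic, oc in [(in_ch, out_ch), (out_ch, out_ch)]:
        layers.append(nn.Conv3d(ic, oc, kernel_size=3, padding=1))
        if batch_norm:
            layers.append(nn.BatchNorm3d(oc))
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

# e.g. halved channels with batch norm (Modification 21), assuming a base of 32:
multiplier = 0.5
block = conv_block(1, int(32 * multiplier), batch_norm=True)
```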
Modification 20
- I halved the original number of channels and used an early stopping patience of 25
- The small swarm was divided into 605 cubes with a 13-voxel overlap to use as training data
- The network stopped training at epoch 70/1000 with a training loss of 0.0163 and a validation loss of 0.0338
- When tested on the unseen cube of data, this network had a loss of 0.492
Modification 21
- I halved the original number of channels, added batch normalization, and used an early stopping patience of 25
- The small swarm was divided into 605 cubes with a 13-voxel overlap to use as training data
- The network stopped training at epoch 100/1000 with a training loss of 0.0151 and a validation loss of 0.0402
- When tested on the unseen cube of data, this network had a loss of 0.399
Modification 22
- Keeping the original number of channels, I added batch normalization and used an early stopping patience of 25
- The small swarm was divided into 605 cubes with a 13-voxel overlap to use as training data
- The network stopped training at epoch 100/1000 with a training loss of 0.0098 and a validation loss of 0.0251
- When tested on the unseen cube of data, this network had a loss of 0.710
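For completeness, here is a minimal sketch of the test step each modification reports: computing the loss on the one cube held out from training. It assumes a PyTorch model with sigmoid outputs and a binary cross-entropy loss; the loss function and tensor names are assumptions, not necessarily my exact setup.

```python
# Hedged sketch of evaluating on the held-out cube. Assumes `model` outputs
# per-voxel probabilities in [0, 1] and `test_labels` is a float binary mask.
import torch
import torch.nn as nn

@torch.no_grad()
def test_loss(model, test_cube, test_labels):
    """Compute the loss on the one cube held out from training."""
    model.eval()
    criterion = nn.BCELoss()
    prediction = model(test_cube.unsqueeze(0))  # add a batch dimension
    return criterion(prediction, test_labels.unsqueeze(0)).item()
```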
As I mentioned last week, I believe it's also important to look at varying threshold values for deciding whether a bee is present, so I reran everything with a different threshold to compare the results. Compared to the original threshold of 0.5, the following plots, which use a threshold of 0.2, show fewer differences between the labels and the binary output. The lower threshold allowed the lighter areas of the UNet output to be counted as bees, so more of each bee's perimeter was included. (A sketch of this thresholding step follows the plots.)
Modification 9:
Modification 10:
Modification 11:
Modification 12:
Modification 13:
Modification 14:
Modification 15:
Modification 16:
Modification 17:
Modification 18:
Modification 19:
Modification 20:
Modification 21:
Modification 22:
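As a concrete illustration of the thresholding step described above: the UNet emits per-voxel probabilities, and a voxel is counted as a bee when its probability exceeds the threshold, so lowering the threshold from 0.5 to 0.2 admits the fainter voxels around each bee's perimeter. A minimal sketch:

```python
# Minimal sketch of binarizing a probability volume with a threshold.
import numpy as np

def binarize(unet_output, threshold=0.2):
    """Convert a per-voxel probability volume into a binary bee/not-bee mask."""
    return (unet_output > threshold).astype(np.uint8)
```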
Alongside training these neural networks, I applied them to a regular swarm for the first time. These swarms are ten times bigger than the small swarm I labeled, and I hope to eventually train a network that can label one of them well. They are inherently more challenging to label, though: because of how the data is collected, the bees along the outer edges of the swarm tend to be elongated and sometimes blend together, making them hard to identify and keep separate. You can see below that when I apply my networks to these swarms, they perform better in the center of the swarm than in these outer areas. (A sketch of tiling a larger volume for inference follows the images.)
Modification 1.27:
Modification 7:
Modification 9:
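Since the networks were trained on small cubes, running them on a swarm ten times larger means tiling the volume at inference time. Here is a hedged sketch of one way to do that; the cube size, stride, and the simple overwrite handling of overlapping regions are assumptions, not my exact inference code.

```python
# Hedged sketch of sliding-window inference over a large swarm volume.
# stride = 64 - 13 mirrors the 13-voxel training overlap; edge remainders
# that don't fit a full cube are ignored in this sketch.
import numpy as np
import torch

@torch.no_grad()
def predict_volume(model, volume, cube_size=64, stride=51):
    """Predict bee probabilities for a full swarm by tiling it into cubes."""
    model.eval()
    output = np.zeros_like(volume, dtype=np.float32)
    for x in range(0, volume.shape[0] - cube_size + 1, stride):
        for y in range(0, volume.shape[1] - cube_size + 1, stride):
            for z in range(0, volume.shape[2] - cube_size + 1, stride):
                cube = volume[x:x + cube_size, y:y + cube_size, z:z + cube_size]
                tensor = torch.from_numpy(cube).float().unsqueeze(0).unsqueeze(0)
                pred = model(tensor).squeeze().numpy()
                # Overlapping regions are simply overwritten here; averaging
                # or blending them would be a reasonable refinement.
                output[x:x + cube_size, y:y + cube_size, z:z + cube_size] = pred
    return output
```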