Week 9

I spent the majority of my time this week debugging, doing more analysis on my small swarm, and continuing my efforts to improve my neural networks using the small swarm as training data (results shown below).

Modification 17

  • Keeping the original channel numbers, I used an early stopping patience of 5
  • The small swarm was divided into 605 cubes with 13 voxels of overlap to use as training data (the tiling is sketched after this list)
  • The network stopped training at epoch 40/1000 with a training loss of 0.0170 and a validation loss of 0.0328
  • When tested on the unseen cube of data, this network had a loss of 0.621
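Since the cube tiling shows up in every experiment below, here is a minimal sketch of how a 3D volume can be cut into overlapping cubes. The 64-voxel cube edge and the `tile_volume` name are placeholders for illustration; only the 605-cube count and the 13-voxel overlap are fixed above.

```python
import numpy as np

def tile_volume(volume, cube=64, overlap=13):
    """Cut a 3D volume into overlapping cubes for training.

    The cube edge of 64 is a placeholder; only the 13-voxel
    overlap is taken from the experiments above.
    """
    stride = cube - overlap
    cubes = []
    for z in range(0, volume.shape[0] - cube + 1, stride):
        for y in range(0, volume.shape[1] - cube + 1, stride):
            for x in range(0, volume.shape[2] - cube + 1, stride):
                cubes.append(volume[z:z + cube, y:y + cube, x:x + cube])
    # Edge remainders that don't fill a whole cube are dropped in this sketch.
    return np.stack(cubes)
```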

Modification 18

  • Keeping the original channel numbers, I used an early stopping patience of 25 (the patience logic is sketched after this list)
  • The small swarm was divided into 605 cubes with 13 voxels of overlap to use as training data
  • The network stopped training at epoch 50/1000 with a training loss of 0.0123 and a validation loss of 0.0365
  • When tested on the unseen cube of data, this network had a loss of 0.556
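The early-stopping rule in these experiments is just patience on the validation loss. A generic sketch of that logic (not necessarily the exact callback I used):

```python
class EarlyStopper:
    """Stop training once validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=25):
        self.patience = patience
        self.best = float("inf")
        self.epochs_since_best = 0

    def should_stop(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.epochs_since_best = 0
        else:
            self.epochs_since_best += 1
        return self.epochs_since_best >= self.patience
```

In the training loop, `should_stop(val_loss)` is checked once per epoch and training breaks when it returns True, which is presumably why the runs above end well short of the 1000-epoch budget.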

Modification 19

  • I doubled the original channel numbers, added batch normalization, and used an early stopping patience of 25 (one such block is sketched after this list)
  • The small swarm was divided into 605 cubes with 13 voxels of overlap to use as training data
  • The network stopped training at epoch 90/1000 with a training loss of 0.0046 and a validation loss of 0.0303
  • When tested on the unseen cube of data, this network had a loss of 0.696
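Modifications 19 through 22 vary two knobs: the channel count and whether batch normalization follows each convolution. A hedged sketch of what one such 3D block might look like in PyTorch (the layer order and kernel size are assumptions, not taken from my actual network):

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, batch_norm=True):
    """One 3D convolution block; doubling or halving the channel
    numbers means scaling `out_ch` at every level of the UNet."""
    layers = [nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)]
    if batch_norm:
        layers.append(nn.BatchNorm3d(out_ch))  # omitted in the non-BN runs
    layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)
```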

Modification 20

  • I halved the original channel numbers and used an early stopping patience of 25
  • The small swarm was divided into 605 cubes with 13 voxels of overlap to use as training data
  • The network stopped training at epoch 70/1000 with a training loss of 0.0163 and a validation loss of 0.0338
  • When tested on the unseen cube of data, this network had a loss of 0.492

Modification 21

  • I halved the original channel numbers, added batch normalization, and used an early stopping patience of 25
  • The small swarm was divided into 605 cubes with 13 voxels of overlap to use as training data
  • The network stopped training at epoch 100/1000 with a training loss of 0.0151 and a validation loss of 0.0402
  • When tested on the unseen cube of data, this network had a loss of 0.399

Modification 22

  • Keeping the original channel numbers, I added batch normalization and used an early stopping patience of 25
  • The small swarm was divided into 605 cubes with 13 voxels of overlap to use as training data
  • The network stopped training at epoch 100/1000 with a training loss of 0.0098 and a validation loss of 0.0251
  • When tested on the unseen cube of data, this network had a loss of 0.710

As I mentioned last week, I believe it’s also important to look at varying threshold values when deciding whether a bee is present, so I reran everything with a different threshold to compare the results. Compared to the original threshold of 0.5, the following plots, which use a threshold of 0.2, appear to show fewer differences between the labels and the binary output. The lower threshold allowed the lighter areas of the UNet output to be counted as bees, so more of each bee’s perimeter was included (the thresholding step itself is sketched after these plots).

Modification 9: [figures]

Modification 10: [figures]

Modification 11: [figures]

Modification 12: [figures]

Modification 13: [figures]

Modification 14: [figures]

Modification 15: [figures]

Modification 16: [figures]

Modification 17: [figures]

Modification 18: [figures]

Modification 19: [figures]

Modification 20: [figures]

Modification 21: [figure]

Modification 22: [figure]
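The thresholding step itself is a single elementwise comparison. A minimal sketch, assuming the network outputs raw logits that go through a sigmoid:

```python
import torch

def binarize(logits, threshold=0.2):
    """Convert UNet output to a bee/no-bee mask.

    Lowering the threshold from 0.5 to 0.2 lets fainter voxels,
    typically around the bees' perimeters, count as bee.
    """
    probs = torch.sigmoid(logits)  # assumes the network outputs raw logits
    return (probs >= threshold).float()
```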


Alongside training these neural networks, I tried applying them to a regular swarm for the first time. These swarms are ten times bigger than the small swarm I labeled, and I hope to eventually train a network that can label one of them well. They are inherently more challenging to label, though: due to the nature of how the data is collected, the bees along the outer edges of the swarm tend to be elongated and sometimes blend together, making them hard to identify and keep separate. You can see below that when I apply my networks to these swarms, they perform better in the center of the swarm than in these outer areas. (A sketch of how tiled inference over a full swarm might look follows these figures.)

Modification 1.27: [figure]

Modification 7: [figure]

Modification 9: [figure]
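A network trained on small cubes can’t see a full swarm at once, so inference on these larger volumes presumably has to be tiled as well. One common approach is to average predictions where the cubes overlap; a sketch under that assumption (the cube size and the `net` interface are placeholders, not confirmed details of my pipeline):

```python
import numpy as np
import torch

def predict_volume(net, volume, cube=64, overlap=13, device="cpu"):
    """Run a cube-trained network over a large volume, averaging
    predictions where the overlapping cubes meet."""
    stride = cube - overlap
    out = np.zeros(volume.shape, dtype=np.float32)
    weight = np.zeros(volume.shape, dtype=np.float32)
    net.eval()
    with torch.no_grad():
        for z in range(0, volume.shape[0] - cube + 1, stride):
            for y in range(0, volume.shape[1] - cube + 1, stride):
                for x in range(0, volume.shape[2] - cube + 1, stride):
                    patch = volume[z:z + cube, y:y + cube, x:x + cube]
                    t = torch.from_numpy(patch).float()[None, None].to(device)
                    pred = torch.sigmoid(net(t))[0, 0].cpu().numpy()
                    out[z:z + cube, y:y + cube, x:x + cube] += pred
                    weight[z:z + cube, y:y + cube, x:x + cube] += 1.0
    return out / np.maximum(weight, 1.0)  # avoid dividing by zero at edges
```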

Written on August 2, 2024