Over the course of the year and the progression of the project, a battery of tests was performed. All of these are recorded internally for documentation purposes, but they are omitted here for brevity. Nearly 100 distinct networking tests were run, and the number of VM and disk speed tests far exceeded that.
Some of the largest were at-scale tests, designed to verify that the system could perform under appropriate hardware configurations. To this end, two types of external tests were run.
With Gigabit Ethernet being the critical failure point of the system, it was necessary to determine whether this bottleneck could be overcome under the right conditions. Testing therefore moved to 10GbE connections at RIT's Research Computing department. Using the same system between a pair of nodes (the entire cluster could not be equipped), testing proceeded on the faster infrastructure.
On this infrastructure, performance increased significantly. Because nodes were no longer receiving data slower than they could process it, the system was bottlenecked only by data storage and the computing power of the nodes. Overall performance hovered between 36 and 40 frames per second. This test in and of itself demonstrates the viability of the system, which is a significant accomplishment.
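The arithmetic behind the bottleneck is straightforward, and a rough sketch makes it concrete. The frame size and resolution below are illustrative assumptions, not figures measured in these tests; the point is only that raw frame delivery saturates a gigabit link long before it saturates a 10GbE link.

```python
# Back-of-envelope check: can the network deliver frames faster than
# the nodes can process them? Frame size here is an assumed example
# (raw 1080p at 3 bytes per pixel), not a measured value from the project.
FRAME_BYTES = 1920 * 1080 * 3          # ~6.2 MB per uncompressed frame

def max_fps(link_gbps: float, frame_bytes: int = FRAME_BYTES) -> float:
    """Upper bound on frames per second a link can deliver."""
    bytes_per_second = link_gbps * 1e9 / 8
    return bytes_per_second / frame_bytes

print(f"Gigabit Ethernet cap: {max_fps(1):.1f} fps")   # ~20 fps
print(f"10GbE cap:            {max_fps(10):.1f} fps")  # ~200 fps
```

Under these assumptions, a gigabit link caps frame delivery at roughly 20 frames per second, while 10GbE raises that ceiling to roughly 200, shifting the bottleneck from the network to storage and compute, consistent with the 36 to 40 frames per second observed above.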
To test the scalability of the project, one final series of tests was designed.
Purchasing space in an enterprise datacenter provided the best combination of processing, storage, and networking. Twenty individual VM nodes were purchased in a DigitalOcean datacenter with 40GbE networking, Xeon processors, and solid-state disk storage. With this setup, the project processed significantly faster, reaching speeds of up to 60 frames per second. Only a small number of frames (120, to be precise) were tested, due to the expense and bandwidth charges involved. This test was intended solely to verify the scalability and reliability of the software and project at full scale.
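For reference, the throughput figures quoted above can be measured by timing a batch of frames end to end. The following is a minimal sketch under stated assumptions: process_frame and the frame source are hypothetical placeholders, since the project's actual interfaces are not reproduced here.

```python
import time

def process_frame(frame: bytes) -> bytes:
    """Placeholder for the cluster's per-frame work (hypothetical)."""
    return frame

def measure_fps(frames: list[bytes]) -> float:
    """Time a batch of frames end to end and report throughput."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# A 120-frame batch mirrors the scale-test workload described above.
frames = [bytes(1024) for _ in range(120)]
print(f"{measure_fps(frames):.1f} fps")
```

Keeping the batch small, as in the 120-frame run described above, bounds both the runtime of a test and the bandwidth charges it incurs.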