Title: Increased computation using parallel FPGA architectures
Contributors: Duren, Russell Walker; Dark, Stephen L.
Department: Baylor University, Dept. of Electrical and Computer Engineering
Date issued: 2010-12 (deposited 2011-01-05)
URI: http://hdl.handle.net/2104/8076
Type: Thesis
Language: en-US
Format: application/pdf
Subjects: FPGA; parallel processing; hardware architectures; LabVIEW; EDK; neural network; particle swarm
Access: Worldwide access
Description: Includes bibliographical references.
Rights: Baylor University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. Contact librarywebmaster@baylor.edu for inquiries about permission.

Abstract: Two ways to improve algorithm performance in hardware are to increase the speed of each operation or to perform multiple operations simultaneously. However, the speed-up achieved by the latter depends not only on system constraints but also on design decisions. When multiple FPGAs are the implementation target, creating an optimal configuration requires the designer to be aware of many potential issues. A neural network inversion case study is presented to give future FPGA algorithm designers insight into the problems that can arise in parallel FPGA implementations. Initial work implements a large neural network and finds its inversion via particle swarm optimization on a single FPGA. The algorithm is then partitioned and executed in parallel across multiple FPGAs using several strategies on various hardware and software architectures. Finally, the issues that arose during these implementations are discussed, along with generalized guidelines.
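The abstract names particle swarm optimization (PSO) as the method used to invert the trained neural network. Below is a minimal software sketch, in Python rather than the thesis's FPGA/LabVIEW/EDK environment, of how PSO can search a network's input space for an input that produces a desired output. The toy network, its random weights, and all parameter values are illustrative assumptions, not the thesis's actual model or implementation.

    # Hypothetical sketch: neural network inversion via particle swarm
    # optimization (PSO). Given a trained feed-forward network and a desired
    # output, PSO searches the input space for an input producing that output.
    # The network weights here are random stand-ins, not a trained model.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy feed-forward network: 3 inputs -> 8 hidden (tanh) -> 2 outputs
    W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

    def forward(x):
        """Evaluate the network for a single input vector x."""
        h = np.tanh(W1 @ x + b1)
        return W2 @ h + b2

    def fitness(x, target):
        """Squared error between the network output and the desired output."""
        err = forward(x) - target
        return float(err @ err)

    def pso_invert(target, n_particles=30, n_iters=200,
                   w=0.7, c1=1.5, c2=1.5, bounds=(-2.0, 2.0)):
        """Search for an input whose network output approximates `target`."""
        dim = W1.shape[1]
        lo, hi = bounds
        pos = rng.uniform(lo, hi, size=(n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([fitness(p, target) for p in pos])
        gbest = pbest[np.argmin(pbest_f)].copy()
        gbest_f = pbest_f.min()

        for _ in range(n_iters):
            r1 = rng.random(size=pos.shape)
            r2 = rng.random(size=pos.shape)
            # Standard PSO velocity update: inertia + cognitive + social terms
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            f = np.array([fitness(p, target) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            if f.min() < gbest_f:
                gbest, gbest_f = pos[f.argmin()].copy(), f.min()
        return gbest, gbest_f

    if __name__ == "__main__":
        target = np.array([0.5, -0.25])
        x_best, err = pso_invert(target)
        print("input:", x_best, "output:", forward(x_best), "error:", err)

Because each particle's fitness evaluation is independent of the others, the per-particle evaluation loop is the natural candidate for the kind of parallel, multi-device execution the thesis explores across multiple FPGAs.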