
Capstone 2021 Electrical & Computer Engineering

Acoustic Detection Partial Discharge

Kai Arsenault, Thomas Casey, Michael Leblanc, Mateus Millard
Faculty Advisor: James McCusker

Sponsored by Eversource

Partial discharge (PD) is a localized breakdown of a solid or fluid insulation system under high voltage. Partial discharge can be broken into three categories: tracking, corona, and surface, with the latter producing a visible arc of electricity. If left untreated, partial discharge can erode insulation systems and cause the failure of important high-voltage systems. Because these failures are costly to repair, several methods exist for detecting partial discharge early enough to perform preventative maintenance, including on-line measurement, ultrasonic recording, and radio wave reading. The purpose of this project is to develop a method that uses a short-time Fourier transform and a continuous wavelet transform to analyze audio recordings around insulation systems for evidence of partial discharge in both surface and corona form.
Keywords: Partial discharge, acoustic, detection
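
As an illustration of the band-energy analysis a short-time Fourier transform enables, the sketch below flags recording frames whose energy in an assumed PD-related ultrasonic band rises well above the median frame's band energy. The band limits, frame size, and threshold are illustrative placeholders, not the project's tuned values.

```python
import numpy as np

def detect_pd_bursts(signal, fs, band=(30e3, 80e3),
                     frame=1024, hop=512, threshold_db=20.0):
    """Short-time spectral check: FFT each windowed frame and flag
    frames whose energy inside `band` exceeds the median frame's
    band energy by `threshold_db` dB (placeholder parameters)."""
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    band_mask = (freqs >= band[0]) & (freqs <= band[1])
    n_frames = 1 + (len(signal) - frame) // hop
    energy = np.empty(n_frames)
    for i in range(n_frames):
        spec = np.fft.rfft(window * signal[i * hop : i * hop + frame])
        energy[i] = np.sum(np.abs(spec[band_mask]) ** 2)
    energy_db = 10 * np.log10(energy / np.median(energy))
    return energy_db > threshold_db
```

A real detector would combine this with the continuous wavelet transform the abstract mentions to localize short corona bursts in time.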

 

View the PDF for Acoustic Detection Partial Discharge

Acoustic Detection Partial Discharge Project Video

Watch the Acoustic Detection Partial Discharge video here!

Acoustic Detection Partial Discharge Project Video Watch on YouTube

APA Bot: Automated Precision Agricultural Device for Monitoring Sustainable Farming Systems 

Kaycee Salgueiro, Andy Wagner
Faculty Advisor: Filip Cuckov

The APA bot is designed to allow for easier monitoring of more complex farming systems, such as agroforestry and intercropping. Designed to address the need for more sustainability in the farming industry, our device reduces the workload associated with these systems by autonomously traversing a farming field while gathering environmental data on the soil and crops that would otherwise need to be gathered manually. Currently, most large-scale farms use monocropping, which over a period of 10 years or longer causes significant damage to the land and depletes the soil of nutrients and bacteria, rendering it infertile. The biggest hesitation farmers have about changing to more sustainable methods is the increased complexity and labor associated with multi-crop farming systems. The APA bot uses an Arduino-controlled sensor network to monitor the environment and soil of the farm and a camera for weed identification. The sensor network incorporates a barometer and temperature, humidity, and soil moisture sensors. This data is then sent to a user interface where the user can view raw data and warnings if the sensor data falls into extreme high or low ranges. The weed identification portion of the device uses a Raspberry Pi single-board computer and a camera for image processing. The machine learning algorithm for identifying weeds with the camera was built on "You Only Look Once" (YOLOv3), a real-time detection framework trained with a custom dataset. With its precision obstacle avoidance, the APA bot successfully and accurately reports farm environmental data.
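
The high/low warning logic described for the user interface can be sketched as a simple range check. The limits below are illustrative placeholders, not agronomic values from the project.

```python
# Hypothetical safe operating ranges; real limits depend on the
# crops and sensors used (these are illustrative only).
SAFE_RANGES = {
    "temperature_c": (5.0, 35.0),
    "humidity_pct": (20.0, 90.0),
    "pressure_hpa": (950.0, 1050.0),
    "soil_moisture_pct": (15.0, 60.0),
}

def check_readings(readings):
    """Return warning strings for any reading outside its safe range."""
    warnings = []
    for name, value in readings.items():
        low, high = SAFE_RANGES[name]
        if value < low:
            warnings.append(f"{name} LOW: {value} < {low}")
        elif value > high:
            warnings.append(f"{name} HIGH: {value} > {high}")
    return warnings
```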

 

View the PDF for APA Bot: Automated Precision Agricultural Device for Monitoring Sustainable Farming Systems 

APA Bot: Automated Precision Agricultural Device for Monitoring Sustainable Farming Systems  Project Video

Watch the APA Bot: Automated Precision Agricultural Device for Monitoring Sustainable Farming Systems  video here!

APA Bot: Automated Precision Agricultural Device for Monitoring Sustainable Farming Systems  Project Video Watch on YouTube

Automated Visual Inspection Machine

Daniel Arpide, Tim Miller, Gabriel Pauta
Faculty Advisor: Aaron Carpenter

Current automated vision inspection (AVI) systems have low adaptability and rely too much on human interaction. AVI machines either focus on a specific printed circuit board (PCB) or a specific component. The dull, repetitive routine of manually checking PCBs can lead to errors in the inspection, so skilled workers must rotate shifts every two hours. A company having to constantly rotate its workers while owning a machine that cannot inspect a wide variety of PCBs or components suffers low throughput, wasted time, and a loss of money. Through this project, an AI-integrated AVI machine will be developed and tested for a leading global PCB manufacturer to match high-speed production with high-speed inspection. The use of conveyance, cameras, and AI image processing software will allow many PCB defects to be inspected and documented. By creating a large data set of labeled PCB images within the AI platform, adaptability will rise significantly and human interaction will be reduced, owing to the algorithms' ability to improve over time. This single AVI machine will inspect multiple types of PCBs through a single piece of software, unlike other machines that require different algorithms for each type of PCB. The proposed AVI machine will be run by a main computer that will receive images from cameras and send signals to a programmable logic controller (PLC) that, in turn, will send and receive digital signals to conduct proper conveyance. If the algorithm identifies a damaged board, a stack light will display a red light and sound an alarm to notify an operator of an error. The operator will then make the final decision of whether the board will be disposed of. Once it maintains and/or surpasses the current performance of the best inspection technician, the AVI machine will be integrated into the natural workflow of the company and will aim to reduce the given department's workload by 20%. 
Keywords: Artificial Intelligence, Printed Circuit Board, Machine Vision, Quality Assurance.
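
One baseline form of the board comparison such a machine performs is a "golden image" pixel diff, a pass/fail check against a known-good board photo. This is an assumed simplification for illustration, not the project's AI pipeline; the tolerances are placeholders.

```python
import numpy as np

def inspect_board(image, golden, pixel_tol=30, defect_frac=0.001):
    """Compare a grayscale board image to a known-good 'golden' image.
    Pass when the fraction of pixels differing by more than
    `pixel_tol` gray levels stays below `defect_frac`."""
    diff = np.abs(image.astype(np.int16) - golden.astype(np.int16))
    frac = float(np.mean(diff > pixel_tol))
    return frac < defect_frac
```

An AI-based inspector replaces the fixed tolerances with a learned classifier, which is what lets one piece of software cover multiple PCB types.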

 

View the PDF for Automated Visual Inspection Machine

Automated Visual Inspection Machine Project Video

Watch the Automated Visual Inspection Machine video here!

Automated Visual Inspection Machine Project Video Watch on YouTube

Automatic Window

Jennifer Anitus, Wendy Lebron
Faculty Advisor: James McCusker

Despite government regulations and the designs implemented in residentially and commercially sold windows, many windows still pose a range of safety concerns to consumers. These concerns range from the safeguarding of children to protection from home invasion/theft and interior damage from precipitation. Our goal is to address the defects of currently manufactured windows with a design that implements mechanical operation of the window alongside a variety of devices that detect rain and obstructions. Our design resulted in a window that operates simply by the push of a button and responds automatically to rain/snow and motion.

 

View the PDF for Automatic Window

Canvoice

Noah Damergi, Aidan Nelson-Peck
Faculty Advisor: Aaron Carpenter

Canvoice is a program that allows users to digitally design images using their voice. No keyboard or mouse is needed when using the program. For example, say the user wants to draw a square. To prevent the program from making assumptions about user choice, the user can choose the color, the dimensions, and the location on the program's canvas where they would like to place the square. One possible phrase could be, "Draw a cyan square with length 45, at 250 and 300." Canvoice uses speech recognition technology to transcribe the phrase and reads the respective variables required for each command. Once read, it outputs the shape using digital art functions based on the variables it was given. To help create Canvoice, our group researched Node.js, an open-source JavaScript runtime environment. The speech-to-text and canvas functions were created with the help of frameworks published for Node.js.
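
The command-parsing step, extracting color, size, and position from a transcribed phrase, can be sketched with a regular expression. This grammar handles only the one square-drawing phrase form quoted above and is an illustrative assumption, not Canvoice's actual parser (which is JavaScript).

```python
import re

# Hypothetical grammar for one command form:
# "Draw a <color> square with length <n>, at <x> and <y>"
PATTERN = re.compile(
    r"draw a (\w+) square with length (\d+),? at (\d+) and (\d+)",
    re.IGNORECASE,
)

def parse_square_command(phrase):
    """Return the drawing parameters, or None if the phrase
    does not match the expected command form."""
    m = PATTERN.search(phrase)
    if not m:
        return None
    color, length, x, y = m.groups()
    return {"shape": "square", "color": color.lower(),
            "length": int(length), "x": int(x), "y": int(y)}
```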

 

View the PDF for Canvoice

Canvoice Project Video

Watch the Canvoice video here!

Canvoice Project Video Watch on YouTube

CHUCHU - Cognitive Hand Unit for Collaboration with Human Users

Jacob Schlosser
Faculty Advisor: Filip Cuckov

CHUCHU, an open-source robotic gripper, is designed to replicate the form and function of a human hand for collaborative robotics research. Nearly all of CHUCHU's mechanical parts are 3D-printable, except for the screws/bolts needed to connect said parts. CHUCHU possesses four over-actuated digits (three fingers and a thumb), each with a tactile sensor at its end. Each of these tactile sensors comprises an array of pressure sensors, encased in a fingertip-shaped capsule of urethane rubber. An integrated, lightweight machine learning solution allows CHUCHU to gain a better grasp of how to hold different objects over time, improving its grip with each attempt. Originally designed as an upgrade for ReThink Robotics' Baxter, CHUCHU's software has been modified to allow it to function as an independent system; its architecture also makes it capable of integrating with any existing robotic system that utilizes ROS (Robot Operating System). Several of CHUCHU's mechanical parts have also been redesigned, with the goal of making them significantly easier to 3D print. Following the project's conclusion, all information needed to replicate a working CHUCHU unit will be released as open source to the public, under the GNU 3.0 license.
Keywords: robotics, robotic hand, robotic gripper, machine learning, collaborative robotics, open source, GNU 3.0, Robot Operating System, ROS
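
"Improving its grip with each attempt" can be illustrated with a minimal per-object adaptation rule: raise the commanded force after a slip, relax it slightly after a firm hold. This is an assumed toy scheme for illustration only, not CHUCHU's actual learning algorithm, and all parameters are placeholders.

```python
class GripLearner:
    """Toy per-object grip-force adaptation (assumed scheme):
    increase force after a slip, decay it gently after success."""
    def __init__(self, initial_force=1.0, step=0.2, floor=0.2):
        self.initial_force = initial_force
        self.step = step
        self.floor = floor
        self.force = {}          # object name -> learned force

    def suggest(self, obj):
        """Force to command for the next grasp of `obj`."""
        return self.force.get(obj, self.initial_force)

    def feedback(self, obj, slipped):
        """Update the stored force from the outcome of a grasp."""
        f = self.suggest(obj)
        f = f + self.step if slipped else max(self.floor, f - self.step / 4)
        self.force[obj] = f
```

A real implementation would learn from the full fingertip pressure arrays rather than a single scalar force.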

 

View the PDF for CHUCHU - Cognitive Hand Unit for Collaboration with Human Users

CHUCHU - Cognitive Hand Unit for Collaboration with Human Users Project Video

Watch the CHUCHU - Cognitive Hand Unit for Collaboration with Human Users video here!

CHUCHU - Cognitive Hand Unit for Collaboration with Human Users Project Video Watch on YouTube

Cluster of Recycled Devices

Chandler Berry, Adrian Boodoosingh, Marcus Drab
Faculty Advisor: Aaron Carpenter

The goal of this project is to increase access to low-cost computational power while at the same time reducing the global volume of electronic waste by maximizing the usable lifespan of consumer devices which would otherwise be thrown away. This is accomplished by creating a high performance cluster of used smartphones and laptops to provide essential services to potential users such as non-profit organizations and schools. Containerization technology is used to ensure the cluster can provide a wide array of open source services while being flexible enough to support various recycled devices. Organizations can host a cluster locally for increased security, or access a cluster based in the cloud to maximize availability. Any device with an internet connection can access the cluster using a powerful web interface. By creating clusters comprised of recycled/discarded devices, it is possible to give working systems to communities in need, while tackling the issue of e-waste through a new method of recycling. These clusters would be built using several recycled cellphones and X86 computers. Ideally, the cluster will be platform-agnostic, enabling a variety of devices to be used for this purpose. These systems are scalable, self-sustaining, cost-effective, and can be used in multiple scenarios all while reducing the amount of e-waste going into landfills by creating a new method of recycling electronics.
Keywords: E-Waste, Recycling, Cluster Computing, Linux, Containers

 

View the PDF for Cluster of Recycled Devices

Cluster of Recycled Devices Project Video

Watch the Cluster of Recycled Devices video here!

Cluster of Recycled Devices Project Video Watch on YouTube

Coral Sensor Hub

Michaella Gomes, Savannah Wilkinson
Faculty Advisor: Aaron Carpenter

Climate change is continuously altering many of the ecosystems on Earth. One ecosystem that is highly affected is the coral reef. Global warming and pollution can weaken coral, which can lead to disease. If these conditions persist, they will drive reefs to extinction. If coral keeps dying at this rate, the fish population will decrease, breaking the food chain for various species in the ocean. The decrease in fish life could cost humans a source of food. Coral reefs are also a natural protection for the coastline: their three-dimensional structure absorbs energy from the waves. If the coral weakens, it will not be strong enough to withstand the waves of the ocean. Therefore, finding a way to monitor the health of coral reefs is important. This device would gather data on the health of the coral and its ecosystem to help researchers find what the coral reefs need to heal. The device will float above a reef inside a buoy. It will collect data using sensors and wirelessly report the data to the user's computer, then into an Excel sheet. If anything goes wrong, the Sensor Hub will alert the user, so they do not have to watch the data constantly. This design will make it easier for researchers to take care of coral.
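
The log-then-alert flow on the receiving computer can be sketched as a CSV append plus a range check. The sensor names and alert limits below are assumptions for illustration, not the project's calibrated values.

```python
import csv
import io

FIELDS = ["timestamp", "water_temp_c", "ph", "turbidity_ntu"]
# Hypothetical healthy ranges for a reef site (illustrative only).
ALERT_LIMITS = {"water_temp_c": (23.0, 29.0), "ph": (7.8, 8.5)}

def log_reading(writer, reading):
    """Append one reading as a CSV row and return any alert messages
    so the user does not have to watch the data constantly."""
    writer.writerow([reading[k] for k in FIELDS])
    alerts = []
    for key, (low, high) in ALERT_LIMITS.items():
        value = reading[key]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {key}={value} outside [{low}, {high}]")
    return alerts
```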

 

View the PDF for Coral Sensor Hub

Coral Sensor Hub Project Video

Watch the Coral Sensor Hub video here!

Coral Sensor Hub Project Video Watch on YouTube

Design for Detection of Card Skimming Devices

Stephen Lamoretti, Joshua Ofirih, Karan Patel
Faculty Advisor: Aaron Carpenter

Card skimming is a major problem that needs to be addressed. It is a common type of security fraud that uses a skimmer to illegally gather data from the magnetic stripe of a bank card. The collected data is then used by criminals to withdraw money from the bank account of the cardholder. In this paper, the proposed solution to this problem is a closed-loop system comprising an image processing device and a Bluetooth detection device, which converge within a computer program that uses the collected data to determine whether a system has been compromised. The first step is image processing, which scans the ATM or credit card terminal using a Python-based software application and a camera system to check for any physical additions. If the image processing method captures an addition, a Bluetooth detector is then used to scan for a skimmer's Bluetooth signals. If the detector finds such a signal, it alerts the owner with a message noting that a skimmer is installed. The concepts of cybersecurity, software engineering, and hardware security play a big factor in designing the solution.
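
The two-stage decision described above, where the Bluetooth scan runs only after the image check flags a physical addition, can be sketched as follows. The return strings and the device-name example are illustrative assumptions.

```python
def skimmer_check(physical_addition_detected, bluetooth_hits):
    """Closed-loop decision combining the two detection stages.
    `physical_addition_detected` comes from the image-processing
    stage; `bluetooth_hits` is a list of suspicious device names
    found by the Bluetooth scan (run only when stage one fires)."""
    if not physical_addition_detected:
        return "clear"
    if bluetooth_hits:
        return f"ALERT: possible skimmer ({', '.join(bluetooth_hits)})"
    # Physical addition seen but no radio signal: escalate to a human.
    return "inspect manually"
```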

 

View the PDF for Design for Detection of Card Skimming Devices

Design for Detection of Card Skimming Devices Project Video

Watch the Design for Detection of Card Skimming Devices video here!

Design for Detection of Card Skimming Devices Project Video Watch on YouTube

Digital Recording of Indoor Spaces

Misam Farsab, Roan Farsab, Samih Oumar
Faculty Advisor: Kai Ren

Given the growing need to digitally record the indoor spaces of aging buildings for renovation and to preserve historical sites, combined with significant advancements in 3-dimensional (3D) capture technology, object recognition, and 3D viewing on virtual reality (VR) headsets, we propose a solution that utilizes a stereo camera to effectively capture accurate depth and color data for an immersive 3D reconstruction of indoor spaces. This approach allows for high-resolution depth mapping, accurate geometric calculations, and extraction of color information, enabling identification of objects that differ in color as well as shape. The disadvantage of using this technology is that the system is computationally intensive. Thus, we will design a system that captures the raw data in real time and processes it off-line. Then, image-based machine learning object recognition will be used to further enhance the 3D reconstruction process, as well as to assist documentation tasks for building renovation purposes.
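
The depth mapping a stereo camera enables rests on the standard pinhole stereo relation Z = f·B/d (depth from focal length, baseline, and pixel disparity). The sketch below shows that calculation; the numbers in the test are illustrative, not this camera's calibration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo triangulation: depth Z = f * B / d, where
    f is the focal length in pixels, B the camera baseline in
    meters, and d the disparity between the two views in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Larger disparities mean closer surfaces, which is why depth resolution degrades for distant walls in a large room.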

 

View the PDF for Digital Recording of Indoor Spaces

Electric Power Generation to Meet Demand at Wentworth Institute of Technology

Sean P. Prendergast, Patrick W. Collins, Abbas M. Haider
Faculty Advisor: Douglas Dow

Regulatory bodies are setting targets for companies and institutions to become carbon neutral in their energy consumption. Wentworth Institute of Technology (WIT) has a historically developed power system of supply, generation, and consumption. For WIT to improve and strategically move toward the carbon neutral energy targets, a better understanding of the current system's supply and demand balance is necessary. That understanding would enable exploration of renewable energy generation on campus. The purpose of this project was to investigate, document, and model the campus electrical supply and demand and to explore possible renewable energy on campus. A Simulink model was made to simulate the currently existing system and proposed possible extensions for renewable photovoltaic (PV) panels. The existing electrical power supply includes input from the utility grid, a natural gas-powered co-generator, and a small bank of PV panels on the roof of one building. Power consumption for the campus includes the electrical demand across the campus. A goal was to explore whether 20-25% of the campus electrical demand could be supplied with renewable energy generated on the campus. The main modeling software was MATLAB. The campus is currently supplied with roughly 60% of electricity from the utility grid, 40% from the co-generator, and less than 0.1% from the existing solar PV panels. The proposed system of a new canopy of PV panels over an existing large parking lot would bring the amount of renewable energy generation on campus up to roughly 20% of demand, dropping the amount of energy supplied by the utility grid from 60% to 40%. This proposed system would reduce the carbon footprint of the campus and help move towards the carbon neutral targets.
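
The supply-mix arithmetic in the abstract (roughly 60% grid, 40% co-generator; new PV covering about 20% of demand; grid share falling to 40%) can be checked in a few lines. The one-for-one displacement of grid purchases by PV is the assumption stated in the abstract's own figures.

```python
def energy_mix_after_pv(grid=0.60, cogen=0.40, pv_new=0.20):
    """Recompute the campus supply mix (fractions of demand) after
    adding PV, assuming the new PV displaces grid purchases
    one-for-one, as described for the parking-lot canopy."""
    new_grid = grid - pv_new
    if new_grid < 0:
        raise ValueError("PV share exceeds current grid share")
    return {"grid": new_grid, "cogen": cogen, "pv": pv_new}
```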

 

View the PDF for Electric Power Generation to Meet Demand at Wentworth Institute of Technology

Enhancing the Accessibility and Equity of Voting

Andrew Lin, Wyatt Phillips, Andre Thomas
Faculty Advisor: Aaron Carpenter

New and innovative forms of accessible voting are needed now more than ever, specifically in the United States. The 2020 election showed that the current methods of voting are unnecessarily time-consuming, carry risks of spreading COVID-19, and strain the US Postal Service in areas that allow mail-in ballots. The goal of the project is to eliminate or alleviate the problems that plague current voting systems by increasing accessibility for those who do not have the time or resources to vote through current methods. This will also help those in areas where their identifying group faces disproportionate wait times. Secondarily, this will make voting more equitable, as more people will have the same fair chance to vote. A possible outcome that would allow these goals to be reached is the implementation of an online voting solution run by a third party and marketed to state governments. This solution would solve the current infrastructure problems by reducing the traffic on existing procedures and redirecting it to an efficient online, paperless method. Many online voting proposals in the past have been pitched and failed for various reasons, most citing the security risk of voting digitally or attempting to solve different problems, such as efficiency in counting ballots. However, using online registration, human-reviewed document verification, and encryption, these potential risks will be minimized, and the system could even be more secure than the current methods in place.

 

View the PDF for Enhancing the Accessibility and Equity of Voting

Enhancing the Accessibility and Equity of Voting Project Video

Watch the Enhancing the Accessibility and Equity of Voting video here!

Enhancing the Accessibility and Equity of Voting Project Video Watch on YouTube

Eye Hear You Speak

John Glasscock, Matthew Lima, Dylan Key
Faculty Advisor: Aaron Carpenter

Eye Hear You Speak is a cloud-based Android application that provides users with live conversation transcription combined with speaker identification. The application is meant to assist the hard of hearing in social settings, enabling them to participate in conversations confidently. This is accomplished by utilizing speaker diarization, overlapping speech separation, speaker embedding clustering, and Google Cloud services. With all these elements paired together, the result is an application that provides the user with a transcription of the conversation labeled by speaker, delivered in a timely manner to ensure the best conversation experience for the end user.
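
The speaker embedding clustering step can be sketched with a greedy online scheme: each speech segment's embedding is assigned to the most cosine-similar existing speaker, or starts a new one below a similarity threshold. This is an assumed simplification for illustration; production diarization systems use more robust clustering.

```python
import numpy as np

def cluster_speakers(embeddings, threshold=0.8):
    """Greedy online clustering of speaker embeddings: assign each
    segment to the existing speaker whose running centroid is most
    cosine-similar, or open a new speaker below `threshold`."""
    labels, centroids = [], []
    for e in embeddings:
        e = e / np.linalg.norm(e)
        if centroids:
            sims = [float(c @ e) for c in centroids]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                labels.append(best)
                # Fold the new segment into the speaker centroid.
                centroids[best] = centroids[best] + e
                centroids[best] /= np.linalg.norm(centroids[best])
                continue
        labels.append(len(centroids))
        centroids.append(e)
    return labels
```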

 

View the PDF for Eye Hear You Speak

Eye Hear You Speak Project Video

Watch the Eye Hear You Speak video here!

Eye Hear You Speak Project Video Watch on YouTube

Guidance System for Oil and Gas Industry Workers

Andi Elshani, Sgardy Pena, Qasim Alahmed
Faculty Advisor: James McCusker

Hydrogen sulfide is the most common toxic gas found in the oil and gas industry. Gas leaks in this industry lead to fatalities and injuries. Even though most oil and gas industrial plants already have some kind of gas leak detection system, these systems do not provide any guidance for workers to safely evacuate. This solution proposes a distributed, wired gas leak detection system and a wireless, low-powered screen device for guidance. Wired detection systems are durable and reliable: there are no energy-saving restrictions and no need to recharge or replace batteries for wired systems.

 

View the PDF for Guidance System for Oil and Gas Industry Workers

Hazardous Gas Monitor

Collin Dos Reis, Steven Nickerson
Faculty Advisor: Kai Ren

The Hazardous Gas Monitor takes into consideration both safety and accountability. The problem is that when trying to detect a natural gas leak from outside your home, you need to rely on your natural instincts, such as listening, smelling, and looking. Now, with the Hazardous Gas Monitor, you can rely on technology, along with your natural senses, to detect a gas leak and keep you safe. The three gases this monitor detects are carbon monoxide (CO), carbon dioxide (CO2), and hydrogen sulfide (H2S), with an extra sensor for particulate matter (pollution).
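
A monitor like this typically latches its alarms with hysteresis so a reading hovering near the limit does not chatter on and off. The thresholds below are placeholders for illustration; real limits come from safety standards, not from this sketch.

```python
# Assumed alarm thresholds in ppm (illustrative placeholders).
THRESHOLDS_PPM = {"CO": 35, "CO2": 5000, "H2S": 10}

def alarm_state(prev_alarms, readings_ppm, clear_margin=0.8):
    """Latch alarms with hysteresis: trip at the threshold, and
    clear only once the reading falls below clear_margin * threshold."""
    alarms = set()
    for gas, limit in THRESHOLDS_PPM.items():
        ppm = readings_ppm.get(gas, 0)
        tripped = ppm >= limit
        held = gas in prev_alarms and ppm >= clear_margin * limit
        if tripped or held:
            alarms.add(gas)
    return alarms
```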

 

View the PDF for Hazardous Gas Monitor

Historical VR – Digital Recording of Indoor Spaces

Brandon Merluzzo, Jenna Rice, Shafi Sheikh
Faculty Advisor: Kai Ren

Virtual reality (VR) technology has advanced significantly in recent years. New VR technology provides potential solutions to existing modern problems. An instance of a modern problem is the need to preserve the indoor environments of aging historical buildings, given their cultural significance and the fact that physically maintaining these buildings can be both expensive and time consuming. Virtual indoor environments can be explored using a VR headset, and the user can access them from another location. Current VR technology allows indoor spaces to be digitally recorded with high resolution and detail, so it can be applied to a variety of different applications. However, modern industry lacks an autonomous and versatile device to efficiently map interior spaces. Today's methods of indoor mapping typically require human intervention, which makes the process time consuming and expensive. The scope of this project is to develop an autonomous device capable of spatially mapping indoor spaces at a reasonable cost, with the capacity to be used for a diverse range of applications. The project consists of three main components: a wheeled mobile platform (WMP), a stereo camera, and a pan/tilt mechanism for the stereo camera. Each component is part of a larger system aimed at autonomous digital recording. The complete system is able to move autonomously through a space while the stereo camera, rotated by the pan/tilt mechanism, continuously maps the space. The system then produces a textured 3-dimensional (3D) mesh of the recorded indoor space.

 

View the PDF for Historical VR – Digital Recording of Indoor Spaces

Locating Missing Planes Underwater

Peter Chau, Thomas Kirwan, Simon Trieu
Faculty Advisor: Kai Ren

Air travel may be one of the fastest and most efficient modes of transportation, but it is not 100 percent risk free. Accidents happen, and many people die in plane crashes. When planes crash on land, they are usually found swiftly, but when they go down over water, the wreckage can be lost very easily. This can be seen in Malaysia Airlines Flight 370, which vanished on the 8th of March, 2014 and has yet to be found. As a result, awareness has grown of the need to locate planes that have gone missing underwater. The project aims to embed a computer onto the flight data recorder that will utilize lasers to transmit data, developing an underwater Wi-Fi system. With successful implementation of this embedded computer, the work of search and rescue teams would be made much easier.

 

View the PDF for Locating Missing Planes Underwater

Locating Missing Planes Underwater Project Video

Watch the Locating Missing Planes Underwater video here!

Locating Missing Planes Underwater Project Video Watch on YouTube

Mental Nest

Ryan Gemos, Sterling Pilkington, Naomi Torre Cardenas
Faculty Advisor: Aaron Carpenter

A person’s mental health affects every aspect of their life. As the world is recovering from Coronavirus Disease 2019 (COVID-19), the mental health of isolated hospital patients, their families, and healthcare workers is becoming more important than ever. During this time, many hospitals throughout the world have turned to services such as Zoom for communication. However, these services can be expensive and difficult to implement. They also do not contribute to the quality of care, and families may still feel anxious. As for healthcare workers, poor working conditions can cause these staff members to feel burnout. Overall, these three parties are experiencing a decline in mental health and current telehealth solutions are not enough to improve this. The current proposal for Mental Nest is a mobile application available on iOS and Android. Each user type will have several features that provide mental health resources and allow each user to communicate with others going through the same experience. For example, all three user types can join online communities or chat anonymously about their experiences. With the patient’s permission, a nurse can also release the patient’s medical information to family members. To keep all this data accessible to users, it will all be stored in Firebase. With Mental Nest implemented in hospitals, its features should help improve patient mental health, give a sense of control, reduce the anxiety experienced by family members and help improve the healthcare worker experience.

 

View the PDF for Mental Nest

Mental Nest Project Video

Watch the Mental Nest video here!

Mental Nest Project Video Watch on YouTube

Museum Security Guard Assistant Tool

Austin Lum, William Nguyen
Faculty Advisor: Aaron Carpenter

Many museums face the daunting possibility of one of their expensive art pieces being stolen. In some cases, museums never retrieve their valuable art pieces and can only hope they will be returned one day. The current state of security involves closed-circuit television (CCTV) cameras constantly watching the artwork. This tends to be inefficient because the security guard's attention may be focused elsewhere, and the cameras will only record the aftermath of the theft, without any prior notification. The proposed solution will incorporate CCTV cameras so that the security guard is notified in advance rather than viewing the aftermath. The program will integrate supervised learning and deep learning to improve its accuracy. Another proposed solution uses a "spot the difference" approach: the video feed is placed side-by-side with an image of the object of interest, and a notification is raised when the object can no longer be seen.
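
The "spot the difference" idea can be sketched as a region comparison between the live frame and a reference image of the artwork: if the watched region diverges too far from the reference, the piece is presumed missing or obscured. The tolerance is a placeholder, and a deployed system would add lighting compensation.

```python
import numpy as np

def artwork_present(frame, reference, region, max_mean_diff=25.0):
    """Compare the watched region of the live frame against the
    reference image. `region` is (y0, y1, x0, x1) in pixels; return
    True while the mean absolute gray-level difference stays small."""
    y0, y1, x0, x1 = region
    live = frame[y0:y1, x0:x1].astype(np.float64)
    ref = reference[y0:y1, x0:x1].astype(np.float64)
    return float(np.mean(np.abs(live - ref))) <= max_mean_diff
```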

 

View the PDF for Museum Security Guard Assistant Tool


Photodynamic Therapy Device

Preston Watson, Michael Kearns, Garry Ingles
Faculty Advisor: Filip Cuckov

Oral cancer accounts for about thirty percent of cancers in India, placing the country second in the world for this disease. The rural areas of India, in which oral cancer is most prevalent, typically do not have access to oral cancer treatment due to the lack of medical infrastructure, leading to poor treatment outcomes. Photodynamic therapy is a non-invasive treatment that photosensitizes cancer cells and destroys them by exploiting the photochemical properties of the sensitized tissue with proper irradiance from a light source. Our solution is a device that prioritizes battery life, portability, and reliability to provide resource-limited communities with adequate health care outcomes. Building upon previous hardware to enable this functionality, our work focuses on the development of the software stack required to properly utilize all custom manufactured components. These custom components consist of an embedded microcontroller on a motherboard with a proportional-integral-derivative circuit, a battery charging circuit, a daughter board for laser control, an external liquid crystal display, and additional components for serial communication and interfacing. Our software stack implementation consists of three levels: device drivers that enable the communication between hardware, a real-time operating system that manages the board's resources and serves as middleware, and a graphical user interface that enables user navigation throughout the system. The integration of the software stack has prepared the device for clinical and lab trials and implements a framework to spur further development.
Keywords: Photodynamic therapy, Oral Cancer, Software Stack, Device Drivers, Real Time Operating System.
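
The proportional-integral-derivative control mentioned above is implemented as a circuit on the motherboard; the discrete loop below just illustrates the PID control law itself, with placeholder gains rather than the device's tuned values.

```python
class PID:
    """Discrete PID loop: output = kp*e + ki*integral(e) + kd*de/dt.
    Gains and timestep here are illustrative placeholders."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        """Advance the loop one timestep and return the control output."""
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In the device, a loop of this form would hold laser irradiance at the prescribed level as battery voltage and temperature drift.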

 

View the PDF for Photodynamic Therapy Device

Photodynamic Therapy Device Project Video

Watch the Photodynamic Therapy Device video here!

Photodynamic Therapy Device Project Video Watch on YouTube

Prevention of Musculoskeletal Injuries in Nurses

Meaghan Silverman, Anna White
Faculty Advisor: James McCusker

The purpose of this project was to reduce the workload for nurses by identifying one of the biggest obstacles they face in doing their jobs. This was determined to be back injuries, as nurses have one of the highest workplace injury rates of any profession. These injuries largely come from lifting patients, specifically when they've slid down in their bed and need to be "boosted" back up. This is typically a two-person job, but with an overworked staff it often ends up falling on one person. This is where the injury happens, as the movement and angle of movement involved in lifting someone can be very awkward and easily cause a back or other musculoskeletal injury. This is where the project comes in. This project utilizes an array of force sensing resistors underneath the patient to detect when the patient has slid down the bed, and it will alert nursing staff that they need to come to the room and operate the machine. Once a nurse has arrived, they will be able to operate the second half of the device. The second half is a motor-operated spool that is attached to a sheet under the patient and will be able to pull the patient back up the bed for the nurse. Once the patient is in an appropriate position, the sensors will let the nurse know it is time to stop the operation of the motor, and their job is done. This creates a hands-off approach to boosting patients, which in turn helps reduce nurse injuries.
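
One way such a sensor array can decide that a patient has slid down is to track the pressure centroid along the bed: when the weighted average of the sensor rows moves too far toward the foot, the alert fires. This centroid rule and its numbers are an assumed illustration, not the project's actual detection logic.

```python
import numpy as np

def patient_slid_down(pressures, row_positions_cm, limit_cm=30.0):
    """Weighted centroid of the force-sensor rows (head of bed = 0 cm).
    Flag a slide when the pressure centroid passes `limit_cm` toward
    the foot of the bed. Positions and limit are placeholders."""
    pressures = np.asarray(pressures, dtype=float)
    if pressures.sum() == 0:
        return False  # no patient detected
    centroid = float(np.dot(pressures, row_positions_cm) / pressures.sum())
    return centroid > limit_cm
```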

 

View the PDF for Prevention of Musculoskeletal Injuries in Nurses

Real-Time Robotic Control using Programmable Logic Controllers for Automated Warehouse Operations

Daniel Padula, Devin Taylor, Brandon Walsh
Faculty Advisor: Filip Cuckov

Shipping and delivery services have become increasingly important to today's economy. Now more than ever, consumers and businesses rely heavily on fast and reliable shipping, which drives the need to lower costs and increase efficiency. Automated warehouses address these issues, cutting down on waste and labor costs. Although these warehouses are more efficient, they tend to be complex, expensive, and difficult to implement in existing structures. The Logic Pallet and Vessel (LPV) seeks to fix these issues by automating the pallet rather than the entire warehouse environment, eliminating the need to retrofit existing warehouse layouts. The LPV utilizes a type of Programmable Logic Controller (PLC) called the PLCnext and a variety of sensors, such as a nine-axis Inertial Measurement Unit, load cells, motor encoders, and an assortment of ultrasonic and infrared sensors. The incorporation of the PLCnext allows the design to be simple and robust. By processing raw sensor data on the PLCnext in real time, anti-tipping and motion-control algorithms were created that allow the pallet to safely navigate through any environment. Another unique feature of the LPV is its mecanum wheels, which, combined with the newly developed movement controls, allow full 360-degree motion and maneuvering. The LPV design is simple and will meet any warehouse needs, providing a cheaper path into warehouse automation.
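The 360-degree maneuvering that mecanum wheels enable follows from a standard inverse-kinematics relation: each wheel's speed is a signed combination of the desired forward, strafe, and rotational velocities. The sketch below uses a common textbook convention and placeholder chassis dimensions; the LPV's own control code and sign conventions may differ.

```python
def mecanum_wheel_speeds(vx, vy, omega,
                         half_length=0.3, half_width=0.25, wheel_radius=0.05):
    """Angular wheel speeds (rad/s) for a desired body velocity.

    vx: forward velocity (m/s), vy: leftward strafe velocity (m/s),
    omega: counter-clockwise rotation rate (rad/s).
    Chassis dimensions here are placeholder values.
    """
    k = half_length + half_width
    return {
        "front_left":  (vx - vy - k * omega) / wheel_radius,
        "front_right": (vx + vy + k * omega) / wheel_radius,
        "rear_left":   (vx + vy - k * omega) / wheel_radius,
        "rear_right":  (vx - vy + k * omega) / wheel_radius,
    }
```

Pure strafing (vx = 0, omega = 0) produces the characteristic diagonal pattern: the front-left and rear-right wheels spin one way while the front-right and rear-left spin the other.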

 

View the PDF for Real-Time Robotic Control using Programmable Logic Controllers for Automated Warehouse Operations

Real-Time Robotic Control using Programmable Logic Controllers for Automated Warehouse Operations Project Video

Watch the Real-Time Robotic Control using Programmable Logic Controllers for Automated Warehouse Operations video here!


Reforestation Drone Mapping Coverage System

Eric H. Spooner, Stavros C. Ioakimidis, Yasser Alghamdi
Faculty Advisor: Douglas Dow

Deforestation is impairing the quality and viability of life on Earth. Deforestation is caused directly by agricultural expansion, logging, strip mining, and urbanization. Deforestation is indirectly caused by forest fires, natural disasters, pollution, and mismanagement of natural resources. Reforestation efforts have been evolving over the past century to be more efficient and effective. Human planters can walk around and plant the seedlings or seeds by hand, but this process can be slow, inefficient, and dangerous. Modern technologies such as drones have reportedly been used for seeding. Reforestation has an opportunity to become more efficient through the use of aerial vehicles such as drones, paired with a Geographical Information System (GIS). The purpose of this project was to develop and test a tracking system for drones, covering their location, status, and the area that was seeded. A Global Positioning System (GPS) was used to obtain location and related parameters. The prototype used a Raspberry Pi 4 (RPi) and a NEO-6M GPS module. A custom Python program was developed to run on the RPi and transmit GPS and seeding status to a local gateway. For prototype testing, the RPi connected to a WiFi hotspot, and a Windows laptop computer was used as the local gateway. Testing verified location and path formation. Further testing and development will be required toward making a complete drone GIS system. Such a system would help reforestation efforts.
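The NEO-6M reports position as NMEA sentences over serial, so a program like the one described would need to convert the degrees-and-minutes fields of a $GPGGA sentence into decimal degrees before sending them to the gateway. A minimal parser along those lines (not the project's actual code) might look like:

```python
def parse_gga(sentence):
    """Parse a $GPGGA NMEA sentence into (latitude, longitude) in decimal degrees.

    Returns None when the sentence reports no GPS fix.
    """
    fields = sentence.split(",")
    # Field 6 is the fix-quality indicator; "0" means no fix yet.
    if not fields[0].endswith("GGA") or fields[6] == "0":
        return None
    lat_raw, lat_hem = fields[2], fields[3]
    lon_raw, lon_hem = fields[4], fields[5]

    def to_decimal(raw, hemisphere, degree_digits):
        # NMEA packs coordinates as ddmm.mmmm (lat) / dddmm.mmmm (lon).
        degrees = float(raw[:degree_digits])
        minutes = float(raw[degree_digits:])
        value = degrees + minutes / 60.0
        return -value if hemisphere in ("S", "W") else value

    return to_decimal(lat_raw, lat_hem, 2), to_decimal(lon_raw, lon_hem, 3)
```

In the prototype described above, the decoded coordinates would then be sent over the WiFi link to the laptop gateway along with the seeding status.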

 

View the PDF for Reforestation Drone Mapping Coverage System

Reforestation Drone Mapping Coverage System Project Video

Watch the Reforestation Drone Mapping Coverage System video here!


Sanavest

Andrew Campagna, Andrew Peterson, Jacob Christie
Faculty Advisor: Kai Ren

The deaf and hearing-impaired struggle with hearing all their lives. Some live with this disadvantage; others can restore their hearing with medical procedures and devices. From cochlear implants to bone-anchored implants to full ear surgery, each option carries a cost many would find too high. Even with cochlear implants, users find it difficult to listen to music: speakers, both personal and concert, interfere with the devices and as a result cause discomfort. This project aims to bridge the gap between the hearing-impaired and music. The system, known as the Sanavest, is a wearable device that allows users to experience music in a tactile way. The device accepts input from a microphone or an auxiliary jack. These inputs are processed through a series of filters that separate the signal into different paths. These paths are further split between the left and right sides of the user's back, each carrying a filtered part of the music signal. By separating the frequencies and placing them in staggered positions down the back, the wearer is able to experience the depth of the music they wish to listen to.
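The filter bank described above can be approximated digitally with first-order filters: a low path below the first crossover, a mid path between the crossovers, and a high path above the second. The crossover frequencies and filter order below are illustrative assumptions; the Sanavest's actual filters may be analog and of a higher order.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """First-order low-pass filter (a crude stand-in for the vest's filters)."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def split_bands(samples, crossovers=(250.0, 2000.0), sample_rate=44100):
    """Split audio into low/mid/high paths, one per actuator group on the vest."""
    low = one_pole_lowpass(samples, crossovers[0], sample_rate)
    below_high = one_pole_lowpass(samples, crossovers[1], sample_rate)
    # Mid = everything under the upper crossover minus the low band;
    # high = the residual above the upper crossover.
    mid = [b - l for b, l in zip(below_high, low)]
    high = [x - b for x, b in zip(samples, below_high)]
    return {"low": low, "mid": mid, "high": high}
```

A useful property of this subtractive split is that the three paths sum back to the original signal sample for sample, so no part of the music is lost across the actuator groups.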

 

View the PDF for Sanavest

Situational Awareness Device for Self Monitoring by Ambulatory Team for Health and Location

Kia Aalaei, Vincent Ciampa, Ryan Hobart
Faculty Advisor: Douglas Dow

Groups of people walking or working outside in unfamiliar surroundings may become lost or separated from one another. Examples of such groups include soldiers, first responders, outdoor work teams, relief workers, wilderness firefighters, and arctic or alpine hikers. Available methods of communication include the Apple Watch, walkie-talkies, and Fitbits, but none provides complete group-cohesion assistance. For example, an Apple Watch can track the health of the user and communicate, but does not automatically display the health and location of the other team members. A Fitbit tracks the health of the user but does not share it with other team members. The purpose of this project was to develop and test a prototype that would support group cohesion by communicating each member's health and location and displaying that information to the team members. The modules used included the Arduino MEGA 2560, the Adafruit Ultimate GPS, a 3.5” 320x480 TFT touchscreen breakout board, a Pulse Sensor Amped, and nRF24L01+ wireless transceiver modules that transmit and receive data. Health was monitored by heart rate. Location was monitored by latitude and longitude, and the bearing direction and distance were calculated for a selected team member. The information was then displayed on the touchscreen with an arrow for the bearing. For the graphical user interface, the GUIslice library was used. Early testing involved comparing the heart rate from the prototype to values from a standard pulse oximeter. Location was tested by comparing the coordinates from the prototype to those on Google Maps on a smartphone. Further testing and development are required to complete the system. Such a system would help team cohesion and safety.
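The bearing and distance shown on the touchscreen can be computed from two latitude/longitude pairs with the standard haversine and forward-azimuth formulas. The abstract does not give the project's exact math, so the following is a generic sketch:

```python
import math

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Return (bearing_degrees, distance_meters) from point 1 toward point 2."""
    R = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine great-circle distance
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    distance = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing (forward azimuth), normalized to 0-360 degrees
    y = math.sin(dlam) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlam))
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return bearing, distance
```

The bearing value would drive the on-screen arrow directly, and the distance would be shown alongside the selected teammate's heart rate.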

 

View the PDF for Situational Awareness Device for Self Monitoring by Ambulatory Team for Health and Location

Situational Awareness Device for Self Monitoring by Ambulatory Team for Health and Location Project Video

Watch the Situational Awareness Device for Self Monitoring by Ambulatory Team for Health and Location video here!


Smart Staff Managing System Using Facial Recognition

Abderrahman Boukaa, Mohamed Lahlou
Faculty Advisor: Filip Cuckov

Commonly used employee tracking technologies may not be suitable for hospital environments due to the inherent risks they present to the health of employees, such as fingerprint scanners, which may aid in the transmission of viruses, or RFID badges, which present cybersecurity risks. Alternatively, face recognition systems employ biometric software to identify a person from an image and, as a contactless, secure technology, are better suited for use in hospital settings. We present a facial recognition system capable of keeping track of staff inside and outside hospital facilities, programmed in Python and integrating open-source computer vision libraries (OpenCV) with machine learning. Our solution first encodes the user images used for identification before they are subjected to face recognition procedures. The recognition technique consists of two algorithms that work simultaneously: one responsible for face detection and the other for face recognition. After the validated faces and encodings are loaded, a 128-dimensional face encoding is generated for each face detected in the input picture and compared against them. Each identified face is then logged in a database, an Excel spreadsheet recording the time and name of the staff member.
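Matching a detected face against the loaded encodings typically reduces to a nearest-neighbor search in the 128-dimensional encoding space. The sketch below assumes dlib-style encodings and the commonly used 0.6 distance tolerance; the project's exact matching logic is not specified in the abstract.

```python
import math

def match_encoding(known, candidate, tolerance=0.6):
    """Return the name of the closest known encoding within tolerance, else None.

    known: dict mapping staff name -> face encoding (list of floats;
           128-dimensional for dlib-style encodings).
    candidate: encoding of a face detected in the input picture.
    tolerance: maximum Euclidean distance counted as a match (0.6 is the
               commonly used default for dlib-style encodings).
    """
    best_name, best_dist = None, tolerance
    for name, encoding in known.items():
        dist = math.dist(encoding, candidate)  # Euclidean distance
        if dist <= best_dist:
            best_name, best_dist = name, dist
    return best_name
```

A match would then append one row (name, timestamp) to the Excel spreadsheet that serves as the tracking database.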

 

View the PDF for Smart Staff Managing System Using Facial Recognition

Smart Staff Managing System Using Facial Recognition Project Video

Watch the Smart Staff Managing System Using Facial Recognition video here!


Softball Pitching Refinement System

Brionna Myers, Andrew Whyte
Faculty Advisor: James McCusker

Pitching-related injuries in softball are scarcely researched and receive far less support than their baseball counterparts. The false idea that softball pitching is a natural motion that does not cause injury has further suppressed the need for research. Poor pitching form in softball can lead to a series of devastating injuries to the shoulder, elbow, hip, and foot. The softball pitching refinement system is a Unity program utilizing the Xbox One Kinect to collect motion data of the body during a pitch and display the recognized movement in a virtual three-dimensional environment. This representation allows a pitching coach to analyze a player's pitch with a focus on key parts of the body to ensure the player maintains proper form throughout the pitch. The recording also shows the angle of each elbow to further assist the user in correcting minute details. The added insight into how each key part of the body is moving allows coaches to prevent the formation of bad pitching habits, thus decreasing the chances of future injuries.

 

View the PDF for Softball Pitching Refinement System

Tiregenie - A Web-based software solution to aid tire dealers to maximize their profits from tire incentive programs

Jesus Esgueva, Marc Ghannam
Faculty Advisor: Filip Cuckov

Tire shops buy tires from multiple wholesalers, which run year-long incentive programs. The programs offer rebates based on tire models and the number of tires sold, and the rebates increase as sales climb through the programs' tiers. Tire dealers currently struggle to keep track of their progress and fail to take full advantage of the incentive programs. TireTutor wants to maximize its clients' profits by helping them make the most of these incentive programs. We have created a tool that shows tire dealers their progress on all the incentive programs in which they participate. The tool was integrated into TireTutor's Customer Relationship Management (CRM) software, which is web-based and built with Next.js in TypeScript on the frontend and Django in Python on the backend. The tool analyzes the tires purchased from wholesalers to calculate each dealer's progress and provides sales suggestions based on past sales to boost the incentives. All the tire manufacturer programs have different formats. Our algorithm breaks the different incentive programs down into one generalized program, which is then used to calculate the tire dealer's progress and help them strategize to maximize their profits.
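A generalized incentive program of the kind described can be modeled as a sorted list of (units required, rebate) tiers; a dealer's progress is then the highest tier reached plus the units remaining to the next one. The tier structure below is a hypothetical illustration, not an actual manufacturer program:

```python
def program_progress(tiers, units_sold):
    """Summarize a dealer's position in a generalized incentive program.

    tiers: list of (units_required, rebate_per_tire) pairs, sorted ascending.
    units_sold: tires sold so far under the program.
    Returns (current_rebate, units_to_next_tier); units_to_next_tier is None
    once the top tier has been reached.
    """
    current_rebate, units_to_next = 0.0, None
    for required, rebate in tiers:
        if units_sold >= required:
            current_rebate = rebate
        elif units_to_next is None:
            units_to_next = required - units_sold
    return current_rebate, units_to_next
```

Reducing every manufacturer's format to this one shape is what lets a single progress view, and a single suggestion engine, cover all the programs a dealer participates in.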

 

View the PDF for Tiregenie - A Web-based software solution to aid tire dealers to maximize their profits from tire incentive programs

Tiregenie - A Web-based software solution to aid tire dealers to maximize their profits from tire incentive programs Project Video

Watch the Tiregenie - A Web-based software solution to aid tire dealers to maximize their profits from tire incentive programs video here!


UAV Signal Guard

Ji Yei Choi, Congni Shi, Xuanzhe Yang
Faculty Advisor: Aaron Carpenter

UAVs are employed for various functions across the globe, but they face a fundamental challenge: data insecurity. It is possible for external parties to intercept the communication between the transmitter and the receiver. Some more advanced UAV products use Frequency Hopping Spread Spectrum (FHSS) to transmit their control signal, which means the carrier frequency of the signal changes from time to time during the transmission process; in other words, the carrier frequency is “hopping”. Because of the unpredictability of its carrier frequency, FHSS protects the control signal to some extent. However, the frequency does not change randomly; it hops according to a hopping pattern, which needs to be transmitted wirelessly from the controller to the UAV. Before the controller sends a data packet to the UAV, it first sends a wireless signal that indicates which carrier frequency the data packet is on. The fact that the frequency hopping signal is transmitted wirelessly gives hackers the possibility of decoding the hopping pattern or faking a frequency hopping signal to override control of the UAV. Our senior design team set out to improve the security of the existing FHSS communication system to solve this problem. In our new design, both the transmitter and the receiver are equipped with Frequency Hopping (FH) signal generators. These FH signal generators are synchronized and coded with the same algorithm to generate FH signals. In this manner, the receiver generates an exact replica of the FH signal that the transmitter generates, so the FH signal no longer needs to be transmitted wirelessly from the transmitter to the receiver. In addition, we employed a cascaded LFSR configuration to make the frequency hopping pattern more unpredictable.
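The synchronized-generator idea can be illustrated with two software LFSRs whose output bits are combined to select each hop channel. The register widths, taps, and combining step below are illustrative choices, not the project's actual configuration; the key property is that a transmitter and receiver seeded identically derive the same channel sequence without ever sending it over the air.

```python
class LFSR:
    """Fibonacci linear-feedback shift register.

    taps: 0-based bit positions XORed together to form the feedback bit.
    """
    def __init__(self, seed, taps, nbits):
        self.state, self.taps, self.nbits = seed, taps, nbits

    def step(self):
        fb = 0
        for t in self.taps:
            fb ^= (self.state >> t) & 1
        self.state = ((self.state << 1) | fb) & ((1 << self.nbits) - 1)
        return fb

def next_channel(lfsr_a, lfsr_b, num_channels=50):
    """Combine two LFSRs' output bits to index the next hop channel.

    XORing the streams is one simple way to combine registers; the project's
    exact cascade topology may differ.
    """
    bits = 0
    for _ in range(6):  # gather 6 bits -> up to 64 raw values
        bits = (bits << 1) | (lfsr_a.step() ^ lfsr_b.step())
    return bits % num_channels
```

Because both ends run the same deterministic generators from the same seed, an eavesdropper who never learns the seed cannot predict the next channel, and there is no hopping-pattern transmission to intercept or spoof.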

 

View the PDF for UAV Signal Guard

UAV Signal Guard Project Video

Watch the UAV Signal Guard video here!


Workplace Automated Comfort Control - Lighting

Alhassan Kareem, Peter Klembczyk, Thurein Myint
Faculty Advisor: Aaron Carpenter

Sponsored by EYP

Lighting, heating, and cooling account for about 45% of a building's total energy consumption. These factors also affect the productivity of the individuals working inside the building. A system in which each individual can set up lighting preferences that adapt to luminance changes throughout the day and adjust automatically, without any manual intervention, would reduce distractions for workers and increase their comfort level, leading to optimized productivity. To achieve this, multiple light sensors attached to microcontrollers record the lighting conditions, and this data is sent to a centralized processing unit. The centralized processing unit communicates with the UI (where users set their preferences) and the data storage (where all data is kept in databases) to adapt and send commands to the light control system, where changes to the lighting can be made. To achieve energy-efficient, long-range communication, a mesh network is used in the wireless sensor network.
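The automatic adjustment can be sketched as a simple closed-loop rule on the centralized processing unit: compare the sensed illuminance against the user's preference and nudge the fixture's dimming level, with a deadband so small daylight fluctuations are ignored. The step size and deadband below are placeholder values, not the project's tuning:

```python
def adjust_dim_level(sensed_lux, preferred_lux, current_level,
                     step=0.05, deadband=25.0):
    """Nudge a fixture's dimming level (0.0-1.0) toward the preferred illuminance.

    The deadband keeps the loop from reacting to minor luminance changes
    throughout the day, which would itself be a distraction.
    """
    if sensed_lux < preferred_lux - deadband:
        return min(1.0, current_level + step)  # too dark: brighten a step
    if sensed_lux > preferred_lux + deadband:
        return max(0.0, current_level - step)  # too bright: dim a step
    return current_level                       # within the deadband: hold
```

Each pass of the loop would emit the new level as a command to the light control system over the mesh network.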

 

View the PDF for Workplace Automated Comfort Control - Lighting

Workplace Automated Comfort Control - Lighting Project Video

Watch the Workplace Automated Comfort Control - Lighting video here!


Workplace Environment Comfort Control – HVAC

Andrew Farrell, Exhidio Gjuraj, Muhammad Sambo
Faculty Advisor: Filip Cuckov

Sponsored by EYP

For employees who spend their workday in an indoor office, being able to input their environmental preferences makes them feel more comfortable at work and, research shows, raises their productivity. An occupant-centric system ensures that employees' personal comfort settings are honored and that only occupied spaces are conditioned, which reduces Heating, Ventilation, and Air Conditioning (HVAC) energy costs. Our solution was to develop and test a prototype system which monitors an office's environmental conditions, takes user inputs via a user interface, and correlates the data to control the HVAC system. The system utilizes temperature, humidity, air quality, and motion sensors and a radio frequency identification (RFID) reader controlled by an Arduino Nano microcontroller. The microcontroller reads the sensors every two minutes and packages the data into JavaScript Object Notation (JSON), which is transmitted to the Application Programming Interface (API) via the Hypertext Transfer Protocol (HTTP) over a wireless connection. In the API, the users' preferences are recalled from the database and the data is processed to drive a Building Automation and Control Networks (BACnet) command to the HVAC system. Finally, users can log into a website where they can see real-time performance of the system and give feedback about their comfort levels.
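The two-minute reporting cycle described above amounts to serializing one round of sensor readings as JSON before posting it to the API over HTTP. The field names below are hypothetical; the abstract does not specify the actual packet schema:

```python
import json

def build_sensor_packet(node_id, temperature_c, humidity_pct,
                        air_quality, occupied):
    """Package one sensor reading cycle as a JSON string for the API.

    Field names are illustrative assumptions, not the project's schema.
    """
    packet = {
        "node": node_id,              # which office node sent the reading
        "temperature_c": temperature_c,
        "humidity_pct": humidity_pct,
        "air_quality_index": air_quality,
        "occupied": occupied,         # from the motion sensor / RFID reader
    }
    return json.dumps(packet)
```

On the server side, the API would pair the decoded packet with the occupant's stored preferences before issuing the corresponding BACnet command.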

 

View the PDF for Workplace Environment Comfort Control – HVAC

Workplace Environment Comfort Control – HVAC Project Video

Watch the Workplace Environment Comfort Control – HVAC video here!
