# Project 23: Portable RAW Reconstruction Accelerator for Legacy CCD Imaging

Team Members: Arnav Gaddam, Guyan Wang, Yuhong Chen

TA: Gerasimos Gerogiannis

Documents: design_document1.pdf, other1.docx, other2.pdf
# **RFA: Portable RAW Reconstruction Accelerator for Legacy CCD Imaging**

Group Members: Guyan Wang, Yuhong Chen

## **1\. Problem Statement**

**The "Glass-Silicon Gap":** Many legacy digital cameras (circa 2000-2010) are equipped with premium optics (Leica, Zeiss, high-grade Nikon/Canon glass) that outresolve their internal processing pipelines. While the optical pathway is high-fidelity, the final image quality is bottlenecked by:

- **Obsolete Signal Chains:** Early-stage Analogue-to-Digital Converters (ADCs) and readout circuits introduce significant read noise and pattern noise.
- **Destructive Processing:** In-camera JPEGs destroy dynamic range and detail. Even legacy RAW files are often processed with rudimentary demosaicing algorithms that fail to distinguish high-frequency texture from sensor noise.
- **Usability Void:** Users seeking the unique "CCD look" are forced to rely on cumbersome desktop post-processing workflows (e.g., Lightroom, Topaz), preventing a portable, shoot-to-share experience.

## **2\. Solution Overview**

**The "Digital Back" External Accelerator:** We propose a standalone, handheld hardware device, a "smart reconstruction box," that interfaces physically with legacy CCD cameras. Instead of relying on the camera's internal image processor, this device ingests the raw sensor data (CCD RAW) and applies a hybrid reconstruction pipeline.

The core innovation is a **Hardware-Oriented Hybrid Pipeline**:

- **Classical Signal Processing:** Handles deterministic error correction (black level subtraction, gain normalization, hot pixel mapping).
- **Learned Estimator (AI):** A lightweight Convolutional Neural Network (CNN) or Vision Transformer model optimized for microcontroller inference (TinyML). This model does not "hallucinate" new details but acts as a probabilistic estimator to separate signal from stochastic noise based on the physics of CCD sensor characteristics.
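
As a rough illustration, the deterministic half of the pipeline reduces to a few per-pixel operations. The sketch below is a host-side prototype; the black level, white level, hot-pixel map, and function names are illustrative assumptions, not the actual firmware:

```python
# Sketch of the Stage-1 classical corrections on a flat list of raw
# sensor values. All constants here are assumed, not measured.

BLACK_LEVEL = 128        # assumed sensor black level (DN)
WHITE_LEVEL = 4095       # assumed 12-bit ADC full scale
HOT_PIXELS = {17}        # assumed factory-calibrated hot-pixel indices

def correct_raw(pixels, gain=1.0):
    """Black-level subtraction, gain normalization, hot-pixel replacement."""
    out = []
    for i, p in enumerate(pixels):
        if i in HOT_PIXELS:
            # replace a known hot pixel with its left neighbor (simplistic)
            p = pixels[i - 1] if i > 0 else BLACK_LEVEL
        # subtract black level, clamp at zero, normalize to [0, 1]
        v = max(p - BLACK_LEVEL, 0) * gain / (WHITE_LEVEL - BLACK_LEVEL)
        out.append(min(v, 1.0))
    return out
```

In firmware these steps would run as fixed-point loops (or DMA2D/CORDIC-assisted passes) over the Bayer mosaic rather than Python lists.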

The device will feature a touchscreen interface for file selection and "film simulation" style filter application, targeting an output quality perceptually comparable to a modern full-frame sensor (e.g., Sony A7 III) in terms of dynamic range recovery and noise floor.

## **3\. Solution Components**

### **Component A: The Compute Core (Embedded Host)**

- **MCU:** STMicroelectronics **STM32H7 Series** (e.g., STM32H747/H757).
- _Rationale:_ Dual-core architecture (Cortex-M7 + M4) allows separation of UI logic and heavy DSP operations. The Chrom-ART Accelerator helps with display handling, while the high clock speed supports the computationally intensive reconstruction algorithms.
- **Memory:** External SDRAM/HyperRAM expansion (essential for buffering full-resolution RAW files, e.g., 10MP-24MP) and high-speed QSPI Flash for AI model weight storage.

### **Component B: Connectivity & Data Ingestion Interface**

- **Physical I/O:** USB OTG (On-The-Go) Host port.
- _Function:_ The device acts as a USB Host, mounting the camera (or the camera's card reader) as a Mass Storage Device to pull RAW files (.CR2, .NEF, .RAF, .DNG).
- **Storage:** On-board MicroSD card slot for saving processed/reconstructed JPEGs or TIFFs.
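
Once the camera is mounted as mass storage, ingestion amounts to enumerating the filesystem and filtering by extension. A minimal host-side sketch (the extension set matches the formats listed above; the directory listing is assumed to come from the USB mount):

```python
# Sketch of RAW-file discovery on a mounted mass-storage volume.

RAW_EXTENSIONS = {".cr2", ".nef", ".raf", ".dng"}

def find_raw_files(filenames):
    """Return the subset of filenames with a supported RAW extension."""
    found = []
    for name in filenames:
        dot = name.rfind(".")
        if dot != -1 and name[dot:].lower() in RAW_EXTENSIONS:
            found.append(name)
    return found
```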

### **Component C: Hybrid Reconstruction Algorithm**

- **Stage 1 (DSP):** Linearization, dark frame subtraction (optional calibration), and white balance gain application.
- **Stage 2 (NPU/AI):** A quantization-aware trained model (likely TFLite for Microcontrollers or STM32Cube.AI) trained specifically on _noisy-CCD-to-clean-CMOS_ image pairs.
- _Task:_ Joint Demosaicing and Denoising (JDD).
- **Stage 3 (Color):** Application of specific "Film Looks" (LUTs) selected by the user via the UI.
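
Stage 3 reduces to a lookup-table mapping per channel. A minimal sketch with an assumed 4-entry tone curve; a real "film look" would use a 256-entry curve per channel or a 3D LUT:

```python
# Sketch of applying a 1D tone-curve LUT with linear interpolation.
# The 4-entry LUT is an illustrative assumption.

FADED_SHADOWS_LUT = [32, 96, 176, 255]  # assumed "film look" tone curve

def apply_lut(value, lut):
    """Map an 8-bit value through a LUT with linear interpolation."""
    pos = value / 255 * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return round(lut[lo] * (1 - frac) + lut[hi] * frac)
```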

### **Component D: Human-Machine Interface (HMI)**

- **Display:** 2.8" to 3.5" Capacitive Touchscreen (SPI or MIPI DSI interface).
- **GUI Stack:** TouchGFX or LVGL.
- _Workflow:_ User plugs in camera -> Device scans for RAWs -> User selects thumbnails -> User chooses "Filter/Profile" -> Device processes and saves to SD card.
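
The workflow above is a simple linear state machine; a sketch with illustrative state names (these are assumptions, not TouchGFX/LVGL API):

```python
# Sketch of the HMI workflow as a linear state machine.

WORKFLOW = ["WAIT_FOR_CAMERA", "SCAN_RAWS", "SELECT_THUMBNAILS",
            "CHOOSE_PROFILE", "PROCESS_AND_SAVE"]

def next_state(state):
    """Advance one workflow step; after saving, wrap back to waiting."""
    i = WORKFLOW.index(state)
    return WORKFLOW[(i + 1) % len(WORKFLOW)]
```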

## **4\. Criterion for Success**

To be considered successful, the prototype must meet the following benchmarks:

- **Quality Parity:** The output image, when blind-tested against the same scene shot on a modern CMOS sensor (Sony A7 III class), must show no statistically significant difference in perceived noise at an ISO 400-800 equivalent.
- **Edge Preservation:** The AI reconstruction must demonstrate a reduction in color moiré and false-color artifacts compared to standard bilinear demosaicing, without "smoothing" genuine texture (measured via MTF charts).
- **Latency:** Total processing time for a 10-megapixel RAW file must be under **15 seconds** on the STM32 hardware.
- **Universal RAW Support:** Successful parsing and decoding of at least two major legacy formats (e.g., Nikon .NEF from D200 era and Canon .CR2 from 5D Classic era).
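
The latency target implies a concrete per-pixel cycle budget. A back-of-envelope check, assuming a 480 MHz Cortex-M7 core (the exact clock depends on the chosen STM32H7 part):

```python
# Back-of-envelope throughput check for the 15-second latency target.

MEGAPIXELS = 10e6    # 10 MP RAW file
LATENCY_S = 15       # target total processing time
CLOCK_HZ = 480e6     # assumed Cortex-M7 clock

pixels_per_second = MEGAPIXELS / LATENCY_S        # required throughput
cycles_per_pixel = CLOCK_HZ / pixels_per_second   # cycle budget per pixel
```

Roughly 720 cycles per pixel for the whole pipeline is a tight budget for CNN inference, which motivates aggressive quantization, tiling, and offloading the deterministic DSP stages to hardware accelerators.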

## **5\. Alternatives**

- **Desktop Post-Processing (Software Only):**
- _Pros:_ Infinite computing power, established tools (DxO PureRAW), highly customized.
- _Cons:_ Destroys the portability of the photography experience; cannot be done "in the field." Users must also be proficient with the software's parameters, which requires self-directed training (not user-friendly).
- **Smartphone App (via USB-C dongle):**
- _Pros:_ Powerful processors (Snapdragon/A-Series), high-res screens, easy to use.
- _Cons:_ Lack of low-level control over USB mass-storage protocols for obscure legacy cameras; high friction in file management; operating-system overhead prevents bare-metal optimization of the signal pipeline; and smartphone processing algorithms are tuned for modern CMOS sensors rather than legacy CCDs.
- **FPGA Implementation (Zynq/Cyclone):**
- _Pros:_ Parallel processing could make reconstruction instant.
- _Cons:_ Significantly higher complexity, cost, and power consumption compared to an STM32 implementation; higher barrier to entry for a "mini project."

# Waste Bin Monitoring System

Benjamin Gao, Matt Rylander, Allen Steinberg

# Team Members:

- Matthew Rylander (mjr7)

- Allen Steinberg (allends2)

- Benjamin Gao (bgao8)

# Problem

Restaurants produce large volumes of waste every day, which, if not dealt with properly, can lead to problems such as overflowing waste bins, unpleasant odors, and customers questioning the cleanliness of the establishment. Managers of restaurants value cleanliness as one of their top priorities. Not only is the cleanliness of restaurants required by law, but it is also intrinsically linked to their reputation. Customers readily judge a restaurant by how clean it keeps its surroundings. A repulsive odor from a trash can, pests such as flies, roaches, or rodents drawn to a forgotten trash can, or even just the sight of a can overflowing with refuse can easily reduce the customer base of an establishment.

With this issue in mind, many restaurant owners and managers would likely purchase a device that helps them monitor the cleanliness of their restaurants. Checking a can today means an employee must leave their station, walk to a trash can that may be out of sight or far away (possibly through external weather conditions), and wash their hands before returning; a way to easily monitor the status of trash cans from the kitchen or another location would be convenient and save time for restaurant staff.

Fullness of each trash can isn’t the only motivating factor to change out the trash. Maybe the trash can is mostly empty, but is extremely smelly. People are usually unable to tell if a trash can is smelly just from sight alone, and would need to get close to it, open it up, and expose themselves to possible smells in order to determine if the trash needs to be changed.

# Solution

Our project will have two components: (1) distributed sensor tags on the trash cans, and (2) a central hub for collecting data and displaying the state of each trash can.

The sensor tags will be mounted to the top of a waste bin to monitor fullness of the can with an ultrasonic sensor, the odor/toxins in the trash with an air quality/gas sensor, and also the temperature of the trash can as high temperatures can lead to more potent smells. The tags will specifically be mounted on the underside of the trash can lids so the ultrasonic sensor has a direct line of sight to the trash inside and the gas sensor is directly exposed to the fumes generated by the trash, which are expected to migrate upward past the sensor and out the lid of the can.
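
Converting the ultrasonic echo time into a fill level is straightforward geometry; a sketch with an assumed bin depth and room-temperature speed of sound:

```python
# Sketch of fill-level estimation from an ultrasonic round-trip echo time.
# Bin depth and speed of sound are assumed constants.

SPEED_OF_SOUND_M_S = 343.0   # at ~20 degrees C
BIN_DEPTH_M = 0.60           # assumed lid-to-bottom depth

def fill_percent(echo_time_s):
    """Round-trip echo time -> distance to trash -> fullness percentage."""
    distance_m = SPEED_OF_SOUND_M_S * echo_time_s / 2  # one-way distance
    distance_m = min(max(distance_m, 0.0), BIN_DEPTH_M)
    return (1 - distance_m / BIN_DEPTH_M) * 100
```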

The central hub will have an LCD display showing all of the metrics described in the sensor tags, plus a flashing LED to alert workers when one of the waste bins needs attention. The hub will also need to be connected to the restaurant’s WiFi.

This system will give workers one less thing to worry about in their busy shifts and give managers peace of mind knowing that workers will be warned before a waste bin overflows. It will also improve the customer experience as they will be much less likely to encounter overflowing or smelly trash cans.

# Solution Components

## Sensor Tag Subsystem x2

Each trash can will be fitted with a sensor tag containing an ultrasonic sensor transceiver pair, a hazardous gas sensor, a temperature sensor, an ESP32 module, and additional circuitry necessary for the functionality of these components. The sensors will be powered with 3.3V or 5V DC from a wall adapter. A small hole will need to be drilled into the side of each trash can to accommodate the wall adapter output cord. They may also need to be connected to the restaurant’s WiFi.

- 2x ESP32-S3-WROOM

https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N16R2/16162644

- 2x Air Quality Sensor (ZMOD4410)

https://www.digikey.com/en/products/detail/renesas-electronics-corporation/ZMOD4410AI1R/8823799

- 2x Temperature/Humidity Sensor (DHT22)

https://www.amazon.com/HiLetgo-Digital-Temperature-Humidity-Replace/dp/B01DA3C452?source=ps-sl-shoppingads-lpcontext&ref_=fplfs&psc=1&smid=A30QSGOJR8LMXA#customerReviews

- 2x Ultrasonic Transmitter/Receiver

https://www.digikey.com/en/products/detail/cui-devices/CUSA-R75-18-2400-TH/13687422

https://www.digikey.com/en/products/detail/cui-devices/CUSA-T75-18-2400-TH/13687404

## Central Hub Subsystem

The entire system will be monitored from a central hub containing an LCD screen, an LED indicator light, and additional I/O modules as necessary. It will be based around an ESP32 module that communicates with the sensor tags over the restaurant’s WiFi or the ESP-NOW P2P protocol. The central hub will receive pings from the sensor tags at regular intervals; if it determines that one or more of the values (height of trash, air quality index, or temperature) is too high, it will notify the user. This information will be displayed on the hub’s LCD screen, and the LED indicator light on the hub will flash to alert the restaurant staff of the situation.
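
The hub's decision logic is a set of per-metric threshold comparisons. A sketch with illustrative thresholds and reading format; these values and field names are assumptions, not the final protocol:

```python
# Sketch of the hub's per-metric threshold check. Thresholds and the
# reading dictionary format are illustrative assumptions.

THRESHOLDS = {
    "fill_percent": 90.0,     # trash within a few inches of the lid
    "air_quality_index": 3,   # assumed IAQ-style scale for the gas sensor
    "temperature_c": 35.0,
}

def check_tag(tag_id, reading):
    """Return (tag_id, metric) pairs that exceed their alert threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        if reading.get(metric, 0) > limit:
            alerts.append((tag_id, metric))
    return alerts
```

Any non-empty result would drive the LCD status and the flashing LED.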

- 1x ESP32-S3-WROOM

https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N16R2/16162644

- 1x LCD Screen

https://www.amazon.com/Hosyond-Display-Compatible-Mega2560-Development/dp/B0BWJHK4M6/ref=sr_1_4?keywords=3.5%2Binch%2Blcd&qid=1705694403&sr=8-4&th=1

# Criteria For Success

This project will be successful if the following goals are met:

- The sensor tags can detect when a trash can is almost full (i.e. when trash is within a few inches of the lid) and activate the proper protocol in the central hub.

- The sensor tags can detect when an excess of noxious fumes is being produced in a trash can and activate the proper protocol in the central hub.

- The sensor tags can detect when the temperature in a trash can has exceeded a user-defined threshold and activate the proper protocol in the central hub.

- The central hub can receive wireless messages from all sensor tags reliably and correctly identify which trash cans are sending the messages.
