about
Some Project Briefs
Deep Neural Network
This project implements a Convolutional Neural Network (CNN) for image classification, likely on a standard dataset such as CIFAR-10. It progresses from a basic CNN architecture to a deeper, more complex model inspired by ResNet to improve classification accuracy. The workflow includes setting up GPU acceleration for efficient processing, defining the network layers (convolutional, max-pooling, dense), training the model on the training set, and evaluating its performance on a separate test set.
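The sketch below shows the kind of baseline architecture this progression starts from, assuming PyTorch and CIFAR-10-shaped inputs (32×32 RGB, 10 classes); the layer sizes are illustrative rather than the project's exact configuration.

```python
# Minimal baseline CNN sketch (assumes PyTorch and CIFAR-10-shaped
# 32x32 RGB inputs with 10 classes; layer sizes are illustrative).
import torch
import torch.nn as nn

class BasicCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128),                   # dense layer
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Move the model to the GPU when available, as the notebook does.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = BasicCNN().to(device)
```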
Urban Shadow Analysis Using GPU Programming
This project leverages GPU computing (via CUDA) to perform high-performance shadow analysis on a high-resolution Digital Surface Model (DSM). It calculates precise shadow maps for specific solar positions (azimuth and elevation) corresponding to hourly intervals between 8:00 AM and 5:00 PM. The notebook iterates through these temporal steps to generate shadow rasters for each hour and compiles the outputs into an animated GIF, effectively visualizing the spatio-temporal patterns of urban shading throughout the day.
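A minimal sketch of the per-pixel shadow test, assuming Numba's CUDA support and a DSM in metres with square cells; the kernel name, ray-marching step count, and the example solar angles are illustrative, not the project's actual values.

```python
# Per-pixel shadow kernel sketch: each thread ray-marches from its own
# DSM cell toward the sun and flags the cell as shadowed if any surface
# along the path rises above the climbing sun ray.
import math
import numpy as np
from numba import cuda

@cuda.jit
def shadow_kernel(dsm, cell_size, sun_azimuth, sun_elevation, max_steps, shadow):
    row, col = cuda.grid(2)
    if row >= dsm.shape[0] or col >= dsm.shape[1]:
        return
    # Horizontal step toward the sun and the height gained per step.
    dx = math.sin(sun_azimuth) * cell_size
    dy = -math.cos(sun_azimuth) * cell_size
    dz = math.tan(sun_elevation) * cell_size
    x = col * cell_size
    y = row * cell_size
    z = dsm[row, col]
    shadow[row, col] = 0
    for _ in range(max_steps):
        x += dx
        y += dy
        z += dz
        c = int(x / cell_size)
        r = int(y / cell_size)
        if r < 0 or r >= dsm.shape[0] or c < 0 or c >= dsm.shape[1]:
            return
        if dsm[r, c] > z:          # a higher surface blocks the sun ray
            shadow[row, col] = 1
            return

# Launch configuration: one thread per DSM cell (placeholder DSM and angles).
dsm = (np.random.rand(512, 512) * 30.0).astype(np.float32)
out = np.zeros(dsm.shape, dtype=np.uint8)
threads = (16, 16)
blocks = (math.ceil(dsm.shape[0] / threads[0]), math.ceil(dsm.shape[1] / threads[1]))
shadow_kernel[blocks, threads](dsm, 1.0, math.radians(135.0),
                               math.radians(35.0), 500, out)
```

Running the kernel once per hourly solar position yields the stack of shadow rasters that the notebook assembles into the animated GIF.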
Land Cover Mapping with Machine Learning
This notebook applies traditional machine learning techniques to classify land cover types in high-resolution NAIP aerial imagery of Philadelphia. Using the scikit-learn library, it employs a Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel to categorize land cover. The pipeline involves loading and preparing the geospatial data, splitting it into training and testing sets to validate the model, and finally generating a classified raster map that distinguishes the different land cover classes.
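A sketch of the classification step, assuming pixel samples have already been extracted from the NAIP bands into a feature matrix; the band count, class labels, and hyperparameters are placeholders.

```python
# SVM (RBF kernel) land-cover classification sketch with scikit-learn.
# Placeholder data stands in for pixels sampled from the NAIP imagery.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Rows are pixels, columns are NAIP bands (e.g. R, G, B, NIR).
X = np.random.rand(2000, 4)
y = np.random.randint(0, 4, size=2000)   # e.g. water, vegetation, building, road

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

# Validate on the held-out pixels; the fitted model can then be applied
# to the full band stack to produce the classified raster.
pred = clf.predict(scaler.transform(X_test))
print("overall accuracy:", accuracy_score(y_test, pred))
```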
Vegetation Cover Mapping
This assignment focuses on processing geospatial raster data to analyze and map vegetation health at the census tract level, specifically for San Francisco. It calculates the Normalized Difference Vegetation Index (NDVI) from satellite imagery to quantify green vegetation density. The workflow integrates this raster analysis with vector census tract boundaries to compute zonal statistics (mean NDVI per tract) and generates visualizations that overlay vegetation metrics onto administrative boundaries to assess urban greenness equity.
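A sketch of the NDVI and zonal-statistics steps, assuming rasterio, geopandas, and rasterstats; the file paths and band order are hypothetical.

```python
# NDVI from red/NIR bands, then mean NDVI per census tract.
import numpy as np
import rasterio
import geopandas as gpd
from rasterstats import zonal_stats

with rasterio.open("sf_imagery.tif") as src:        # hypothetical path
    red = src.read(1).astype("float32")             # band order is an assumption
    nir = src.read(2).astype("float32")
    profile = src.profile

# NDVI = (NIR - Red) / (NIR + Red), guarding against a zero denominator.
ndvi = (nir - red) / np.maximum(nir + red, 1e-6)

profile.update(count=1, dtype="float32")
with rasterio.open("sf_ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)

# Zonal statistics: mean NDVI within each census tract polygon.
tracts = gpd.read_file("sf_census_tracts.geojson")  # hypothetical path
stats = zonal_stats(tracts, "sf_ndvi.tif", stats=["mean"])
tracts["mean_ndvi"] = [s["mean"] for s in stats]
tracts.plot(column="mean_ndvi", legend=True, cmap="Greens")
```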
Object Detection with Mask R-CNN
This project implements an object detection and instance segmentation pipeline using the Mask R-CNN architecture. It focuses on detecting a custom target (Pikachu) with a model pre-trained on the COCO dataset and fine-tuned on a hand-annotated dataset from Roboflow. The notebook handles the Python environment setup, loads the model weights, runs inference on new images to generate bounding boxes and segmentation masks, and visualizes the detection results.
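A sketch of the inference step using torchvision's Mask R-CNN; the project fine-tunes on a Roboflow dataset, so the COCO-pretrained weights, input path, and 0.5 score threshold here are stand-ins for illustration.

```python
# Mask R-CNN inference sketch: load weights, run one image, keep
# confident detections with their boxes and binary instance masks.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = maskrcnn_resnet50_fpn(weights="DEFAULT")   # COCO-pretrained weights
model.eval()

image = read_image("test_image.jpg")               # hypothetical input image
batch = [convert_image_dtype(image, torch.float32)]

with torch.no_grad():
    outputs = model(batch)[0]

keep = outputs["scores"] > 0.5                     # illustrative threshold
boxes = outputs["boxes"][keep]
masks = (outputs["masks"][keep] > 0.5).squeeze(1)  # binary instance masks
print(f"{len(boxes)} detections above threshold")
```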
Urban Flood Mapping via HAND Model
This notebook conducts a hydrological analysis to estimate flood inundation potential across Pennsylvania using Digital Elevation Models (DEMs). It implements the Height Above Nearest Drainage (HAND) model, a terrain-based approach that normalizes topography relative to the nearest drainage network to simulate flood depths. The process involves downloading 30-meter DEM tiles, calculating the relative height for each pixel to estimate inundation depth, exporting the results as GeoTIFFs, and mosaicking them into a state-wide flood risk map.
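A sketch of the HAND computation for a single DEM tile, assuming the pysheds library; the file paths and the stream-definition threshold are illustrative.

```python
# Height Above Nearest Drainage (HAND) sketch for one 30 m DEM tile.
from pysheds.grid import Grid

grid = Grid.from_raster("pa_dem_tile.tif")           # hypothetical tile path
dem = grid.read_raster("pa_dem_tile.tif")

# Standard conditioning: fill pits and depressions, then resolve flats.
conditioned = grid.resolve_flats(grid.fill_depressions(grid.fill_pits(dem)))

# D8 flow direction and accumulation, then define the drainage network
# as cells whose accumulation exceeds a chosen threshold.
fdir = grid.flowdir(conditioned)
acc = grid.accumulation(fdir)
channels = acc > 1000                                # threshold is an assumption

# HAND: vertical drop from each cell to the channel cell it drains to;
# thresholding this surface approximates inundation depth.
hand = grid.compute_hand(fdir, conditioned, channels)
grid.to_raster(hand, "pa_hand_tile.tif")
```

Repeating this per tile and mosaicking the GeoTIFF outputs yields the state-wide flood risk map described above.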
Role
AI/ML Engineering
Tess Vu
Client
N/A
Keywords
CNN, Computer Vision, DEM, GPU Programming, Image Segmentation, LLM, Object Detection, Parallel Programming, RCNN, Raster Data, Remote Sensing, U-Net, Vector Data