This document covers the adoption of Amazon Web Services (AWS) technologies as HPWREN's core camera image IT infrastructure. It briefly describes the inception and growth of the University of California San Diego (UCSD) High Performance Wireless Research and Education Network (HPWREN) project, culminating around 2022 when AWS technology adoption was considered. Feasibility efforts began at that time in collaboration with UCSD's Qualcomm Institute (QI), a consortium member of the California Institute for Telecommunications and Information Technology (CALIT2). While individual AWS services had been considered earlier, the 2022 re-evaluation, along with the executive support of QI's director Ramesh Rao, ultimately led to a more directed and encompassing proof-of-concept activity. QI had an established collaboration with Xpertech, an AWS partner firm that specializes in setting up AWS environments. Using HPWREN image workflow requirements, Xpertech staff provided AWS infrastructure options and customized them for HPWREN-specific workflows. These activities resulted in HPWREN staff becoming more familiar with, and over time using, a wider range of AWS services. These efforts ultimately resulted in the current migration to AWS for HPWREN production support. The story of that evolution is the focus of this document.
In this section, a brief overview of the HPWREN Project is followed by a description of the state of its infrastructure around 2021 (subsequently referenced as the HPWREN "legacy" infrastructure or system). Having thus set the stage, the story of the AWS transition and adoption is described in the section that follows.
HPWREN originated as a wide-area wireless communications research project in 2000 and evolved over time into a substantial multi-agency collaboration with first responders and other entities across multiple California counties. HPWREN now supports regional wide area networks consisting of dozens of wireless mountaintop communication links across Southern California. Eventually, the HPWREN network grew to support the hosting of hundreds of cameras and a multitude of weather sensors, as well as many earthquake and geophysical sensors. These sensors were deployed on, or had access to, the HPWREN network, which allowed their data to be made available as general research, education, and public safety resources.
Early collaborations included Native American tribes via their education centers and TDVNet, the SDSU Santa Margarita Ecological Reserve as a biological field station, the Mount Laguna Observatory and Palomar Observatory for astronomy, and the California Wolf Center as another biological observatory.
The initial first responder agencies with which HPWREN collaborated were the California Department of Forestry and Fire Protection (CDF, now CAL FIRE) via their Emergency Command Center, as well as the Regional Communications System staff at the San Diego Sheriff's Department. In retrospect, without this assistance at their mountaintop locations, it is unlikely that HPWREN could have expanded to the scope it has today, nearly a quarter century later. Over time, collaborations also expanded to include San Diego Gas & Electric as well as various research groups at multiple academic institutions.
Currently, in addition to communications network infrastructure for research and education, HPWREN also provides wireless communications support for cameras and sensors, often in remote areas, through a radio-based wireless networking system typically composed of off-the-shelf equipment. This wide-area network now spans San Diego County and has been extended into other Southern California counties. With its radios often located at high elevations, many of the sites also support collocated cameras, weather stations with high temporal resolution, and other sensors. All of the regional ANZA seismic sensors are also supported. HPWREN camera and other sensor data is openly published in near real time at https://www.hpwren.ucsd.edu/cameras. Seismic data is available at https://doi.org/10.7914/SN/AZ.
As early as 2002, HPWREN started to provide Internet access to first responder sites, and over more than the last decade this grew to connectivity support for 60+ backcountry federal (US Forest Service), state (CAL FIRE), and San Diego County fire stations. Fire station network access and near real-time camera image availability across San Diego County provide a significant public safety component to emergency responders as well as the larger community.
By 2022, the HPWREN infrastructure supported a collection of computing and networking hardware and software distributed across Southern California. Many, mostly mountaintop, communications sites were sending around-the-clock imaging and sensor data into computer centers at UCSD, the San Diego Supercomputer Center, UCI, SDSU, and Saddleback College. Those centers contained the data processing, communications, and storage infrastructures needed to support the constant flow of real-time data as well as the growing storage requirements for real-time and archival data access. These widespread Southern California networks and the dozens of servers spread across multiple data centers have been, and remain to this day, solely supported by a relatively small team of HPWREN IT professionals.
In summary, the legacy system was built and maintained by a small handful of HPWREN staff for many years, and remains so today. Given the growth to many hundreds of cameras across many remote Southern California sites, staff support was stretched to its limit. In addition, the increases in cameras and other sensors over time led to the need for multiple independent processing pipelines to collect and process camera images, and a multitude of servers, with ever-increasing storage requirements, for their housing, publication, and archiving. In 2022, the effort to integrate a new camera into our workflow and [re]build the multitude of website files required the manual editing and subsequent execution of many files and scripts across many servers.
In late 2022 we built a collection of comma-separated text files and image pipeline control scripts to help automate the creation of many of those files. In short, going into our third decade of operations, we were maintaining a multitude of legacy servers (and VMs) with the prospect of needing at least a half dozen more for further required storage expansion. By this time we had reached the hardware limits of storage expansion on our existing servers, had started backing up storage to shelved disks, had initiated manual cloud backups of older data, had begun efforts to decimate past image data to further reduce storage requirements, and were on the verge of adopting a self-managed distributed storage system in the form of a CEPH cluster (the cluster itself depending on the addition of yet another half dozen or so new servers spread across multiple data centers). And this new storage system alone would have done nothing to address the operational challenges of maintaining our complex image processing workflow.
At the time of the 2022 AWS investigations, HPWREN staff were already quite busy. They were managing and maintaining 400+ cameras along with a large collection of other sensors housed on dozens of mountaintops spread across multiple Southern California counties, as well as the wide area network infrastructure supporting their constant data flows. They were also maintaining HPWREN communications, processing, and storage servers at multiple data centers, as well as planning for and implementing expansions to new sites, at times in new counties. At this stage, HPWREN was receiving near real-time imagery at a rate of 500 gigabytes (GB) per day, with the storage system, after image decimation, growing at a rate of about 100 GB per day.
The HPWREN website was an in-house design and implementation which maintained a similar look and feel over the years. Improved functionality was added incrementally as needed to support the requirements of its growing user community. It was the mechanism used by the community to observe camera and weather station data in near real time, as well as to access archival data and keep up with the latest HPWREN news articles. While there was a desire to upgrade the site to allow management via more automated tools, more flexible access to past data, and a more modern look and feel that scaled better across various display sizes, other more immediate maintenance and expansion priorities always seemed to take precedence. Thus, in addition to our other manpower challenges, our website was getting past its prime.
At the time of AWS consideration, HPWREN had nearly 400 TB of spinning storage and we were looking for additional storage options. We had already been decimating archival data to save space by turning sets of JPEG images into mp4 files to take advantage of inter-frame compression, and had also been offloading storage to various cloud and disk-based offline backup mechanisms. As new data was growing at a rate of almost 500 GB per day, we were highly motivated to find additional on-line storage. We had experimented with and tentatively decided to pursue the establishment of a CEPH-based distributed storage system, but that approach became infeasible due to its increasing demand on staffing for setting up and maintaining a half dozen more servers and operating an even more complex distributed storage management environment.
Our first foray into AWS services was to establish cloud storage "S3 buckets" which we could access using both the AWS web console and command-line (CLI) tools. We tested copying and accessing HPWREN data in bulk and found that to be relatively straightforward and reliable. After sufficient testing, we began adopting AWS storage to provide some relief from our archival storage pressures. At that stage, we did not yet have the tools to allow the general public to directly access AWS archival storage, and had to rely on a mix of our legacy system storage access mechanisms and some ad-hoc copying of any critically needed data. But we did get the immediate relief of an alternative (to CEPH) storage mechanism that we did not need to directly manage, and which provided storage redundancy and high-availability access as part of its design. With this decision, we were able to take CEPH development off the task list and start to focus more on how AWS services could further benefit HPWREN infrastructure.
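For illustration, this kind of bulk copy into a bucket can also be done programmatically rather than with the CLI. The following is a minimal boto3 sketch of such a transfer; the bucket name, prefix, and directory layout are hypothetical placeholders, not HPWREN's actual configuration.

```python
# Minimal sketch of a bulk upload into an S3 bucket using boto3
# (roughly equivalent to an "aws s3 sync" from the CLI).
import os
import boto3

s3 = boto3.client("s3")

def upload_tree(local_root, bucket, prefix):
    """Walk a local archive directory and upload each file,
    preserving the relative path as the S3 object key."""
    for dirpath, _, filenames in os.walk(local_root):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            rel_path = os.path.relpath(local_path, local_root)
            key = f"{prefix}/{rel_path}".replace(os.sep, "/")
            s3.upload_file(local_path, bucket, key)

# Example: copy one day of archived imagery (all names are placeholders).
# upload_tree("/archive/2022/11/01", "hpwren-archive-example", "archive/2022/11/01")
```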
Further, we had been having some problems with the legacy hardware we were using for our image storage pipeline (in brief, we cycled image data from cameras, through real-time storage, to "medium term" storage for the last 30 days, and then into "long term" storage). We set up AWS storage buckets to hold these multiple groups of data, thus beginning to integrate AWS storage into the production pipeline. By this time, multiple legacy storage servers and an experimental partner's CEPH system had been removed from the pipeline, further reducing in-house storage dependency. We also started taking advantage of AWS storage classes, which allow trading storage cost against the time required to access near-line (as opposed to on-line) data. As of April 2025, about 80% of our data was in the "Glacier Deep Archive" storage class. We use this AWS "cold" storage for the oldest data.
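As a hedged illustration of how such a tiering policy can be expressed, the sketch below sets an S3 lifecycle rule that transitions objects under an archive prefix to the Glacier Deep Archive storage class after a fixed age. The bucket name, prefix, and 180-day threshold are assumptions for the example, not HPWREN's actual settings.

```python
# Sketch: lifecycle rule moving aging archive objects to Glacier Deep Archive.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="hpwren-archive-example",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-deep-archive",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                # Objects older than 180 days move to the cheapest "cold" tier.
                "Transitions": [
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```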
In parallel to these storage changes, we were also upgrading our per-camera, cron-based image processing pipeline from physical legacy processing and storage servers to updated scripting on new virtual acquisition machines. This allowed the direct use of AWS storage buckets for real-time, medium-term, and long-term archival storage.
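A rough sketch of what one such per-camera acquisition step could look like is shown below: fetch the current JPEG from a camera and write it directly to an S3 bucket. The camera URL, bucket name, and key layout are hypothetical placeholders rather than the actual HPWREN scripts.

```python
# Sketch of a single cron-driven acquisition step: camera -> S3.
import datetime
import boto3
import requests

s3 = boto3.client("s3")

def acquire_image(camera_id, camera_url, bucket="hpwren-realtime-example"):
    resp = requests.get(camera_url, timeout=10)
    resp.raise_for_status()
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d/%H%M%S")
    key = f"{camera_id}/{stamp}.jpg"
    s3.put_object(Bucket=bucket, Key=key, Body=resp.content,
                  ContentType="image/jpeg")
    return key

# Example cron entry for a two-minute cycle (placeholder path):
#   */2 * * * * /usr/bin/python3 /opt/hpwren/acquire.py
```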
As of April 2025, HPWREN was utilizing around 800 TB of AWS storage, duplicated across two regions, Northern Virginia (us-east-1) and Oregon (us-west-2). This configuration ensures higher availability, rapid failover, and data redundancy, should one of our AWS locations fail or otherwise become unreachable. Other AWS storage advantages include reduced latency and improved performance for users closer to the remote region, and the opportunity for load balancing and traffic optimization.
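One way such cross-region duplication can be configured is with S3 bucket replication; the sketch below is illustrative only. The bucket names and IAM role ARN are placeholders, both buckets must already have versioning enabled, and the role must permit S3 to read the source bucket and write the destination bucket.

```python
# Sketch: replicate every object from an east-region bucket to a west-region bucket.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.put_bucket_replication(
    Bucket="hpwren-archive-east-example",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-example",
        "Rules": [
            {
                "ID": "replicate-to-us-west-2",
                "Prefix": "",          # empty prefix: replicate every object
                "Status": "Enabled",
                "Destination": {
                    "Bucket": "arn:aws:s3:::hpwren-archive-west-example"
                },
            }
        ],
    },
)
```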
Having mitigated storage pressures, we were now able to further integrate AWS storage by updating our web presentation environment to be hosted under AWS, thus allowing for direct image access via S3 buckets.
AWS provides numerous services to support web hosting and development, web content management, and server-side web processing. For a proof of concept, we started by porting the static portions of our website in late 2023, with both html code and image data residing in S3 standard storage. Our goals were initially to provide:
From there we tested various mechanisms for deeper integration with our image pipeline processing as well as the management of the dynamic portions of our website. The above goals and integrations have generally been accomplished (as of the release to production on 4/1/24).
A collection of AWS-based tools was deployed to replicate the HPWREN website and camera image workflows for developing a new AWS-hosted website and archival data access mechanism. Initially, static hosting allowed us to ease into the process of migrating various unchanging HPWREN website templates, getting used to the new tools for data transfer into S3 buckets and their built-in web publishing capabilities.
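For reference, the "built-in web publishing" capability mentioned above corresponds to S3 static website hosting, which can be enabled roughly as sketched below. The bucket name is a placeholder, and in practice a bucket policy permitting public reads (or a CloudFront distribution in front of the bucket) is also required.

```python
# Sketch: enable static website hosting on an S3 bucket.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="hpwren-site-example",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```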
The mechanism for hosting the dynamic camera image and sensor data was developed similarly, using a separate git repository in conjunction with csv files describing our sites and cameras, and a single Python script which encapsulated the multitude of our legacy system's script functionalities (which were previously spread across many servers and scripts). This significantly reduced the overhead of adding a new camera's images to the HPWREN website.
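The sketch below illustrates the flavor of this csv-driven approach (it is not the actual HPWREN script): read a cameras.csv describing sites and cameras, and derive the storage prefixes the website uses to locate each camera's imagery. The file name, column names, and key layout are assumptions.

```python
# Sketch: derive per-camera image prefixes from a site/camera CSV description.
import csv

def camera_prefixes(csv_path="cameras.csv"):
    prefixes = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            site = row["site"]        # hypothetical column: site identifier
            camera = row["camera"]    # hypothetical column: camera identifier
            prefixes[camera] = f"cameras/{site}/{camera}/large/"
    return prefixes

if __name__ == "__main__":
    for cam, prefix in camera_prefixes().items():
        print(cam, "->", prefix)
```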
By adopting the AWS CodeCommit system, HPWREN has incorporated version control and team management ("git") for the HPWREN website data. This allows collaboration, version tracking, and backup capabilities. As HPWREN manages the constant growth of our camera and sensor deployments, AWS CodeCommit supports integration of the subsequent image pipeline processing. In our legacy system, web updates were always done by single individuals editing many dispersed data files, either manually or through various scripts. Now we have a mechanism supporting multiple updates in parallel, with significantly less effort.
A combination of AWS tools, consisting of CodePipeline and CodeBuild, provides support for testing our website updates on a private staging website prior to production deployment. Once the staging website is deemed acceptable, CodeBuild is again used for pushing those changes to the production website. The CloudFront CDN further helps minimize end-user latency when accessing the website.
Updating the HPWREN website imagery follows a customized AWS build pipeline. Stages in this pipeline automatically construct various derived data referenced by the website.
The above workflow is triggered by the CodeBuild option of the website deployment process (automatically via AWS EventBridge). The full website update pipeline is illustrated in the following image and explained below. It utilizes the CI/CD (continuous integration/continuous deployment) services of AWS.
The actual building of the updated website is performed by a suite of AWS services, including:
From a practical perspective, HPWREN developers use the above for website updating within a local Linux environment (also possible within Windows 10 or Windows 11 via Windows Subsystem for Linux 2 (WSL2) virtualization): using normal git repositories, they pull the latest website (hpwren-site repository) from AWS, make the desired modifications, and test locally using a local web server (possibly in a Docker instantiation) to review the changes. Once the local website is deemed acceptable, the updated local repository is pushed to AWS and the above pipeline is triggered via AWS CodeBuild, deploying the changes to the AWS-hosted staging site. Once confirmed on the staging site, CodeBuild is used again to deploy the changes onto the AWS production site. If greater detail is desired, the specific steps for making website changes and engaging the AWS pipeline processing can be found in the news article at https://www.hpwren.ucsd.edu/news/20240512/.
Web data and their references are now generated using AWS Lambda functions, which are implemented in Python3 scripts. They provide:
Overall, at this time, the AWS architecture for HPWREN had evolved to the following:
During this stage of AWS adoption, in mid-2024, the structure of the HPWREN website still reflected its legacy user interface, and its internal organization still relied on a content-heavy collection of static web pages, with each camera having its own subtree of html files providing the various image access options. This internal website organization required the maintenance and overhead of many hundreds of static html file trees. Around this time, members of the QI team started to envision a cleaner user interface which was to be implemented in stages and reflect a simpler internal data organization. Instead of every camera having its own static tree of html access files, the new user interface would use a single back-end Python script. This server-side script would read the HPWREN csv camera files and build dynamic paths to image data as requested by the user. A new GUI was on the horizon.
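A hedged sketch of that single back-end approach is shown below: a Lambda-style handler that reads the camera csv file (cached between invocations) and answers a request such as camera=<id>&date=<YYYYMMDD> with the corresponding image path. The field names, query parameters, and key layout are illustrative assumptions rather than the actual HPWREN implementation.

```python
# Sketch: one back-end handler building dynamic image paths from the camera CSV.
import csv
import json

_CAMERAS = None

def _load_cameras(csv_path="/opt/cameras.csv"):
    """Read the camera description CSV once and cache it across invocations."""
    global _CAMERAS
    if _CAMERAS is None:
        with open(csv_path, newline="") as f:
            _CAMERAS = {row["camera"]: row for row in csv.DictReader(f)}
    return _CAMERAS

def lambda_handler(event, context):
    params = event.get("queryStringParameters") or {}
    camera = params.get("camera")
    date = params.get("date")
    cameras = _load_cameras()
    if camera not in cameras:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown camera"})}
    site = cameras[camera]["site"]
    path = f"cameras/{site}/{camera}/{date}/"
    return {"statusCode": 200, "body": json.dumps({"imagePrefix": path})}
```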
As alluded to in the previous section, a new HPWREN graphical user interface (GUI) had been under development since summer 2024. It started out as our HPWREN experimental interface and was soon announced to users as an available alternative interface. This allowed us to solicit user feedback on the new interface as well as receive suggestions for additional features. After more than half a year of iterations, we eventually adopted this new interface as the default GUI in March 2025.
Throughout 2024, along with the various AWS-related HPWREN infrastructure improvements, discussions were taking place concerning features to improve the user experience and utility of the website. There was also interest in simplifying the website organization and minimizing the number of html files needed to access camera image data. Reducing website complexity was also a goal for improving the efficiency of camera image and sensor data access and management.
Through the years of operating HPWREN, we had been collecting desired user interface features, yet we had focused on the infrastructure challenges of data acquisition and storage. In July 2024, a high school student with advanced programming skills, Daniel Farcas, volunteered to implement a new GUI for HPWREN. The new implementation ended up rewriting the entire image management and access mechanisms using client-side functionality, embodied in JavaScript code. The resulting new interface emerged from an iterative process, with discussions around the utility of the new features, accessibility, and performance. During this time, updates were taking place regularly. These efforts culminated in the adoption of the new website user interface in March 2025, and updates are continuing throughout this year.
The map is interactive, multi-layered (with map, terrain, and satellite options) and tightly integrated with the image presentation interface. It also allows dropping pins anywhere, which then highlights the cameras that have line of sight to the respective pin locations. Furthermore, this new interface integrates both the legacy 3-hour time-lapse image videos and the powerful Interactive Image Flow Interface (IIFI).
Overall, navigation to specific camera sites is now a single click, either on the map or in the displayed camera images below the map. The resulting camera site page has selection options and controls for all the various real-time images, historical images, and videos for that site. This multitude of options allows for specifying cameras, dates, times, quavers (1/8-day, 3-hour videos), 360-degree aggregate views, and various visual inspection and overlay options. From any site's page, there are pull-down menus for selecting any other site, and thus options to select and aggregate cameras from multiple sites for a custom collection of imagery. There is also a kiosk mode for displaying customizable slide shows of multiple real-time camera images.
When displaying specific images, there are now visual options to adjust brightness, contrast, and saturation, as well as image correction options to adjust for camera lensing effects and directional (angle of view) indicators in a heads-up format. Variable-brightness image overlays are also available so that changes between images can be detected more easily, or so that nighttime images can be overlaid with daytime images, thus pinpointing specific locations in dimmer images.
Images, videos, and image file lists can be downloaded or shared, as can custom mixtures from multiple cameras and sites. This includes current or recent (up to 90 days old) imagery. Older image data, as of May 2025, must first be restored from the AWS Glacier archives prior to exploration or downloading. This tiered storage capability saves us storage costs over the longer term while still allowing archival access to even the oldest data.
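As an illustration of what that restore step involves, the sketch below asks S3 to make a Glacier Deep Archive object temporarily retrievable; the bucket and key are placeholders, and Deep Archive bulk restores typically take many hours to complete.

```python
# Sketch: request a temporary restore of a cold-archived object.
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="hpwren-archive-example",
    Key="archive/2015/07/example-camera.mp4",
    RestoreRequest={
        "Days": 7,                                # keep the restored copy for a week
        "GlacierJobParameters": {"Tier": "Bulk"}  # cheapest (slowest) retrieval tier
    },
)
```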
Weather plots are displayed in a custom view when one or more cameras are selected from a single site. These plots provide a site-level view of the weather data across time.
Currently, the plots show wind speed (min, max, average), temperature, and relative humidity, all on the same plot.
And, as seen in the image at the beginning of this section and the image below, weather data is also integrated with and visible on the map.
To most effectively serve our ~3,000 daily users (which can grow to more than 20,000 users during major fire events), much attention has been placed on website capabilities, data access patterns, control performance, and GUI refinements to optimize functionality and ease of use.
As of June 2025, the HPWREN image acquisition pipeline initially involves a half dozen VMs which run cron-driven scripts (every other minute, and in some cases every 10 seconds) to trigger interleaved camera fetch scripts for some 400+ camera imagers spread across Southern California. Timing for the related intermediate image processing is critical to ensure that it can be done within the 60- (or 10-) second time frames so that no image captures are lost.
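The sketch below illustrates that timing constraint in a simplified form: each interleaved fetch cycle must finish within its slot so the next capture is not missed. The camera list, fetch function, and slot length are placeholders, not the actual HPWREN scripts.

```python
# Sketch: run one fetch cycle within a fixed time slot, deferring overruns.
import time

def run_cycle(cameras, fetch, slot_seconds=60):
    """Fetch each camera once, skipping (and reporting) any camera that would
    push the cycle past its time slot rather than blocking the next capture."""
    deadline = time.monotonic() + slot_seconds
    skipped = []
    for cam in cameras:
        if time.monotonic() >= deadline:
            skipped.append(cam)   # would overrun the slot; handle next cycle
            continue
        fetch(cam)
    return skipped
```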
HPWREN has been collecting imagery since the early 2000s. Although many terabytes of imagery are available, accessing imagery older than 30 days requires resurrecting it from AWS Glacier Deep Archive storage, an operation currently only available through manual actions by HPWREN staff. Further, older imagery is stored in a compressed mp4 video format, requiring additional effort to access individual images should the video format be insufficient. We are currently seeking mechanisms that would allow a more automated, user-driven ability to retrieve, and reformat as needed, older imagery.
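One possible way to recover individual frames from a restored mp4 archive is sketched below using OpenCV's VideoCapture; the file name and output naming are placeholders, and this is only one of several reasonable approaches (ffmpeg being another).

```python
# Sketch: extract individual JPEG frames from a decimated mp4 archive.
import cv2

def extract_frames(mp4_path, out_prefix="frame"):
    cap = cv2.VideoCapture(mp4_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(f"{out_prefix}-{index:05d}.jpg", frame)
        index += 1
    cap.release()
    return index

# extract_frames("example-camera-20150701.mp4")
```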
Supporting stereoscopic and panoramic views has been an HPWREN goal for some time. Recent advances in VR technologies and lower-cost consumer devices have also motivated efforts to make HPWREN imagery suitable for taking advantage of such emerging capabilities. There have been a multitude of technical challenges to overcome in making progress on and refinements to these applications, especially where VR is concerned.
To ensure a stitched 360° panorama is compatible with VR headsets (such as the Meta Quest 3, for example), the equirectangular projection must adhere to VR-specific formatting. This involves mapping the stitched image into a monoscopic or stereoscopic 360° layout (fisheye-correcting the individual images and reprojecting onto a spherical surface, typically at 8192x2048 resolution for high clarity) and embedding spatial metadata (e.g., projection type, orientation) so browsers or VR apps recognize it as a navigable environment. For the Quest 3, the image must also be optimized for low-latency streaming, as VR requires high frame rates (72-90 Hz) to prevent motion sickness. Tools like WebGL and WebXR or frameworks like A-Frame can render the panorama in-browser, but color grading and dynamic range adjustments are critical to avoid visual fatigue in VR. Additionally, the stitched image must avoid seams near the "pole" regions (top/bottom), where Quest 3 users frequently look upward or downward, requiring careful alignment of the north/south camera feeds during preprocessing. Finally, testing on-device is essential to validate spatial continuity and responsiveness to head movements.
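As a very rough starting point only, the sketch below tiles four nominally 90° views side by side into an 8192x2048 monoscopic canvas; a usable VR panorama would additionally need the fisheye correction, spherical reprojection, and spatial metadata described above. File names and dimensions here are placeholders.

```python
# Sketch: naive monoscopic 360-degree layout from four 90-degree views (no reprojection).
from PIL import Image

def naive_360_layout(north, east, south, west, width=8192, height=2048):
    canvas = Image.new("RGB", (width, height))
    quarter = width // 4
    for i, path in enumerate([north, east, south, west]):
        view = Image.open(path).resize((quarter, height))
        canvas.paste(view, (i * quarter, 0))
    return canvas

# naive_360_layout("n.jpg", "e.jpg", "s.jpg", "w.jpg").save("pano-draft.jpg")
```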
The next section discusses the challenges for proper alignment of cameras to support both panorama views and VR viewing.
HPWREN fixed cameras are physically mounted on towers as close as possible to being centered 90 degrees apart (nominally pointing North, South, East, and West). Aligning these cameras with ~90° fields of view to create a seamless 360° panorama for browser-based viewing involves several technical challenges.
First, geometric alignment is critical: even minor mounting misalignments (tilt, height differences, or angular offsets) disrupt the continuity of the stitched image, leaving gaps or overlapping seams. Parallax errors further complicate this, as objects at varying distances (e.g., trees, towers, buildings) appear misaligned if the cameras lack a shared optical center, a near-impossible feat with physically separated outdoor units.
Second, exposure and color consistency across cameras must be tightly controlled. Outdoor lighting conditions (e.g., shadows, glare, or changing sunlight) can cause mismatched brightness, contrast, or white balance, creating visible discontinuities in the final panorama.
Third, lens distortion (e.g., fisheye or barrel distortion in wide-angle lenses) and differences in focal lengths require precise software correction to ensure edges align smoothly. For browser display, real-time processing adds complexity: stitching must be optimized for low latency, often requiring GPU acceleration or pre-rendered equirectangular projections. Tools like WebGL or libraries like Three.js can help, but calibration data (e.g., camera intrinsics, mounting offsets) must be meticulously mapped to avoid artifacts.
Solutions often involve rigid mounting hardware with adjustable brackets (to fine-tune angles), synchronized camera settings (manual exposure, fixed ISO), and post-processing pipelines that apply homography transformations or use feature-matching algorithms (e.g., SIFT) to align frames. For outdoor durability, weatherproof enclosures and stabilization against wind/vibration are also essential to maintain alignment over time.
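A hedged sketch of the feature-matching alignment approach mentioned above is shown below, using OpenCV: detect SIFT keypoints in two adjacent views, estimate a homography with RANSAC, and warp one view into the other's frame. The image paths and match-ratio threshold are illustrative, and a production stitch would also need the exposure matching and lens correction discussed earlier.

```python
# Sketch: align two adjacent camera views with SIFT matches and a RANSAC homography.
import cv2
import numpy as np

def align_pair(left_path, right_path):
    left = cv2.imread(left_path)
    right = cv2.imread(right_path)
    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray_l, None)
    kp2, des2 = sift.detectAndCompute(gray_r, None)

    # Lowe's ratio test keeps only distinctive matches.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        raise ValueError("not enough matches to estimate a homography")

    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right-hand view into the left view's coordinate frame.
    h, w = left.shape[:2]
    return cv2.warpPerspective(right, H, (w * 2, h))
```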
Engaging with Nadir Weibel, Director of the UCSD Human Centered Design Lab (https://designlab.ucsd.edu/), HPWREN hopes to collect feedback from surveys and focus groups to gain significant insights into interface usability, accessibility, and streamlined user workflows.
Collections of many thousands of camera images, gathered from mountaintop installations and spanning both real-time streams and historical archives, represent a powerful foundation for advancing analytic capabilities in wildfire monitoring and environmental research. These panoramic and high-frequency visual datasets enable continuous observation across vast landscapes, supporting efforts to detect, track, and understand fire events. With access to both live and archival imagery, analysts and responders can monitor changing conditions, document fire progression, and assess the effectiveness of interventions over time.
From an analytics and machine learning standpoint, this wealth of imagery opens up numerous possibilities. Automated systems can be developed to identify early signs of wildfire, such as smoke or unusual heat patterns, by training algorithms on large, labeled datasets. The temporal depth of the archives allows for the study of fire behavior trends, the refinement of detection models, and the validation of predictive tools. By combining camera data with other sources (like weather or satellite information), researchers can create more robust analytic frameworks for risk assessment, environmental impact studies, and resource planning. Additionally, the scale and diversity of the dataset make it suitable for testing new approaches in computer vision and data fusion, potentially leading to more accurate and timely fire detection and broader applications in environmental science.
We were driven by the double-edged sword of diminishing resources (server resources and manpower) relative to the evolution and expansion of our camera networks on the one hand, and the ongoing desire to provide a more modern, usable, and powerful website interface to our user community on the other. The real challenge was in the [re]balancing of available resources to leverage current cloud-based technologies and develop them in a way that we could sustain.
In our case, HPWREN benefited greatly from our collaborations with QI, Xpertech, and our AWS Solutions Architect partners. They worked with us during numerous iterations of design and review as we adopted various AWS technologies and website feature updates.
Ultimately, we will take over operations of our new AWS environment, and our collaborations have helped us learn, as we go, the needed best practices, both technical and financial.
HPWREN core team - Frank Vernon, Hans-Werner Braun, Geoffrey Davis, Adam Brust, Jim Davidson, Gregory Hidley
QI - system architecture, data models, website design and implementation: Ramesh Rao (Director), Emilia Farcas (QI Research Scientist), Daniel Farcas (QI Volunteer).
Xpertech - AWS infrastructure setup: Jit Bhattacharya, Aman Singh.
AWS – technical support: John Apiz (Solutions Architect), Patrick Shea (Account Manager).
See also the following news articles for additional background information.