John Croft reports on how FAA weather researchers are working to put more smarts in weather cameras to give pilots in remote areas a better way to establish visibility at airports and en route waypoints.
Cameras can be a vital tool, giving pilots in remote areas a firsthand look at real-time visibility at airports and en route waypoints. But can those cameras be turned into weather sensors?
FAA weather researchers think the answer is yes. In a new research project, planned to go live during 2019 in Alaska, they will fuse human intelligence and automation in an attempt to evolve cameras into measurable visibility sensors to improve weather situational awareness as well as forecast accuracy.
For pilots, the information will make for safer flights; they will see the estimated visibility conditions along terrain-challenged routes in one glance on a weather application. By delivering more observation information, the project also could provide forecasters additional data to boost the reliability of localised predictions.
The work is part of the broader FAA NextGen Weather Program, which includes Weather Technology in the Cockpit (WTIC) research and the Aviation Weather Research Program (AWRP).
WTIC uses NextGen information and surveillance technologies to deliver enhanced weather information to pilots in the cockpit. AWRP is an effort to explore and develop ways to improve the weather information that supports decision making in the National Airspace System.
In Alaska, where weather can deteriorate quickly, general aviation pilots could use help from both programmes. There are relatively few manned or automated weather observation stations available for real-time weather and localised forecasts in the state.
In 2008, the FAA began installing weather cameras in the Alaskan wilderness. Each site typically has four cameras pointing in the cardinal directions. There are around 300 camera sites and about 1,000 individual cameras typically providing new images every 10 minutes that pilots can access through the FAA’s AVCamsPlus website (https://avcamsplus.faa.gov).
The FAA cautions that the camera images are for situational awareness only, not for establishing the regulatory visibility minima required to start or complete certain flights. For example, to launch on a visual flight rules (VFR) flight, the pilot must confirm the visibility is at least 3 miles. The cameras augment 39 Automated Surface Observing System (ASOS) weather stations that provide pilots with a variety of information over audio or digital links.
ASOS measures, among other things, wind speed, temperature, pressure and dew point, and — perhaps most important to pilots — the ceiling and visibility they can expect. ASOS measurements are considered the ‘truth model’ for visibility although the system measures air clarity rather than how far a pilot actually can see, according to the National Weather Service (NWS). The sensor measures visibility in the horizontal plane and cloud ceiling – clear, scattered, broken or overcast – in the vertical plane.
Apps that pilots use for navigation or flight planning typically have a Flight Category option that graphically shows visibilities at reporting stations with ASOS or similar automated weather stations. It uses one of four colour codes: green, the best conditions, indicates VFR; blue indicates marginal VFR; red indicates instrument flight rules (IFR); and purple, the poorest conditions, indicates low IFR.
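The mapping from reported ceiling and visibility to a colour code follows the standard FAA flight-category thresholds. A minimal sketch (the function name and example values are illustrative, not from any particular app):

```python
def flight_category(ceiling_ft, visibility_sm):
    """Classify weather into the four standard FAA flight categories.

    ceiling_ft:    cloud ceiling in feet AGL (use a large value if no ceiling)
    visibility_sm: visibility in statute miles
    """
    # LIFR (purple): ceiling below 500 ft and/or visibility below 1 mile
    if ceiling_ft < 500 or visibility_sm < 1:
        return "LIFR"
    # IFR (red): ceiling 500 to below 1,000 ft and/or visibility 1 to below 3 miles
    if ceiling_ft < 1000 or visibility_sm < 3:
        return "IFR"
    # MVFR (blue): ceiling 1,000-3,000 ft and/or visibility 3-5 miles
    if ceiling_ft <= 3000 or visibility_sm <= 5:
        return "MVFR"
    # VFR (green): ceiling above 3,000 ft and visibility above 5 miles
    return "VFR"

print(flight_category(10000, 10))  # VFR
print(flight_category(800, 2))     # IFR
```

Note the 3-mile visibility boundary between IFR and marginal VFR, which matches the VFR launch minimum mentioned above.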
If the map shows all the stations along your route in green, that’s good.
At the numerous camera locations without ASOS in Alaska, pilots can refer to AVCamsPlus and click on individual cameras to see the most recent views. To estimate visibility, they compare the live view with a stored view taken in clear weather and annotated with distance markers to terrain features.
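The comparison a pilot makes against the annotated clear-weather image amounts to bounding visibility between the farthest landmark still discernible and the nearest obscured one beyond it. A simple sketch of that logic (function and parameter names are hypothetical):

```python
def estimate_visibility(markers_sm, visible):
    """Bound visibility from annotated distance markers.

    markers_sm: distances (statute miles) to terrain features, as
                annotated on the stored clear-weather image
    visible:    parallel list of booleans -- is each feature
                discernible in the current camera image?

    Returns (lower, upper): visibility is at least the farthest
    visible marker and less than the nearest obscured marker
    beyond it (infinity if nothing farther is obscured).
    """
    visible_d = [d for d, seen in zip(markers_sm, visible) if seen]
    obscured_d = [d for d, seen in zip(markers_sm, visible) if not seen]
    lower = max(visible_d, default=0.0)
    upper = min((d for d in obscured_d if d > lower), default=float("inf"))
    return lower, upper

# Ridge at 3 miles is visible, peaks at 5 and 10 miles are not:
print(estimate_visibility([1, 3, 5, 10], [True, True, False, False]))  # (3, 5)
```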
Clicking through the cameras along a route during preflight planning, and repeating the check periodically, is time-consuming and workload-intensive. With visibility estimates derived from the cameras, however, a pilot could see all of that information at a glance on a Flight Category-type map before and during a flight. That is what FAA WTIC researchers and Rockwell Collins aim to enable by evaluating visibility at a subset of camera sites, using human observers hired through Amazon Mechanical Turk (MTurk), a crowdsourcing site where participants are paid to complete tasks assigned via the internet.
During the study in the summer of 2017, researchers posted camera images on the crowdsourcing site. Paid observers estimated visibility by comparing the current images with the annotated good-weather images. Researchers learned it took 8–10 observers looking at an image to come to a consensus, or crowd solution, on the visibility.
A WTIC-developed algorithm rates Amazon MTurk observers based on how closely their answers matched crowd solutions. WTIC programme manager Gary Pokodner said 80 per cent of crowdsourced visibility results matched the ASOS visibility to within 20 per cent — a positive result. In many cases, results that varied by more than 20 per cent were due to camera placement that caused obstructed views.
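The article does not specify the WTIC algorithm in detail, but the scheme it describes can be sketched as follows: take a robust consensus (here, the median, an illustrative choice) across the 8–10 estimates for each image, then rate each observer by how often their answers land within a tolerance of the crowd solution. The 20 per cent tolerance below mirrors the ASOS comparison in the text and is an assumption, not the published method:

```python
from statistics import median

def crowd_solution(estimates_sm):
    """Consensus visibility from a crowd of estimates for one image.
    The median is a simple, outlier-robust choice (illustrative only;
    the actual WTIC consensus method is not specified here)."""
    return median(estimates_sm)

def rate_observer(observer_estimates, consensus_values, tol=0.20):
    """Score an observer: the fraction of their estimates that fall
    within `tol` (assumed 20 per cent) of each image's crowd solution."""
    hits = sum(
        abs(obs - crowd) <= tol * crowd
        for obs, crowd in zip(observer_estimates, consensus_values)
    )
    return hits / len(observer_estimates)

print(crowd_solution([3, 4, 4, 5, 4, 3, 4, 10]))  # 4 (the 10 is an outlier)
print(rate_observer([4.0, 2.0], [4.0, 4.0]))      # 0.5
```

A rating like this lets high-scoring observers carry more weight in future crowds, which is what makes a "high-achieving" automated member valuable, as described below.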
In a separate study, AWRP researchers had a different goal in mind for the cameras: using image-processing and edge-detection algorithms developed by the MIT Lincoln Laboratory to determine visibility at remote sites to generate more accurate localised forecasts. Forecasters already use data from traditional ground weather sensors and from satellites in their prediction models; the visibility estimates from cameras will provide additional data to improve their forecasts.
“Better observations, better forecasts,” says Jenny Colavito, the ceiling and visibility project lead for AWRP. “But in order to use the cameras, we have to digitise their output. Right now, it’s just an image, so we have to extract a visibility estimate using automation.”
When Colavito learned of the WTIC crowdsourcing project, she saw an opportunity. “This year we are working together with WTIC to create a hybrid,” she says. The idea is to gather crowdsourced visibility estimates through MTurk and to insert the Lincoln Laboratory-developed automation as one member of the crowd. “What we’re hoping is that our automation is going to be equivalent to a high-achieving worker for converging on a solution,” says Colavito. “If you have someone who’s really good, then you might only need one other person to verify that you’re correct.”
The estimates from the crowd can also be used to improve the automation through machine-learning techniques. Ultimately, the goal is to perfect the automation to the point that it could be used as a standalone measure of visibility, allowing for near-real-time interpretation of the visibility and negating the need for a set of human eyes to weigh in — or at least minimising the size of the crowd. Determining the connections between camera visibility, ASOS visibility and what a pilot really needs to know to fly safely remains a hurdle. “We have found cases where there is a discrepancy between the ASOS reading and what we see in the camera imagery,” says Colavito.
While ASOS is considered the meteorological ground truth, Pokodner and his team will study this summer how best to define the visibility measure most useful to a pilot – the ‘aviation visibility’ – and what additional information human viewers might be able to add, such as ‘mountains not visible’.
If testing goes as planned, Colavito says her group will begin to integrate the camera visibility data into their gridded analysis models and will work with the NWS to integrate the data into NWS numerical weather prediction models. Pokodner says WTIC researchers will study how to set triggers in the edge-detection algorithms to indicate when weather is changing – the ideal time to initiate crowdsourcing of images.
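One plausible form such a trigger could take: compare the edge content of consecutive camera images, since lowering visibility washes out the gradients produced by terrain features. The sketch below is not the Lincoln Laboratory algorithm; it is a minimal pure-Python illustration, with an assumed 30 per cent change threshold, operating on a grayscale image given as a list of pixel rows:

```python
def edge_strength(img):
    """Mean gradient magnitude of a grayscale image (list of rows of
    pixel values): the sum of absolute horizontal and vertical
    differences, averaged over all pixels."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                total += abs(img[y + 1][x] - img[y][x])
    return total / (h * w)

def weather_change_trigger(prev_img, curr_img, threshold=0.3):
    """Flag a camera site for crowdsourcing when edge content shifts by
    more than `threshold` (an illustrative 30 per cent) between
    consecutive images -- e.g. fog erasing terrain edges."""
    prev_e, curr_e = edge_strength(prev_img), edge_strength(curr_img)
    if prev_e == 0:
        return curr_e > 0
    return abs(curr_e - prev_e) / prev_e > threshold

clear = [[0, 255], [255, 0]]      # strong edges: terrain visible
foggy = [[128, 130], [129, 128]]  # nearly uniform: features washed out
print(weather_change_trigger(clear, foggy))  # True
print(weather_change_trigger(clear, clear))  # False
```

Firing crowdsourcing only on such transitions would concentrate the paid human effort on the images where conditions are actually changing, rather than on every 10-minute frame.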