In recent decades, global climate monitoring has made significant strides, leading to the creation of new, extensive observational datasets (Karpatne et al., 2019). These datasets are essential for improving numerical weather predictions and refining remote sensing retrievals by providing detailed insights into complex physical processes (Alizadeh, 2022). However, as the quantity and complexity of the data grow, identifying patterns within the observations becomes increasingly challenging (Zhou et al., 2021). Extracting key features from these datasets could lead to important advancements in our understanding of phenomena like convection and precipitation, further enhancing our knowledge of the changing global climate.
In this post, we will explore some of these complex data patterns through the lens of precipitation, which has been highlighted as a critically important area of study under warming global temperatures (IPCC, 2023). Rather than relying on randomly generated or simulated data for this project, we will work with real-world observations from across the globe, which are publicly accessible for you, my reader, to explore and experiment with as well. Let this post serve as a research guide, starting with the importance of good quality data, and concluding with insights on linear and nonlinear interpretations of said data.
If you’d like to follow along with some code, check out our interactive Google Colab notebook.
This analysis unfolds in three parts, each of which is a separate, published research article:
- Curating a robust, multidimensional dataset
- Analyzing linear embeddings
- Exploring nonlinear features
1. The Microphysical Dataset
https://doi.org/10.1029/2024EA003538
When we talk about understanding the features of precipitation, what are we really asking? How complex can something as common as rain or snow be? It’s easy to look outside on a stormy day and say, “It’s raining” or “It’s snowing”. But what’s actually happening in those moments? Can we be more precise? For example, how intense is the rainfall? Are the raindrops large or small? If it’s snowing, what do the snowflakes look like? Are they fluffy, dendritic crystals, or are they composed of multiple, fused particles in large aggregate clumps (e.g., Fig. 1)? If the temperature hovers near zero degrees Celsius (C), do the snowflakes become dense and slushy? How fast are they falling? These differences could have a big impact on what happens when the particles reach the ground, and categorizing these processes into distinct groups is non-trivial (Pettersen et al., 2021).
Understanding these processes is crucial for better monitoring and mitigating the impacts of flooding, runoff, freezing rain and extreme precipitation, all of which are potentially dangerous events with billions of dollars of associated global damages each year (Sturm et al., 2017). But with thousands of particles falling over just a few square meters in a matter of minutes, how do we quantify this complex process? It’s not just about counting the particles; we also need to capture key characteristics like size and shape. Instead of attempting this manually (an impossible task, go try for yourself), we typically rely on remote sensing instruments to do the heavy lifting. One such tool is the NASA Precipitation Imaging Package (PIP), a video disdrometer that provides detailed observations of falling raindrops and snow particles (Pettersen et al., 2020), as shown below in Fig. 2.
This relatively inexpensive instrument consists of a 150-watt halogen bulb and a high-speed video camera (capturing at 380 frames per second) positioned two meters apart (King et al., 2024). As particles fall between the bulb and the camera, they block the light, creating silhouettes that can be analyzed for variations in size and shape. By tracking the same particle across multiple frames, the PIP software can also determine its fall speed (Fig. 3). With additional assumptions about particle motion in the air, the PIP data allow us to also derive minute-scale particle size distributions (PSDs), fall speeds, and effective particle density distributions (Newman et al., 2009). These microphysical measurements, when combined with nearby meteorological observations of surface variables like temperature, relative humidity, pressure, and wind speed, offer a comprehensive snapshot of the environment at the time of observation.
Over a span of 10 years, we collected more than 1 million minutes of particle microphysical observations, alongside collocated surface meteorological variables, across 10 different sites (Fig. 4). Gathering data from multiple regional climates over such a long period was crucial to building a robust database of precipitation events. To ensure consistency, all microphysical observations were recorded using the same type of instrument with identical calibration settings and software versions. We then conducted an extensive quality assurance (QA) process to eliminate erroneous data, correct timing drifts, and remove any unphysical outliers. This curated information was then standardized, packaged into Network Common Data Form (NetCDF) files, and made publicly available through the University of Michigan’s DeepBlue data repository.
You are welcome to download and explore the dataset yourself! For more details on the sites included, the QA process, and the microphysical differences observed between locations, please refer to our associated data paper published in the journal Earth and Space Science.
To describe the PSD, we calculate a pair of parameters (n0 and λ) representing the intercept and slope of an inverse exponential fit (Eq. 1). This fit was selected because it has been used extensively in previous literature to accurately describe snowfall PSDs (Cooper et al., 2017; Wood and L’Ecuyer, 2021). However, other fits (e.g., a gamma distribution) could also be considered in future work to better capture large aggregate particles (Duffy and Posselt, 2022).
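If you’re following along in code, here is a minimal sketch of how these two parameters could be estimated for a single observation period, assuming the standard inverse exponential form N(D) = n0·exp(−λD). The diameter bins and concentrations below are made-up placeholders, and the actual processing pipeline may fit in log space or weight the bins differently.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_exponential(D, n0, lam):
    """Inverse exponential PSD: N(D) = n0 * exp(-lam * D)."""
    return n0 * np.exp(-lam * D)

# Hypothetical binned PSD for one observation period: bin-center diameters
# (mm) and measured number concentrations (m^-3 mm^-1).
diameters = np.array([0.2, 0.4, 0.8, 1.6, 3.2, 6.4])
concentrations = np.array([4200.0, 2500.0, 900.0, 150.0, 12.0, 0.5])

# Fit the intercept (n0) and slope (lambda); p0 is a rough initial guess.
(n0_fit, lam_fit), _ = curve_fit(
    inverse_exponential, diameters, concentrations, p0=(5000.0, 1.0)
)
print(f"n0 = {n0_fit:.1f} m^-3 mm^-1, lambda = {lam_fit:.2f} mm^-1")
```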
n0-λ joint 2D histograms are shown below in Fig. 5 for each site, demonstrating the wide variety of precipitation PSDs occurring across different regional climates. Note how some sites display bimodal distributions (OLY) compared to the very narrow distributions at others (NSA). We have also put together a Python API for interacting with and visualizing these data, called pipdb. Please see our documentation on readthedocs for more information on how to install and use this package for your own project.
In summary, we’ve compiled a high-quality, multidimensional dataset of precipitation microphysical observations, capturing details such as the particle size distributions, fall speeds, and effective densities. These measurements are complemented by a range of nearby surface meteorological variables, providing crucial context about the specific types of precipitation occurring during each minute (e.g., was it warm out, or cold?). A full list of the variables we’ve collected for this project is shown in Table 1 below.
Now, what can we do with this data?
2. Examining Linear Embeddings with PCA
https://doi.org/10.1175/JAS-D-24-0076.1
With our data collected, it’s time to put it to use. We begin by exploring linear embeddings through Principal Component Analysis (PCA), following the methodology of Dolan et al. (2018). Their work focused on uncovering the latent features in rainfall drop size distributions (DSDs), identifying six key modes of variability linked to the physical processes that govern drop formation across a variety of locations. Building on this, we aim to extend the analysis to snowfall events using our custom dataset from Part 1. I won’t delve into the mechanics of PCA here, as there are already many excellent resources on TDS that cover its implementation in detail.
Before applying PCA, we segment the full dataset into discrete 5-minute intervals. This segmentation allows us to calculate the PSD parameters with a sufficiently large sample size. We then filter these intervals, selecting only those with effective density values below 0.4 g/cm³ (i.e., values typically associated with snowfall and characterized by less dense particles). This filtering results in a dataset of 210,830 five-minute periods ready for analysis. For the variables used to fit the PCA, we choose a subset from Table 1 related to snowfall and derived from the PIP: n0, λ, Fs, Rho, Nt, and Sr (see Table 1 for details). We focus on this smaller subset of disdrometer-only observations because future sites might not have collocated surface variables, and we are interested in what can be extracted from just this six-dimensional dataset.
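As a rough illustration of this preprocessing step, the sketch below builds 5-minute averages and applies the density filter with pandas. The column names mirror the variables above, but the values are random placeholders rather than real PIP data, and the actual QA workflow is more involved.

```python
import numpy as np
import pandas as pd

# Hypothetical minute-resolution PIP observations; values are placeholders.
rng = np.random.default_rng(0)
index = pd.date_range("2023-02-15", periods=1440, freq="1min")
df = pd.DataFrame(
    {
        "n0": rng.lognormal(8, 1, len(index)),     # PSD intercept
        "lam": rng.lognormal(0, 0.5, len(index)),  # PSD slope
        "Fs": rng.uniform(0.5, 3.0, len(index)),   # fall speed (m/s)
        "Rho": rng.uniform(0.0, 1.0, len(index)),  # effective density (g/cm^3)
        "Nt": rng.lognormal(5, 1, len(index)),     # total particle count
        "Sr": rng.lognormal(0, 1, len(index)),     # snowfall rate
    },
    index=index,
)

# Aggregate into discrete 5-minute periods, then keep only likely-snow
# periods with an effective density below 0.4 g/cm^3.
df_5min = df.resample("5min").mean().dropna()
snow_5min = df_5min[df_5min["Rho"] < 0.4]
```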
Before diving into the analysis, it’s important to first inspect the data to ensure everything appears as expected. Remember the old GIGO adage: garbage in, garbage out. We want to mitigate the impact of bad data if we can. By examining the value distributions of each variable, we confirmed they fall within the anticipated ranges. Additionally, we reviewed the covariance matrix of the input variables to gain some preliminary insights into their joint behavior (Fig. 6). For instance, variables like n0 and Nt, both tightly coupled to the number of particles present, show high correlation as expected, while variables like effective density (Rho) and Nt display less of a relationship. After scaling and normalizing the inputs, we proceed by feeding them into scikit-learn’s PCA implementation.
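The scaling and decomposition step might look something like the following, reusing the hypothetical snow_5min frame from the sketch above; the exact normalization choices in the paper may differ.

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Standardize each input so no single variable dominates the decomposition,
# then fit a PCA retaining three components (the EOFs discussed next).
X = StandardScaler().fit_transform(snow_5min.values)
pca = PCA(n_components=3)
scores = pca.fit_transform(X)           # per-period EOF scores
print(pca.explained_variance_ratio_)    # fraction of variance per EOF
print(pca.components_)                  # loading of each input on each EOF
```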
Applying PCA to the inputs results in three Empirical Orthogonal Functions (EOFs) that together account for 95% of the variability in the dataset (Fig. 7). The first EOF is the most significant, capturing approximately 55% of the dataset’s variance, as evidenced by its broad distribution in Fig. 7.a. When examining the standard anomalies of the EOF values for each input variable, EOF1 shows a strong negative relationship with all inputs. The second EOF accounts for about 20% of the variance, with a slightly narrower distribution (Fig. 7.b), and is most strongly associated with the fall speed (Fs) and effective density (Rho) inputs. Finally, EOF3, which explains around 15% of the variance, is primarily related to the λ and snowfall rate variables (Fig. 7.c).
On their own, these EOFs are challenging to interpret in physical terms. What underlying features are they capturing? Are these embeddings physically meaningful? One way to simplify the interpretation is by focusing on the most extreme values in each distribution, as these are most strongly associated with each EOF. While this manual clustering approach leaves much of the distribution near the origin ambiguous, it allows us to separate the data into distinct groups that can be analyzed more closely. By applying a σ > 2 threshold (represented by the thin white dashed lines in Fig. 7.a-c), we can divide this 3D distribution of points into six distinct groups of equal sampling volume. Since visualizing this separation in 2D is particularly challenging, we’ve provided an interactive data viewer (Fig. 8), created with Plotly, to make this distinction clearer. Feel free to click on the figure below to explore the data yourself.
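In code, this manual grouping could look roughly like the sketch below, assuming the scores array from the PCA snippet above. Note that a point exceeding the threshold on more than one EOF would need a tie-breaking rule, which this simplified version glosses over.

```python
import numpy as np

# Standardize the EOF scores so that +/-2 corresponds to the 2-sigma threshold.
z = scores / scores.std(axis=0)

labels = np.full(len(z), -1)  # -1 = ambiguous points near the origin
for eof in range(3):
    labels[z[:, eof] > 2.0] = 2 * eof        # positive extreme of this EOF
    labels[z[:, eof] < -2.0] = 2 * eof + 1   # negative extreme of this EOF
```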
With the most extreme EOF clusters selected, we can now plot these in physical variable spaces to begin interpreting them. This is demonstrated in Fig. 9 across different variable spaces: n0-λ (panel a), Fs-Rho (panel b), λ-Dm (panel c), and Sr-Dm (panel d). Starting with the red and blue clusters in Fig. 9.a (representing the positive and negative EOF1 values), we see a clear separation in n0-λ space. The red cluster, characterized by a high PSD intercept and slope, indicates a high-intensity grouping, suggestive of many small particles, while the blue cluster shows the opposite behavior. This is indicative of a potential intensity embedding.
In panel b, there’s a distinct separation between the purple and light blue clusters (corresponding to the positive and negative EOF2 values). The purple cluster, associated with high fall speed and density, contrasts with the light blue cluster, which shows the opposite characteristics. This pattern likely represents a particle temperature/wetness embedding, describing the “stickiness” of the snow as it falls. Warmer, denser particles (such as partially melted or frozen particles) tend to fall faster, much like how a slushy pellet falls faster than a dry snowflake.
Finally, in panels c and d, the yellow and magenta clusters are separated based on PSD slope and mass-weighted mean diameter. While less clear, this suggests a potential relationship with particle size and the underlying snowfall regime, such as differences between complex shallow systems and deep systems.
Another way to strengthen our confidence in these attributions is by comparing the groups to independent observations. We can do this by cross-referencing the PCA-based snowfall classifications from the PIP with nearby surface radar observations (i.e., a Micro Rain Radar) and reanalysis (i.e., ERA5) estimates to evaluate physical consistency. This is one reason we recommend not always using all available data in the dimensionality reduction, as it limits the ability to later assess the robustness of the embeddings. To validate our approach, we examined a series of case studies at Marquette (MQT), Michigan, to see how well these classifications align. For instance, in Fig. 10, we observe a transition from a high-intensity snowstorm (red) to partially melted mixed-phase snow crystals (sleet) as temperatures briefly rise above zero degrees C (panel h), and then back to high-intensity snow as temperatures drop below zero later in the day. This also aligns with the changes we see in reflectivity (panel a) and we can see this transition in the n0-λ plot in panel i.
Building on our PCA analysis and the consistency observed with collocated observations, we also created Fig. 11, which summarizes how the primary linear embeddings identified through PCA are distributed across different physical variable spaces. These classifications offer critical microphysical insights that can enhance a priori datasets, ultimately improving the accuracy of state-of-the-art models and snowfall retrievals.
However, since PCA is limited to linear embeddings, this raises an important question: are there nonlinear patterns within this dataset that we have yet to explore? Additionally, what new insights might emerge if we extend this analysis beyond snow to include other types of precipitation?
Let’s tackle these questions in the next section!
3. Nonlinear dimensionality reduction using UMAP
https://doi.org/10.1126/sciadv.adu0162
In order to examine more complex, nonlinear embeddings, we need to consider a different type of unsupervised learning that loosens the linearity assumptions of techniques like PCA. This brings us to the concept of manifold learning. The idea behind manifold learning is that high-dimensional data often lie on a lower-dimensional, curved manifold within the original data space (McInnes et al., 2020). By mapping this manifold, we can uncover the underlying structure and relationships that linear methods might miss. Techniques like t-SNE, UMAP, VAEs, or Isomap can reveal these intricate patterns, providing a more nuanced understanding of the dataset’s latent features. Applying manifold learning to our dataset could uncover nonlinear embeddings that further distinguish precipitation types, potentially offering even deeper insights into the microphysical processes at play. As a little hint as to what is to come, see Fig. 12. As mentioned before, I won’t go into the implementation details of such methods, as this has been covered many times here on TDS.
Additionally, we would like to use our entire dataset of both disdrometer observations and collocated surface meteorological variables this time around, to see if the additional dimensions provide useful context for better differentiating between highly complex physical processes. For example, can we detect different types of mixed-phase precipitation if we know more about the temperature and humidity at the time of observation? So, unlike the previous section where we limited the inputs to just PIP data and just snowfall, we now include all 12 dimensions for the entire dataset. This also reduces our total sample to 128,233 five-minute periods at 7 locations, since not all sites have operating surface meteorological stations to pull data from. As is always the case with these types of problems, as we add more dimensions, we run up against the dreaded curse of dimensionality.
As the dimensionality of the feature space increases, the number of configurations can grow exponentially, and thus the number of configurations covered by an observation decreases — Richard Bellman
This tradeoff between the number of inputs and feature sparsity is a challenge we will have to keep in mind moving forward. Luckily for us, we only have 12 dimensions, which may seem like a lot but is really quite small compared to many other projects in the natural sciences with potentially thousands of dimensions (Auton et al., 2015).
As mentioned earlier, we explored a variety of nonlinear models for this phase of the project (see Table 2). In any Machine Learning (ML) project we undertake, we prefer to start with simpler, more interpretable methods and gradually progress to more sophisticated techniques, as less complex approaches are often more efficient and easily understood.
With this strategy in mind, we began by building on the results from Part 2, using PCA once again as a baseline for this larger dataset of rain and snow particles. We then compared PCA to nonlinear techniques such as Isomap, VAEs, t-SNE, and UMAP. After conducting a series of sensitivity analyses, we found that UMAP outperformed the others in producing clear embeddings in a more computationally efficient manner, making it the focus of our discussion here. Additionally, with UMAP’s improved global separation of data across the manifold, we can move beyond manual clustering, employing a more objective method like Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) to group similar cases together (McInnes et al., 2017).
Applying UMAP to this 12-dimensional dataset resulted in the identification of three primary latent embeddings (LEs). We experimented with various hyperparameters, including the number of embeddings, and found that, similar to PCA, the first two embeddings were the most significant. The third embedding also displayed some separation between certain groups, but beyond this third level, additional embeddings provided little separation and were therefore excluded from the analysis (although these might be interesting to look at more in future work). The first two LEs, along with a case study example from Marquette, Michigan, illustrating discrete data points over a 24-hour period, are shown below in Fig. 13.
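A bare-bones version of this UMAP + HDBSCAN pipeline is sketched below. Here, full_df stands in for a hypothetical table of the 12 inputs described above, and the hyperparameter values are illustrative rather than the tuned settings from the paper.

```python
import hdbscan
import umap
from sklearn.preprocessing import StandardScaler

# full_df: hypothetical DataFrame holding the 12 inputs (PIP microphysics
# plus surface meteorology) for each 5-minute period.
X12 = StandardScaler().fit_transform(full_df.values)

# Reduce to a 3D manifold, then cluster in the embedded space.
reducer = umap.UMAP(n_components=3, n_neighbors=50, min_dist=0.1, random_state=42)
embedding = reducer.fit_transform(X12)

clusterer = hdbscan.HDBSCAN(min_cluster_size=500)
cluster_labels = clusterer.fit_predict(embedding)  # -1 marks ambiguous points
```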
Immediately, we notice several key differences from the previous snowfall-focused study using PCA. With the addition of rain and mixed-phase data, the first and second embeddings have effectively swapped places relative to the PCA analysis. The primary embedding now encodes information about particle phase rather than intensity. Intensity shifts to the second latent embedding (LE2), remaining significant but now secondary. The third LE still appears to relate to particle size and shape, particularly within the snowfall portion of the manifold.
Applying HDBSCAN to the manifold groups generated by UMAP resulted in nine distinct clusters, plus one ambiguous cluster (Fig. 13.a). The separation between clusters is much clearer compared to PCA, and these groups seem to represent distinct physical precipitation processes, ranging from snowfall to mixed-phase to rainfall at various intensity levels. Interestingly, the ambiguous points and the connections between nodes in the graph form distinct pathways of particle habit evolution. This finding is particularly intriguing as it outlines clear particle evolutionary pathways, showing how a raindrop can transform into a frozen snow crystal under the right atmospheric conditions.
A real-world example of this phenomenon is shown in Fig. 13.b, observed in Marquette on February 15, 2023. Each colored ring represents an individual (5-minute) data point throughout the day, with an arrow indicating the direction of time. In Fig. 13.c, we overlay ancillary radar observations with surface temperatures. Up until around 12:00 UTC, a clear brightband in reflectivity can be seen at approximately 1 km, indicative of a melting layer where temperatures are warm enough for snow to melt into rain. This period was correctly classified as rainfall using our UMAP+HDBSCAN (UH) clustering method. Then, around 17:00 UTC, temperatures rapidly dropped well below freezing, leading to the classification of particles as mixed-phase and eventually as snowfall. These types of tests are critically important for confirming that what the manifold structure suggests actually makes sense physically.
If you’d like to explore this manifold yourself, examining different sites and seeing how various variables map to the embedding, check out our interactive data analysis tool, or click Fig. 14 below.
When you explore the tool mentioned above, you’ll notice that mapping various input features to the manifold embedding results in smooth gradients. These gradients indicate that the general global structure of the data is likely being captured in a meaningful way, offering valuable insights into what the embeddings are encoding.
Comparing the separation of points using UMAP to that of PCA (where PCA is applied to the exact same dataset as UMAP) reveals significantly better separation with UMAP, especially concerning precipitation phase. While PCA can broadly distinguish between “liquid” and “solid” particles, it struggles with the more complex mixed-phase particles. This limitation is evident in the distributions shown in Fig. 15.d-e. PCA often suffers from variance overcrowding near the origin, leading to a tradeoff between the number of clusters we can identify and the size of the ambiguous supercluster. Although HDBSCAN can be applied to PCA in the same manner as UMAP, it only generates two clusters (rain and snow) which isn’t particularly useful on its own, and can be achieved with a simple linear threshold. In contrast, UMAP provides much better separation, resulting in 37% fewer ambiguous points and a +0.14 higher silhouette score for the clusters compared to PCA (0.51).
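For reference, a silhouette comparison along these lines can be computed with scikit-learn. This sketch reuses the embedding and cluster_labels from the UMAP snippet above and excludes the ambiguous points, though the exact masking used for the published numbers may differ.

```python
from sklearn.metrics import silhouette_score

# Score cluster separation in the embedded space, ignoring ambiguous points.
valid = cluster_labels != -1
print(silhouette_score(embedding[valid], cluster_labels[valid]))
```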
As we did previously with PCA, we can conduct a series of case study comparisons when using UMAP to reinforce our physical cluster attributions. By comparing these with collocated MRR observations, we can assess whether the conditions reported in the atmosphere above the PIP align with the attributions produced by the UH clusters, and how these compare to the clusters from PCA. In Fig. 16 below, we examine a few of these cases at Marquette.
In the first column (a), we present an example of a prolonged mixed-phase event, emphasizing LE1, which we know occurred at MQT from recorded weather reports. Along the top panel, both PCA and UMAP identify the period up until 19:00 UTC as rain. However, after this period, the PCA groupings become sparse and largely ambiguous, whereas UMAP successfully maps the post-19:00 UTC period as mixed-phase, distinguishing between wet sleet (green) and colder, slushy pellets (purple).
In panel (b), we highlight a case focusing on intensity changes (LE2), where conditions shift from high-intensity mixed-phase to low-intensity mixed-phase, and then back to high-intensity snowfall as temperatures cool. Again, UMAP provides a more detailed and consistent classification compared to the sparser results from PCA.
Finally, in panel (c), we explore an LE3 case involving a shallow system until 15:00 UTC, followed by a deep convective system moving over the site, leading to an increase in the size, shape complexity, and intensity of the snow particles. Here too, UMAP demonstrates a more comprehensive mapping of the event. Note, however, that these are only a few handpicked case studies, and we recommend checking out our full paper for multi-year comparisons.
Overall, we found that the nonlinear 3D manifold generated using UMAP provided a smooth and accurate approximation of precipitation phase, intensity, and particle size/shape (Fig. 17). When combined with hierarchical density-based clustering, the resulting groups were distinct and physically consistent with independent observations. While PCA was able to capture the general embedding structure (with EOFs 1-3 largely analogous to LEs 1-3), it struggled to represent the global structure of the data, as many of these processes are inherently nonlinear.
So what does this all mean?
Conclusions
You’ve made it to the end!
I realize this has been a lengthy post, so I’ll keep this section brief. In summary, we’ve developed a high-quality dataset of precipitation observations from multiple sites over several years and used this data to apply both linear and nonlinear dimensionality reduction techniques, aiming to learn more about the structure of the data itself! Across all methods, embeddings related to particle phase, precipitation intensity, and particle size/shape were the most dominant. However, only the nonlinear techniques were able to capture the complex global structure of the data, revealing distinct precipitation groups that aligned well with independent observations.
We believe these groups (and particle transitionary pathways) can be used to improve current satellite precipitation retrievals as well as numerical model microphysical parameterizations. With this in mind, we have constructed an operational parameter matrix (the lookup space is illustrated in Fig. 18) which produces a smooth conditional probability vector for each group based on temperature (T) and particle counts (Nt). Please see the associated manuscript for access/API details to this table.
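To give a sense of how such a lookup might be consumed, here is a toy sketch with a randomly filled table; the grid spacing, group count, and nearest-cell lookup are placeholders of our own (the real operational matrix is smoother and is documented in the manuscript).

```python
import numpy as np

# Hypothetical lookup space: per-group probabilities gridded by surface
# temperature (deg C) and total particle count. Values are random placeholders.
rng = np.random.default_rng(0)
T_grid = np.linspace(-20.0, 10.0, 31)
Nt_grid = np.logspace(0.0, 4.0, 41)
prob_table = rng.dirichlet(np.ones(9), size=(31, 41))  # shape (31, 41, 9)

def group_probabilities(T, Nt):
    """Return the conditional probability vector for the nearest (T, Nt) cell."""
    i = np.abs(T_grid - T).argmin()
    j = np.abs(Nt_grid - Nt).argmin()
    return prob_table[i, j]

print(group_probabilities(-5.0, 250.0))  # sums to 1 across the nine groups
```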
Nonlinear dimensionality reduction techniques like UMAP are still relatively new and have yet to be widely applied to the large datasets emerging in the Geosciences. It should be noted that these techniques are imperfect, and there are tradeoffs based on your problem context, so keep that in mind. However, our findings here, building first on PCA, suggest that these techniques can be highly effective, emphasizing the value of carefully curated and comprehensive observational databases, which we hope to see more of in the coming years.
Thanks again for reading, and let us know in the comments how you are thinking about learning more from your large observational datasets!
Data and Code
PIP and surface meteorologic observations used as input to the PCA and UMAP are publicly available for download on the University of Michigan’s DeepBlue data repository (https://doi.org/10.7302/37yx-9q53). This dataset is provided as a series of folders containing NetCDF files for each site and year, with standardized CF metadata naming conventions. For more detailed information, please see our data paper (https://doi.org/10.1029/2024EA003538). ERA5 data can be downloaded from the Copernicus Climate Data Store.
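If you want to poke at the files in Python, xarray reads them directly. The filename below is a made-up placeholder, so substitute the actual per-site, per-year file you download from the repository.

```python
import xarray as xr

# Placeholder filename; use the actual per-site, per-year NetCDF file here.
ds = xr.open_dataset("PIP_MQT_2023.nc")
print(ds)        # dimensions, coordinates, and CF-named variables
print(ds.attrs)  # global metadata
```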
PIP data preprocessing code is available on our public GitHub repository (https://github.com/frasertheking/pip_processing), and we have provided a custom API for interacting with the particle microphysics data in Python called pipdb (https://github.com/frasertheking/pipdb). The snowfall PCA project code is available on GitHub (https://github.com/frasertheking/snowfall_pca). Additionally, the code used to fit the DR methods, cluster cases, analyze inputs, and generate figures is also available for download on a separate, public GitHub repository (https://github.com/frasertheking/umap).
References
Alizadeh, O. (2022). Advances and challenges in climate modeling. Climatic Change, 170(1), 18. https://doi.org/10.1007/s10584-021-03298-4
Auton, A., Abecasis, G. R., Altshuler, D. M., Durbin, R. M., Abecasis, G. R., Bentley, D. R., Chakravarti, A., Clark, A. G., Donnelly, P., Eichler, E. E., Flicek, P., Gabriel, S. B., Gibbs, R. A., Green, E. D., Hurles, M. E., Knoppers, B. M., Korbel, J. O., Lander, E. S., Lee, C., … National Eye Institute, N. (2015). A global reference for human genetic variation. Nature, 526(7571), 68-74. https://doi.org/10.1038/nature15393
Cooper, S. J., Wood, N. B., & L’Ecuyer, T. S. (2017). A variational technique to estimate snowfall rate from coincident radar, snowflake, and fall-speed observations. Atmospheric Measurement Techniques, 10(7), 2557-2571. https://doi.org/10.5194/amt-10-2557-2017
Dolan, B., Fuchs, B., Rutledge, S. A., Barnes, E. A., & Thompson, E. J. (2018). Primary Modes of Global Drop Size Distributions. Journal of the Atmospheric Sciences, 75(5), 1453-1476. https://doi.org/10.1175/JAS-D-17-0242.1
Duffy, G., & Posselt, D. J. (2022). A Gamma Parameterization for Precipitating Particle Size Distributions Containing Snowflake Aggregates Drawn from Five Field Experiments. Journal of Applied Meteorology and Climatology, 61(8), 1077-1085. https://doi.org/10.1175/JAMC-D-21-0131.1
IPCC, 2023: Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, H. Lee and J. Romero (eds.)]. IPCC, Geneva, Switzerland, pp. 35-115, doi: 10.59327/IPCC/AR6-9789291691647.
Karpatne, A., Ebert-Uphoff, I., Ravela, S., Babaie, H. A., & Kumar, V. (2019). Machine Learning for the Geosciences: Challenges and Opportunities. IEEE Transactions on Knowledge and Data Engineering, 31(8), 1544-1554. https://doi.org/10.1109/TKDE.2018.2861006
King, F., Pettersen, C., Bliven, L. F., Cerrai, D., Chibisov, A., Cooper, S. J., L’Ecuyer, T., Kulie, M. S., Leskinen, M., Mateling, M., McMurdie, L., Moisseev, D., Nesbitt, S. W., Petersen, W. A., Rodriguez, P., Schirtzinger, C., Stuefer, M., von Lerber, A., Wingo, M. T., … Wood, N. (2024). A Comprehensive Northern Hemisphere Particle Microphysics Data Set From the Precipitation Imaging Package. Earth and Space Science, 11(5), e2024EA003538. https://doi.org/10.1029/2024EA003538
McInnes, L., Healy, J., & Astels, S. (2017). hdbscan: Hierarchical density based clustering. The Journal of Open Source Software, 2(11), 205. https://doi.org/10.21105/joss.00205
McInnes, L., Healy, J., & Melville, J. (2020). UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction (arXiv:1802.03426). arXiv. https://doi.org/10.48550/arXiv.1802.03426
Newman, A. J., Kucera, P. A., & Bliven, L. F. (2009). Presenting the Snowflake Video Imager (SVI). Journal of Atmospheric and Oceanic Technology, 26(2), 167-179. https://doi.org/10.1175/2008JTECHA1148.1
Pettersen, C., Bliven, L. F., von Lerber, A., Wood, N. B., Kulie, M. S., Mateling, M. E., Moisseev, D. N., Munchak, S. J., Petersen, W. A., & Wolff, D. B. (2020). The Precipitation Imaging Package: Assessment of Microphysical and Bulk Characteristics of Snow. Atmosphere, 11(8), Article 8. https://doi.org/10.3390/atmos11080785
Pettersen, C., Bliven, L. F., Kulie, M. S., Wood, N. B., Shates, J. A., Anderson, J., Mateling, M. E., Petersen, W. A., von Lerber, A., & Wolff, D. B. (2021). The Precipitation Imaging Package: Phase Partitioning Capabilities. Remote Sensing, 13(11), Article 11. https://doi.org/10.3390/rs13112183
Sturm, M., Goldstein, M. A., & Parr, C. (2017). Water and life from snow: A trillion dollar science question. Water Resources Research, 53(5), 3534-3544. https://doi.org/10.1002/2017WR020840
Wood, N. B., & L’Ecuyer, T. S. (2021). What millimeter-wavelength radar reflectivity reveals about snowfall: An information-centric analysis. Atmospheric Measurement Techniques, 14(2), 869-888. https://doi.org/10.5194/amt-14-869-2021
Zhou, C., Wang, H., Wang, C., Hou, Z., Zheng, Z., Shen, S., Cheng, Q., Feng, Z., Wang, X., Lv, H., Fan, J., Hu, X., Hou, M., & Zhu, Y. (2021). Geoscience knowledge graph in the big data era. Science China Earth Sciences, 64(7), 1105-1114. https://doi.org/10.1007/s11430-020-9750-4