
Data-Driven Closure for Turbulence-Chemistry interaction



Prerequisites


Before reading this tutorial, you should know a bit of Python. If you would like to refresh your memory, take a look at the Python tutorial.

We are going to work on the reduced dataset that is available on the project's GitHub page. Make sure to download the data folder before starting.

The source code is available in the tutorials folder.

Learner Profile


This tutorial is aimed at a user who has a solid understanding of Computational Fluid Dynamics (CFD) for Combustion. A basic understanding of Machine learning is not required.

The tutorial could also be beneficial for those who do not have strong competencies in CFD but know Data Science; in fact, apart from some computations of quantities of interest in reacting flows, the basic operations we are going to perform on the data amount to filtering 3D fields.

Dataset Description


The dataset is extracted from a DNS simulation of a lifted non-premixed hydrogen flame. The variables saved comprise Temperature, Species Mass Fractions, 3 Velocity components, and Pressure. The chemical mechanism used comprises 9 species.

What you will learn

You will learn:

  • how to use aPriori to compute the chemical source terms,

  • how to filter the results obtained to resemble a Large Eddy Simulation (LES) field,

  • and how to build a data-driven closure based on Neural Networks to model the subgrid interactions between Turbulence and Chemistry.

A-priori methodology

While DNS simulations resolve all the scales of turbulence, LES simulations solve only for the filtered quantities. With this approximation, it is possible to use grids with a spacing larger than the Kolmogorov scale: the grid spacing acts as a spatial filter. This means that the unresolved subgrid quantities must be modelled.
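To build some intuition about what a spatial filter does, here is a minimal, self-contained sketch using scipy's Gaussian filter on a synthetic 3D field. It is only an illustration; the aPriori filtering utilities used later in this tutorial play the same role on the real dataset:

>>> import numpy as np
>>> from scipy.ndimage import gaussian_filter

>>> # Build a synthetic 3D scalar field full of fine-scale fluctuations
>>> rng = np.random.default_rng(0)
>>> field = rng.normal(size=(64, 64, 64))

>>> # Smoothing with a Gaussian kernel damps the small scales, which is what an
>>> # LES grid does implicitly: only the large scales survive the filter
>>> field_filtered = gaussian_filter(field, sigma=4)

>>> print(f"std before: {field.std():.2f}, after: {field_filtered.std():.2f}")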

In this section, we are interested in the chemical source terms, that appear in the filtered species transport equations and the energy/enthalpy equation:

LES Filtered Species Transport Equation

$$\frac{\partial (\bar{\rho} \tilde{Y}_k)}{\partial t} + \frac{\partial (\bar{\rho} \tilde{Y}_k \tilde{u}_i)}{\partial x_i} = - \frac{\partial j_{k,i}^{sgs}}{\partial x_i} + \bar{\dot{\omega}}_k$$

where:

  • $\tilde{Y}_k$ is the Favre-averaged mass fraction of species $k$.

  • $j_{k,i}^{sgs}$ is the subgrid-scale diffusive flux of species $k$.

  • $\bar{\dot{\omega}}_k$ is the filtered chemical source term for species $k$.

In this section, we focus on the closure of the chemical source terms. From the DNS data, we can directly compute the reaction rates with Cantera (see figure below). This result is very accurate because it accounts for the interaction between turbulence and chemistry on a grid that resolves the transport and chemical phenomena at the fine-structure level. However, in an LES simulation we don't have this detailed information: we solve for the filtered temperature, velocity, pressure, and species concentrations. To obtain a field that resembles the information accessible in LES, we can filter the original DNS field. From this filtered, less accurate field we can then recompute the reaction rates. These values will be less accurate than before because the transport phenomena are no longer resolved down to the Kolmogorov scale.

Our goal is to compare the filtered DNS reaction rates with the reaction rates computed from the 'LES-like' field. To do so, we are going to introduce a correction factor, $\gamma$. In other models in the literature (e.g. the Partially Stirred Reactor model), this quantity represents the cell reacting fraction. In our data-driven model, it more generally assumes the meaning of a subgrid correction that accounts for the interaction between turbulence and chemistry.
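In other words, the quantity the model has to learn can be read as the ratio between the filtered DNS source term and the one recomputed from the filtered field. This reading follows from the training loss used later in the tutorial; it is a conceptual aid, not an aPriori formula:

$$\gamma \approx \frac{\bar{\dot{\omega}}_k^{DNS}}{\dot{\omega}_k^{LFR}}$$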

To train our Machine Learning (ML) model, we will use the dataset described above. However, the DNS data must be processed to retrieve the information regarding the subgrid quantities.

Data Processing

Compute reaction rates on the DNS grid

First of all, looking at the dataset we notice that the species reaction rates are not saved. DNS datasets can occupy a lot of space, so the reaction rates are generally not stored in order to save disk space. The first step is therefore to compute them on the DNS grid. This step corresponds to what is highlighted in Figure 2 with the lower green arrow.

"""
Created on Fri May 24 14:50:44 2024

@author: lorenzo piu
"""
>>> import os
>>> import aPrioriDNS as ap
>>> from aPrioriDNS.DNS import Field3D

>>> # Comment the following line if you already downloaded the dataset
>>> ap.download()

>>> directory = os.path.join('', 'Lifted_H2_subdomain') # change this to the path of your data folder
>>> T_path = os.path.join(directory, 'data', 'T_K_id000.dat')
>>> print(f"\nChecking the path '{T_path}' is correct...")
>>> if not os.path.exists(T_path):
...     raise ValueError(f"The path '{T_path}' does not exist on your system. Check that the code points to the correct data folder")

>>> # Initialize the DNS field
>>> DNS_field = Field3D(directory)

>>> # Compute the reaction rates
>>> DNS_field.compute_reaction_rates()

After a long output, you should find the reaction-rate files in the original data folder.
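To double-check that the new reaction-rate files were picked up, you can refresh the field and list its attributes. Field3D.update and Field3D.print_attributes are part of the API guide; we assume here that they take no arguments:

>>> DNS_field.update()           # rescan the data folder for the newly written files
>>> DNS_field.print_attributes() # print every scalar attribute and the file it maps to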

Filter the DNS field

At this point, we have to filter the DNS field to obtain the LES-like field. The choice to compute the reaction rates before this operation was not random: the filtering function implemented in this library filters all the variables inside the data folder, so now that we have computed the reaction rates, they will be filtered together with the other variables. This means that a single command performs both the steps represented in Figure 2 by the blue vertical arrow and by the blue horizontal arrow.

>>> # FILTERING
>>> filter_size = 16

>>> # filter DNS field and initialize filtered field
>>> filtered_field = Field3D(DNS_field.filter_favre(filter_size))
Output
---------------------------------------------------------------
Filtering Field '../../data/Lifted_H2_subdomain'...
Done Filtering Field '../../data/Lifted_H2_subdomain'.
Filtered Field path: '../../data/Filter16FavreGauss'.

---------------------------------------------------------------
Initializing 3D Field

Checking files inside folder ../../data/Filter16FavreGauss...

Folder structure OK

---------------------------------------------------------------
Building mesh attribute...
Mesh fields read correctly

---------------------------------------------------------------
Reading kinetic mechanism...
Kinetic mechanism file found: ../../data/Filter16FavreGauss/chem_thermo_tran/li_h2.yaml
Species:
['H2', 'O2', 'H2O', 'H', 'O', 'OH', 'HO2', 'H2O2', 'N2']

---------------------------------------------------------------
Building scalar attributes...
Field attributes:
+-----------+------------------------------------------------------------------+
| Attribute |                               Path                               |
+-----------+------------------------------------------------------------------+
|     P     |        ../../data/Filter16FavreGauss/data/P_Pa_id000.dat         |
|    RHO    |      ../../data/Filter16FavreGauss/data/RHO_kgm-3_id000.dat      |
|     T     |         ../../data/Filter16FavreGauss/data/T_K_id000.dat         |
|    U_X    |       ../../data/Filter16FavreGauss/data/UX_ms-1_id000.dat       |
|    U_Y    |       ../../data/Filter16FavreGauss/data/UY_ms-1_id000.dat       |
|    U_Z    |       ../../data/Filter16FavreGauss/data/UZ_ms-1_id000.dat       |
|    YH2    |         ../../data/Filter16FavreGauss/data/YH2_id000.dat         |
|    YO2    |         ../../data/Filter16FavreGauss/data/YO2_id000.dat         |
|   YH2O    |        ../../data/Filter16FavreGauss/data/YH2O_id000.dat         |
|    YH     |         ../../data/Filter16FavreGauss/data/YH_id000.dat          |
|    YO     |         ../../data/Filter16FavreGauss/data/YO_id000.dat          |
|    YOH    |         ../../data/Filter16FavreGauss/data/YOH_id000.dat         |
|   YHO2    |        ../../data/Filter16FavreGauss/data/YHO2_id000.dat         |
|   YH2O2   |        ../../data/Filter16FavreGauss/data/YH2O2_id000.dat        |
|    YN2    |         ../../data/Filter16FavreGauss/data/YN2_id000.dat         |
|  RH2_DNS  |  ../../data/Filter16FavreGauss/data/RH2_DNS_kgm-3s-1_id000.dat   |
|  RO2_DNS  |  ../../data/Filter16FavreGauss/data/RO2_DNS_kgm-3s-1_id000.dat   |
| RH2O_DNS  |  ../../data/Filter16FavreGauss/data/RH2O_DNS_kgm-3s-1_id000.dat  |
|  RH_DNS   |   ../../data/Filter16FavreGauss/data/RH_DNS_kgm-3s-1_id000.dat   |
|  RO_DNS   |   ../../data/Filter16FavreGauss/data/RO_DNS_kgm-3s-1_id000.dat   |
|  ROH_DNS  |  ../../data/Filter16FavreGauss/data/ROH_DNS_kgm-3s-1_id000.dat   |
| RHO2_DNS  |  ../../data/Filter16FavreGauss/data/RHO2_DNS_kgm-3s-1_id000.dat  |
| RH2O2_DNS | ../../data/Filter16FavreGauss/data/RH2O2_DNS_kgm-3s-1_id000.dat  |
|  RN2_DNS  |  ../../data/Filter16FavreGauss/data/RN2_DNS_kgm-3s-1_id000.dat   |
|  HRR_DNS  | ../../data/Filter16FavreGauss/data/HRR_DNS_kgm2s-3_DNS_id000.dat |
|    Mu     |     ../../data/Filter16FavreGauss/data/Mu_kgm-1s-1_id000.dat     |
+-----------+------------------------------------------------------------------+

Compute reaction rates on the filtered field

Now that we have the LES-like field, we can compute the reaction rates from the filtered quantities (green arrow on top in Figure 2).

>>> filtered_field.compute_reaction_rates()
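As a quick sanity check, it can be useful to compare the DNS and the filtered temperature fields side by side with the midplane plotting utilities. Field3D.plot_z_midplane is listed in the API guide; here we assume it takes the attribute name as a string:

>>> DNS_field.plot_z_midplane('T')      # sharp, fine-grained DNS temperature
>>> filtered_field.plot_z_midplane('T') # smoother, LES-like temperature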

Compute other interesting variables

Now we have the reaction rates (and the heat release rate, which is computed by the same utility). Some other variables can be useful to train our Neural Network, and we're going to compute them in the following lines of code.

>>> # compute the strain rate on the filtered field
>>> filtered_field.compute_strain_rate(save_tensor=True)

>>> # compute the residual dissipation rate with Smagorinsky model
>>> filtered_field.compute_residual_dissipation_rate(mode='Smag')

>>> # compute residual kinetic energy
>>> filtered_field.compute_residual_kinetic_energy()

>>> # compute the chemical timescale with the SFR and Chomiak models
>>> filtered_field.compute_chemical_timescale(mode='SFR')
>>> filtered_field.fuel = 'H2'
>>> filtered_field.ox = 'O2'
>>> filtered_field.compute_chemical_timescale(mode='Ch')
>>> filtered_field.compute_mixing_timescale(mode='Kolmo')
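With a chemical and a mixing timescale available, one natural derived quantity is a Damköhler-like number, sketched below. Tau_c_SFR is the attribute used later in this tutorial, while the mixing-timescale attribute name Tau_m_Kolmo is a guess; check print_attributes() for the actual name on your field:

>>> import numpy as np
>>> tau_c = filtered_field.Tau_c_SFR.value    # slowest chemical timescale (SFR model)
>>> tau_m = filtered_field.Tau_m_Kolmo.value  # mixing timescale (hypothetical attribute name)
>>> Da = tau_m/tau_c                          # Damköhler number: chemistry is fast where Da >> 1
>>> print(np.mean(Da), np.max(Da))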

Neural network training and testing

This chapter assumes that the reader is already familiar with Neural Networks (NNs) and knows how to code with PyTorch. In case you're not familiar with these concepts, I strongly suggest the following sources:

  • Steve Brunton's 'Data Driven Engineering' book.

  • This video series from 3blue1brown is an insightful and clear explanation.

  • The PyTorch Documentation and Tutorials are a complete source for learning this library.

  • The exercise of the class of Data-Driven Engineering for the 1st-year master's students at Université Libre de Bruxelles. This source is less detailed but can provide quick, straightforward information if you already have a basic understanding of the topic.

Data preprocessing

The goal is to use, as inputs to the NN, variables that are available at the LES scale in order to regress a subgrid-scale value. This is similar to the classic approach to modelling subgrid values, i.e. algebraic equations that compute the subfilter quantities from the filtered values. Here, however, we are going to use Machine Learning instead of a mathematical equation.

First of all, it's necessary to choose the equation to use for the closure. In the literature, the Eddy Dissipation Concept (EDC) and the Partially Stirred Reactor (PaSR) model both use a multiplicative factor, $\gamma$, that represents the fraction of the CFD cell containing the fine structures (where chemical reactions mainly happen). We're going to use this same equation to correct the source terms, taking into account the subgrid phenomena:

$$\bar{\dot{\omega}}_k = \gamma\, \dot{\omega}_k^{LFR}$$

where

  • $\dot{\omega}_k^{LFR} = \dot{\omega}_k(\tilde{T}, \tilde{Y}_k)$ is the reaction rate of species $k$ computed with the Laminar Finite Rate approximation, that is, from the filtered quantities.

  • $\gamma$ is the corrective term, which will be computed with a ML model.

To regress this correction term we need, again, NN inputs that are available at the LES scale. The three input variables chosen for this tutorial are the temperature, the strain rate, and the slowest chemical timescale. This choice was made because the cell reacting fraction strongly depends on the interaction between turbulence and chemistry, so quantities describing both phenomena were chosen.

>>> ###############################################################################
>>> #                        Machine Learning Closure
>>> ###############################################################################
>>> import torch
>>> import torch.nn as nn
>>> import torch.optim as optim
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from sklearn.model_selection import train_test_split

>>> # DATA PROCESSING
>>> shape  = filtered_field.shape
>>> length = shape[0]*shape[1]*shape[2]
>>> T      = filtered_field.T.value.reshape(length,1) # extract the value of Temperature from the filtered field and reshape it as a column vector
>>> S      = filtered_field.S_LES.value.reshape(length,1)
>>> Tau_c  = filtered_field.Tau_c_SFR.value.reshape(length,1)

>>> HRR_LFR = filtered_field.HRR_LFR.value.reshape(length,1)
>>> HRR_DNS = filtered_field.HRR_DNS.value.reshape(length,1)

>>> # Data scaling: min-max normalization (log scale for S and Tau_c)
>>> T      = (T-np.min(T))/(np.max(T)-np.min(T))
>>> S      = np.log10(S)
>>> S      = (S-np.min(S))/(np.max(S)-np.min(S))
>>> Tau_c  = np.log10(Tau_c)
>>> Tau_c  = (Tau_c-np.min(Tau_c))/(np.max(Tau_c)-np.min(Tau_c))

>>> # Build the training vector
>>> X = np.hstack([T, S, Tau_c])

>>> # Divide the data into train and test sets (test_size=0.9 keeps only 10% of the points for training)
>>> X_train, X_test, HRR_LFR_train, HRR_LFR_test, HRR_DNS_train, HRR_DNS_test = train_test_split(
...     X, HRR_LFR, HRR_DNS, test_size=0.9, random_state=42)

>>> X_train = torch.tensor(X_train, dtype=torch.float32)
>>> X_test  = torch.tensor(X_test,  dtype=torch.float32)
>>> HRR_LFR_train = torch.tensor(HRR_LFR_train, dtype=torch.float32)
>>> HRR_LFR_test  = torch.tensor(HRR_LFR_test,  dtype=torch.float32)
>>> HRR_DNS_train = torch.tensor(HRR_DNS_train, dtype=torch.float32)
>>> HRR_DNS_test  = torch.tensor(HRR_DNS_test,  dtype=torch.float32)
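A side note on the scaling block above: without the parentheses around the numerator, `T - np.min(T)/(np.max(T)-np.min(T))` would only subtract a scaled minimum instead of normalizing. A small helper (plain NumPy, not an aPriori utility) makes the intent explicit:

>>> def minmax(x):
...     """Min-max scale an array to the [0, 1] range."""
...     return (x - np.min(x)) / (np.max(x) - np.min(x))

>>> # equivalent to the scaling above: T = minmax(T); S = minmax(np.log10(S)); Tau_c = minmax(np.log10(Tau_c))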

Model architecture

To define the architecture in PyTorch, we have to define a class. This will be a subclass of torch.nn.Module.

The architecture that we're going to build is shown in Figure 3:

The choice of the optimal input variables is currently a topic of research, and an extremely interesting parameter to play with.

>>> # NN class definition
>>> class NeuralNetwork(nn.Module):
...     def __init__(self, input_size, hidden_size, output_size, num_layers):
...         super(NeuralNetwork, self).__init__()
...         self.layers = nn.ModuleList() # initialize the layers list as an empty list using nn.ModuleList()
...         self.layers.append(nn.Linear(input_size, hidden_size)) # Add the first input layer. The layer takes as input <input_size> neurons and gets as output <hidden_size> neurons
...         for _ in range(num_layers - 1):
...             self.layers.append(nn.Linear(hidden_size, hidden_size)) # Add hidden layers
...         self.layers.append(nn.Linear(hidden_size, output_size)) # add output layer

...     def forward(self, x):    # Function to perform forward propagation
...         for layer in self.layers[:-1]:
...             x = torch.relu(layer(x))
...         x = self.layers[-1](x)
...         return x
    
>>> # NN architecture
>>> input_size = 3
>>> num_layers = 6
>>> hidden_size = 64
>>> output_size = 1
>>> model = NeuralNetwork(input_size, hidden_size, output_size, num_layers)

>>> # Define loss function and optimizer
>>> criterion = nn.MSELoss()
>>> optimizer = optim.Adam(model.parameters()) # we use the Adam optimizer to update model.parameters(), but what is inside this attribute?
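To answer that question, you can inspect the tensors exposed by model.parameters(): one weight matrix and one bias vector per linear layer, which is exactly what Adam will update. A quick check (not part of the tutorial's original code):

>>> for name, p in model.named_parameters():
...     print(name, tuple(p.shape)) # e.g. layers.0.weight (64, 3), layers.0.bias (64,)
>>> print(sum(p.numel() for p in model.parameters()), 'trainable parameters')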

Training Loop

For the training process, we have to choose a function to optimize. A variable that can be helpful in our case is the Heat Release Rate (HRR), which the aPriori library computes together with the reaction rates. The network output $\gamma$ multiplies the LFR heat release rate, and the mean squared error between $\gamma \dot{Q}_{LFR}$ and the filtered DNS value $\dot{Q}_{DNS}$ is used as the training loss.

>>> # transfer on GPU
>>> device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
>>> # device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # if you're using a device with cuda 
>>> model = model.to(device)
>>> X_train = X_train.to(device)
>>> X_test = X_test.to(device)
>>> HRR_LFR_train = HRR_LFR_train.to(device)
>>> HRR_DNS_train = HRR_DNS_train.to(device)
>>> HRR_LFR_test = HRR_LFR_test.to(device)
>>> HRR_DNS_test = HRR_DNS_test.to(device)

>>> # Move the optimizer's state to the same device
>>> for state in optimizer.state.values():
...     for k, v in state.items():
...         if isinstance(v, torch.Tensor):
...             state[k] = v.to(device)

>>> # Lists to store training loss and testing loss
>>> train_loss_list = []
>>> test_loss_list = []

>>> # Training loop
>>> num_epochs = 1000
>>> for epoch in range(num_epochs):
...     # Forward pass
...     output = model(X_train)
...
...     # Compute the loss on training data
...     loss = criterion(output*HRR_LFR_train, HRR_DNS_train)
...
...     # Compute the loss on testing data. NOTE: the test loss is never used for optimization!
...     with torch.no_grad():
...         output_test = model(X_test)
...         loss_test = criterion(output_test*HRR_LFR_test, HRR_DNS_test)
...
...     # Backprop and optimize
...     optimizer.zero_grad()
...     loss.backward()
...     optimizer.step()
...
...     train_loss_list.append(loss.item()) # Save the losses
...     test_loss_list.append(loss_test.item())
...
...     if (epoch + 1) % 100 == 0: # print the progress every 100 epochs
...         print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}, Test loss: {loss_test.item():.4f}')

>>> # Plot training and testing loss
>>> plt.plot(train_loss_list, label='Training Loss')
>>> plt.plot(test_loss_list, label='Testing Loss')
>>> plt.xlabel('Epoch')
>>> plt.ylabel('Loss')
>>> plt.title('Training and Testing Loss')
>>> plt.yscale('log')
>>> plt.legend()
>>> plt.show()
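Once the training has converged, you may want to persist the trained weights so the closure can be reused without retraining (the file name here is arbitrary):

>>> torch.save(model.state_dict(), 'gamma_closure.pt')
>>> # to reload later: model.load_state_dict(torch.load('gamma_closure.pt', map_location=device))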

Results visualization

We can visualize the results obtained with the neural network by leveraging aPriori's plotting utilities.

First, we evaluate the Neural Network on the full dataset, so that the output can be visualized spatially:

# PLOTTING
with torch.no_grad():
    gamma = model(torch.tensor(X, dtype=torch.float32).to(device)).cpu().numpy()

# Visualize the model's accuracy on the full dataset with parity plots
f = ap.parity_plot(HRR_DNS, HRR_LFR, density=True, 
               x_name=r'$\dot{Q}_{DNS}$',
               y_name=r'$\dot{Q}_{LFR}$',
               cbar_title=r'$\rho_{KDE}/max(\rho_{KDE})$',
               )
f = ap.parity_plot(HRR_DNS, gamma*HRR_LFR, density=True, 
               x_name=r'$\dot{Q}_{DNS}$',
               y_name=r'$\dot{Q}_{ML}$',
               cbar_title=r'$\rho_{KDE}/max(\rho_{KDE})$',
               )
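Parity plots give a qualitative picture; a simple quantitative complement is to compare the mean squared error of the two closures against the filtered DNS heat release rate (plain NumPy, computed on the full dataset):

mse_lfr = np.mean((HRR_LFR - HRR_DNS)**2)
mse_ml  = np.mean((gamma*HRR_LFR - HRR_DNS)**2)
print(f"MSE LFR: {mse_lfr:.3e} | MSE ML: {mse_ml:.3e}")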

Another interesting visualization can be done using the contour plot utility:

# Extract the z midplane
gamma_2D = gamma.reshape(filtered_field.shape)[:,:,filtered_field.shape[2]//2] # extract the z midplane of gamma
HRR_LFR_2D = HRR_LFR.reshape(filtered_field.shape)[:,:,filtered_field.shape[2]//2]# extract the z midplane
HRR_ML_2D = gamma_2D * HRR_LFR_2D
HRR_DNS_2D = HRR_DNS.reshape(filtered_field.shape)[:,:,filtered_field.shape[2]//2]# extract the z midplane

# plot the error on the z midplane
f = ap.contour_plot(filtered_field.mesh.X_midZ*1000,   # Extract x mesh on the z midplane
                    filtered_field.mesh.Y_midZ*1000,   # Extract y mesh on the z midplane
                    np.abs(HRR_LFR_2D-HRR_DNS_2D),
                    vmax=1.5e10,
                    colormap='Reds',
                    x_name='x [mm]',
                    y_name='y [mm]',
                    title=r'$|\dot{Q}_{LFR}-\dot{Q}_{DNS}|$'
                    )

f = ap.contour_plot(filtered_field.mesh.X_midZ*1000,   # Extract x mesh on the z midplane
                    filtered_field.mesh.Y_midZ*1000,   # Extract y mesh on the z midplane
                    np.abs(HRR_ML_2D-HRR_DNS_2D),
                    vmax=1.5e10,
                    colormap='Reds',
                    x_name='x [mm]',
                    y_name='y [mm]',
                    title=r'$|\dot{Q}_{ML}-\dot{Q}_{DNS}|$'
                    )

# Visualize the NN output
f = ap.contour_plot(filtered_field.mesh.X_midZ*1000,   # Extract x mesh on the z midplane
                    filtered_field.mesh.Y_midZ*1000,   # Extract y mesh on the z midplane
                    gamma_2D,
                    colormap='viridis',
                    x_name='x [mm]',
                    y_name='y [mm]',
                    title=r'$\gamma_{NN}$'
                    )


[Figure 1 - Subdomain of the DNS dataset used for this tutorial]
[Figure 2 - Methodology to process DNS data to train a Neural Network that corrects the chemical source terms closure]
[Figure 3 - Model architecture]
[Figure 3 - Training process convergence]
[Figure 4 - Parity plots representing the error with LFR and with the ML models]
[Figure 5 - Error with respect to the DNS results plotted on the z midplane for LFR and ML models]
[Figure 6 - Neural Network output]