Review  |  Open Access  |  17 Oct 2023

Recent advances in the interface structure prediction for heteromaterial systems

J Mater Inf 2023;3:22.
10.20517/jmi.2023.24 |  © The Author(s) 2023.

Abstract

The atomic structures of solid-solid interfaces in materials are of fundamental importance for understanding the physical properties of interfacial materials; they are, however, difficult to determine by either experimental or theoretical approaches. New theoretical methodologies utilizing various global optimization algorithms and machine learning (ML) potentials have emerged in recent years, offering a promising approach to unraveling interfacial structures. In this review, we give a concise overview of state-of-the-art techniques employed in the studies of interfacial structures, e.g., the ML-assisted phenomenological theory for the global search of interface structures (ML-interface). We also present a few applications of these methodologies.

Keywords

Solid-solid interfaces, machine learning, ML-interface, interface structure prediction

INTRODUCTION

Solid-solid interfaces are widely present in materials and have a significant impact on their physical and chemical properties[1-3]. Understanding them is a crucial scientific question in multiple important fields. For example, in field-effect transistors (FET)[4], the semiconductor/dielectric interface is a key structural element for functionality, and the quality of the interface directly affects the transistor performance[5-7]. Similarly, in all-solid-state batteries, the electrode/solid-electrolyte interfaces significantly affect the migration of Li+/Na+ ions and determine the overall battery performance[8-11]. In photocatalysis or photovoltaic devices, interfaces control the migration and diffusion processes of charge carriers, thus influencing the efficiency of photocatalysis and photoelectric conversion[12].

However, the determination of interface structures poses challenges both in experimental and theoretical aspects[13]. Experimental techniques, such as high-resolution transmission electron microscopy (HRTEM), face difficulties in characterizing interface structures and compositions because they are hidden within the bulk of the material[14,15]. Moreover, interfaces can exhibit complex structures due to the presence of unknown defects resulting from the atomic arrangement at the junction of two chemically distinct environments. On the other hand, theoretical predictions of interface structures lack comprehensive and rapid methods to consider various possible interface orientations and accurately describe the bonding interactions at the interface[13,16-20]. Therefore, the development of effective methods for predicting interface structures can significantly advance progress in multiple fields.

Global optimization, which aims to find the global minimum in a predefined multidimensional space, is a powerful approach to identifying the most stable structure for a given chemical composition[21]. (This process contrasts sharply with local optimization, which focuses on finding the local minimum within a nearby region of a given starting point, potentially missing the global optimum that lies in a different region of the solution space.) The challenge in global optimization is to design algorithms capable of efficiently navigating the expansive solution space and steering the search toward low-energy structures. Various sophisticated algorithms, such as basin-hopping[22], genetic algorithms[13,23], simulated annealing[24,25], particle swarm optimization[26,27], differential evolution[28], random sampling[29], evolutionary algorithms[30], and stochastic surface walking (SSW)[31-33], have been developed to tackle these challenges, offering a robust approach to finding solutions that satisfy the global criteria. However, directly applying global optimization to the interface structure search is problematic. This is because the most stable interface may not be the global minimum but a metastable structure subject to additional constraints, such as lattice matching and the inherent restrictions imposed by the neighboring bulks on either side of the interface. Global optimization therefore needs to be modified when the purpose is to find the interface structure.

The theoretical prediction of interface structures can be achieved by the following two common steps: (i) determining the orientation relationship (OR) for lattice matching and (ii) characterizing the atomic structure at the interface. In recent years, there have been notable advances in the theoretical prediction of interface structures. A series of interface structure prediction methods based on global optimization have been proposed[4,12,23,34-37] using different global optimization algorithms. A few structures predicted by these interface structure prediction methods have been validated atom-by-atom using in-situ microscopy experiments[4,37]. This review mainly focuses on selected global optimization methods, namely genetic algorithms[35,37], particle swarm optimization[27,34,38] [implemented in CALYPSO (Crystal Structural Analysis by Particle Swarm Optimization)], and SSW[4,12,39-42], while many other global optimization algorithms[22,30] can also be applied in the interface structure search.

On the other hand, machine learning (ML) emerges as a promising technology that can be employed to speed up the prediction of interface structures. There are two ways to accelerate interface structural searches using ML. One is to adopt ML potentials (a discriminative ML model) as a fast surrogate for DFT (density functional theory) calculations, as in our proposed ML-interface method[33,43-45]. The other is to employ generative ML models [e.g., generative adversarial networks (GANs)[46-48] and variational autoencoders (VAEs)[46]] to directly generate structures. At present, generative ML models are primarily utilized to generate molecular and bulk structures[49-52], and to our knowledge, there has been no report on their application in generating interface structures.

In this review, we first briefly introduce the advances of other groups in this area. Then, we present our recently developed ML-interface method, which is based on phenomenological theory and ML potentials. Finally, we discuss the challenges and open questions in this field and the possible applications of generative ML models in future works.

MODELS AND STABILITIES FOR INTERFACE

There are two types of models for interface structure simulations [Figure 1]. The first one is the “...|vacuum|A|B|vacuum|A|B|vacuum|...” model [Figure 1A], and the second one is the “...|A|B|A|B|A|...” model, where “|” represents the interface or surface, and “...” denotes the periodic boundary condition [Figure 1B]. The first type of model includes the A|B interface and the A|vacuum and B|vacuum surfaces. In this configuration, the c-axis orientation of the unit cell does not affect the matching of interface atoms; it only needs to be perpendicular to the interface. The properties of this model are influenced not only by the A|B interface but also by the A|vacuum and B|vacuum surfaces, so it is necessary to distinguish the effects of the interface and surfaces during the research. The second type of model includes only the A|B interface. In this case, the orientation of the c-axis affects the atomic matching at the interface, making the model construction more complex compared to the first type, as it needs to simultaneously match both the upper and lower interfaces.


Figure 1. (A) Slab model with an interface region, two bulk phase regions, and vacuum in the unit cell; (B) The model with two equivalent interface regions and two bulk phase regions in the simulated cell. Reproduced with permission[35]. Copyright 2020, Elsevier.

To estimate the stability of interfaces, we need to calculate the interface energy. For the first type of model, the formula to calculate the interface energy is:

$$ \gamma_{A|B}=\left(\mathrm{E}_\mathrm{A|B}-\mathrm{E}_\mathrm{A}-\mathrm{E}_\mathrm{B}\right) / 2 \mathrm{S}-\mathrm{\gamma_{A}}-\mathrm{\gamma_{B}} $$

where γA|B is the interface energy; EA|B, EA, EB are the DFT total energy for the interface, reference bulk A, and reference bulk B, respectively; S is the interfacial area; γA and γB are the surface energy of A|vacuum and B|vacuum, respectively.

If the A|B interface and the B|A interface are equivalent, meaning that the two bulk phase regions exhibit a mirror or inversion symmetry, then formula 1 directly calculates the interface energy. Otherwise, formula 1 gives the average of γA|B and γB|A, and there is no simple way to calculate the individual γA|B and γB|A.

For the second model, the interface energy is calculated as follows:

$$ \gamma_{\mathrm{A|B}}=\left(\mathrm{E}_{\mathrm{A|B}}-\mathrm{E}_{\mathrm{A}}-\mathrm{E}_{\mathrm{B}}\right) / 2 \mathrm{S} $$

Similarly, individual γA|B can only be obtained when the A|B interface and the B|A interfaces are equivalent.
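To make the energy bookkeeping of Eqs. (1) and (2) concrete, the two formulas can be evaluated directly from total energies. The following Python sketch uses entirely hypothetical numbers for the total energies, surface energies, and interfacial area; it only illustrates the arithmetic and the unit conversion, not any particular system.

```python
# Minimal sketch of the interface-energy bookkeeping in Eqs. (1) and (2).
# All numerical inputs below are hypothetical placeholders.
EV_PER_A2_TO_J_PER_M2 = 16.0218  # 1 eV/A^2 = 16.0218 J/m^2

def gamma_type1(E_AB, E_A, E_B, S, gamma_A, gamma_B):
    """Interface energy of the '...|vacuum|A|B|vacuum|...' model, Eq. (1).
    E_AB, E_A, E_B: total energies of the interface slab and the two reference bulks (eV);
    S: interfacial area (A^2); gamma_A, gamma_B: A|vacuum and B|vacuum surface energies (eV/A^2)."""
    return (E_AB - E_A - E_B) / (2.0 * S) - gamma_A - gamma_B

def gamma_type2(E_AB, E_A, E_B, S):
    """Interface energy of the periodic '...|A|B|A|B|...' model, Eq. (2),
    which contains two equivalent interfaces per simulation cell."""
    return (E_AB - E_A - E_B) / (2.0 * S)

# Hypothetical example values (eV, A^2, eV/A^2):
g = gamma_type1(E_AB=-1190.0, E_A=-600.0, E_B=-601.0, S=50.0, gamma_A=0.02, gamma_B=0.02)
print(f"gamma(A|B) = {g * EV_PER_A2_TO_J_PER_M2:.2f} J/m^2")
```

Note that, as stated above, both formulas return the average of γA|B and γB|A unless the two interfaces in the cell are equivalent by symmetry.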

METHODS FOR INTERFACE STRUCTURE PREDICTION

Directly gluing two phases with manually selected orientation relationships

A simple approach for constructing interface models is to glue two phases with selected ORs by manually comparing the crystal structures to identify the lattice-matched interface. For instance, Moreno et al. constructed the rutile-TiO2(110)//rocksalt-TiN(100) interface [Figure 2A] in this way[53]. They found that TiO2(110) has surface lattice parameters of a[001] = 2.96 Å, $$ \mathrm{b}_{[1 \overline{1} 0]} $$ = 6.50 Å, and θa^b = 90°, while TiN(100) has a[001] = 4.26 Å, b[010] = 4.26 Å, and θa^b = 90°. According to these surface lattice parameters, they determined that a 3 × 2 supercell of TiO2(110) can match a 2 × 3 supercell of TiN(100) with a low strain of 4%, so these surface supercells were glued in a single unit cell to build the TiO2//TiN interface [Figure 2B]. The atoms at the interface were then relaxed using geometry optimization, which produces the final TiO2//TiN interface structure.
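The supercell matching described above amounts to comparing the matched supercell edge lengths of the two surfaces. A minimal sketch, assuming only the rectangular surface lattice parameters quoted in the text, reproduces the reported strain of about 4%:

```python
# Surface lattice parameters from the text (angstrom); both surface cells are rectangular.
a_tio2, b_tio2 = 2.96, 6.50   # rutile TiO2(110): a[001], b[1-10]
a_tin,  b_tin  = 4.26, 4.26   # rocksalt TiN(100): a[001], b[010]

def strain(l1, l2):
    """Relative mismatch between two matched supercell edges."""
    return abs(l1 - l2) / ((l1 + l2) / 2.0)

# A 3 x 2 TiO2(110) supercell matched with a 2 x 3 TiN(100) supercell.
strain_a = strain(3 * a_tio2, 2 * a_tin)   # 8.88 vs 8.52 A
strain_b = strain(2 * b_tio2, 3 * b_tin)   # 13.00 vs 12.78 A
print(f"strain along a: {strain_a:.1%}, along b: {strain_b:.1%}")
```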


Figure 2. (A) Surface structures of rutile TiO2(110) and rock-salt TiN(100); (B) The relaxed interface structure of TiO2(110)//TiN(100). Colors in the figures: Grey balls are Ti; red balls are O; blue balls are N. Reproduced with permission[53]. Copyright 2017, American Chemical Society.

It should be noted that the above interface structure prediction method does not fully consider the complexities of real interfaces. For example, when finding lattice-matched interfaces, it only expands the unit cell along its axes without considering non-diagonal supercells [e.g., the $$ \sqrt{2} $$ × $$ \sqrt{2} $$ cell of TiN(100)]. As a result, many ORs are missed. On the other hand, during atomic relaxation, only geometry optimization is performed. This procedure is sensitive to the initial structure and can easily be trapped in a local minimum.

A more sophisticated approach to finding the lattice-matched interface was proposed by Zur and McGill[34]. They first select a few Miller indices for each phase to generate the corresponding two-dimensional (2D) lattices. Then, the 2D lattices are reduced to canonical ones using a Niggli reduction approach[54-56]. Subsequently, potential superlattices that meet the given criteria are enumerated, enabling a comparison to ascertain whether two 2D lattices share a common superlattice.
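The essence of the Zur-McGill search can be sketched as a brute-force enumeration of 2D superlattices: for each surface cell, inequivalent integer supercell matrices up to a chosen area multiple are generated, and pairs of superlattices with nearly identical cell parameters are kept as matched interfaces. The sketch below is a simplified illustration of this idea (it compares the edge lengths and angles of the Hermite-normal-form bases directly instead of performing a full Niggli reduction); the tolerances and the maximum area multiple are made-up values.

```python
import numpy as np

def superlattices(cell2d, max_index=6):
    """Enumerate inequivalent 2D superlattices via Hermite-normal-form matrices
    [[a, b], [0, d]] with a*d = n <= max_index and 0 <= b < d (rows are lattice vectors)."""
    cell2d = np.asarray(cell2d, float)
    cells = []
    for n in range(1, max_index + 1):
        for a in range(1, n + 1):
            if n % a:
                continue
            d = n // a
            for b in range(d):
                cells.append(np.array([[a, b], [0, d]]) @ cell2d)
    return cells

def cell_params(cell2d):
    """Edge lengths and angle (degrees) of a 2D cell given by its two row vectors."""
    a, b = np.linalg.norm(cell2d[0]), np.linalg.norm(cell2d[1])
    cos = np.dot(cell2d[0], cell2d[1]) / (a * b)
    return a, b, np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def matched_pairs(cell_a, cell_b, len_tol=0.05, ang_tol=2.0):
    """Keep pairs of superlattices whose edge lengths agree within len_tol (relative)
    and whose cell angles agree within ang_tol (degrees)."""
    pairs = []
    for SA in superlattices(cell_a):
        aA, bA, gA = cell_params(SA)
        for SB in superlattices(cell_b):
            aB, bB, gB = cell_params(SB)
            if abs(aA - aB) / aA < len_tol and abs(bA - bB) / bA < len_tol and abs(gA - gB) < ang_tol:
                pairs.append((SA, SB))
    return pairs

# Example: the rectangular TiO2(110) and TiN(100) surface cells quoted in the previous subsection.
pairs = matched_pairs(np.diag([2.96, 6.50]), np.diag([4.26, 4.26]))
print(f"{len(pairs)} candidate superlattice matches found")
```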

On the other hand, the subsequent geometry optimization used here can only find the stable structure within a nearby region of a given starting structure, potentially missing the global minimum that lies in a different region of the structure space. To overcome these issues, it is desirable to employ more sophisticated optimization methods, e.g., molecular dynamics (MD) or global optimization.

Automated lattice-match method combined with ab initio molecular dynamics

Gao et al. developed an automated lattice-matched method that takes the structures of two arbitrary surfaces described by surface unit cells with primitive vectors as input[35]. In the first step, supercells are constructed for the two non-interacting surfaces starting from the corresponding surface unit cells. In the second step, the supercells are aligned and matched to create a supercell representation of the interface. The interface supercell is then accepted and added to the pool of interface structures if the strain between the two surfaces is below the given thresholds. The procedure is reiterated for all the possible supercells that can be constructed starting from the two surface unit cells.

Using the automated lattice-matched method, Tang et al. screened a few low-strain ORs for interfaces in an all-solid-state Na-ion battery, which comprises a layered NaCoO2 (O3 phase) cathode, a Na3PS4 (cubic phase) solid electrolyte, a Na metal anode, and optionally an α-Al2O3 buffer layer; the battery thus contains NaCoO2//Na3PS4, Na3PS4//Na, NaCoO2//Al2O3, Na3PS4//Al2O3, and Al2O3//Al interfaces[8]. Ab initio MD[8] was carried out to simulate the evolution of these interfaces. Changes in bonding were tracked over the MD trajectory by comparing the radial distribution functions (RDF) at the interface with those of known reference crystals. The results show that the formation of compounds containing SO42- and Na3P is preferred at the interface. This finding offers valuable insight for designing stable electrode/solid-electrolyte and buffer-layer/solid-electrolyte interfaces, addressing a critical bottleneck in the advancement of all-solid-state sodium-ion batteries.

CALYPSO method

The scheme for interface structure prediction, as implemented in the CALYPSO method, is shown in Figure 3A. The first step is to search the lattice-matched superlattice. This step first generates a series of Miller indices below a threshold. For each Miller index, this method enumerates all possible superlattices defined by 2D surface vectors below another threshold (similar to the Zur-McGill approach described in Section “Directly gluing two phases with manually selected orientation relationships”). The surfaces created are then subjected to symmetry operations corresponding to the space group of the crystal structure, which effectively eliminates symmetrically equivalent surfaces. Subsequently, the lattice-mismatch strain between the superlattices of materials A and B is assessed, and the pairs with small strains are preserved as a well-matched interface.


Figure 3. (A) Flow chart of an interface structure prediction method using the CALYPSO method; (B) Stable structures of s-GB, r-GB, and p-GB of rutile TiO2. The light and dark blue balls represent six- and five-coordinated Ti, respectively. The red, orange, and pink balls represent three-, four-, and five-coordinated O atoms, respectively. Reproduced with permission[35]. Copyright 2020, Elsevier. CALYPSO: Crystal Structural Analysis by Particle Swarm Optimization.

Based on the lattice-matched superlattice, an initial interface structure is generated according to the bonding constraints at the interface region. Then, particle swarm optimization is applied to predict the stable interface structure. The particle swarm optimization method is a global optimization method that has been successfully applied in predicting structures of a wide range of systems, including periodic crystals[57,58], clusters[59], and reconstructed surfaces[60,61].

The CALYPSO method was used to investigate the structures of three rutile TiO2 grain boundaries (GB), i.e., the stoichiometric GB structure (s-GB) and two novel nonstoichiometric GB structures (r-GB and p-GB). Their structures are depicted in Figure 3B. Each of the three stable GB structures exhibits asymmetry due to a displacement of the two grains. The local coordination environments in s-GB are consistent with those observed in the bulk phase, where Ti and O atoms have coordination numbers (CN) of six and three, respectively. In the case of r-GB, all the O atoms at the GB are coordinated with four neighboring Ti atoms, resembling the local coordination environment of the defective rutile bulk structure with Ti interstitials. In p-GB, apart from the four-coordinated O atoms, there are also five-coordinated O atoms and Ti atoms present at the GB due to the pronounced O deficiency within the GB. Altogether, the predicted r-GB and p-GB display O-deficient characteristics, suggesting a significant accumulation of oxygen defects within these two GBs.

Adaptive genetic algorithm

The adaptive genetic algorithm (AGA)[23] combines rapid structure exploration employing auxiliary classical potentials with accurate energy evaluation through iterative first-principles calculations. This approach assumes that the superlattices of the two phases have already been matched, and it can accelerate the interface structural search by more than 10^3 times while preserving the precision of first-principles calculations. In the AGA approach, the interface is represented by the type-I model (see Section “MODELS AND STABILITIES FOR INTERFACE”) and is divided into four distinct regions from bottom to top, i.e., fixed bulk, interface, rigid bulk, and vacuum [Figure 4A]. The atoms of the fixed-bulk region are fixed during structure exploration, while the atoms within the interface region are fully relaxed. The atoms in the rigid-bulk region move collectively through rigid-body translations relative to the fixed-bulk region. At last, the vacuum region is introduced to prevent interactions between the top and bottom surfaces of the model when applying periodic boundary conditions.


Figure 4. (A) Schematic representation of the interface model and mating operation in the adaptive genetic algorithm; (B) New grain boundary structures for SrTiO3 Σ3(112)$$ [1 \overline{1} 0] $$. Reproduced with permission[23]. Copyright 2014, American Chemical Society.

During the mating process, a pair of parent structures are chosen from the population pool, and the offspring is generated by applying the cut-and-paste operation to the interface regions. The probability of selecting a structure as a parent depends on its energy ranking within the pool and follows a Gaussian distribution centered around the lowest-energy structure with a standard deviation of one-quarter of the pool size. The offspring structures maintain the same chemical composition as their parent structures. This method was applied to the investigation of both stoichiometric and nonstoichiometric SrTiO3 Σ3(112)$$ [1 \overline{1} 0] $$ GBs with a unit cell containing up to 200 atoms [Figure 4B]. Several novel low-energy structures have been discovered, offering new insights into the composition and stability of the GB.
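The two key operators of the AGA, rank-based parent selection and interface-region cut-and-paste mating, can be sketched as below. This is not the published AGA code; the structure representation is reduced to bare Cartesian coordinates of the interface-region atoms, and a real implementation would additionally enforce composition conservation and handle the fixed-bulk and rigid-bulk regions.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_parent(pool_energies, sigma_frac=0.25):
    """Pick a parent index with a probability that decays as a Gaussian of its energy rank,
    centered on the lowest-energy structure, with sigma = one quarter of the pool size."""
    ranks = np.argsort(np.argsort(pool_energies))          # rank 0 = lowest energy
    sigma = sigma_frac * len(pool_energies)
    weights = np.exp(-ranks**2 / (2.0 * sigma**2))
    return rng.choice(len(pool_energies), p=weights / weights.sum())

def mate(parent1, parent2, z_cut):
    """Cut-and-paste mating restricted to the interface region: take the interface atoms
    below z_cut from parent1 and those above z_cut from parent2 (coordinates in angstrom)."""
    lower = parent1[parent1[:, 2] < z_cut]
    upper = parent2[parent2[:, 2] >= z_cut]
    return np.vstack([lower, upper])

# Hypothetical usage: a pool of 20 candidate energies and random interface coordinates.
energies = rng.normal(size=20)
i, j = select_parent(energies), select_parent(energies)
child = mate(rng.uniform(0.0, 10.0, (40, 3)), rng.uniform(0.0, 10.0, (40, 3)), z_cut=5.0)
print(f"parents {i} and {j} -> child with {len(child)} interface atoms")
```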

ML-INTERFACE METHOD

The ML-interface method involves three steps for interface structure prediction[4,41,42]: (i) lattice-matched OR screening; (ii) interface structure generation; (iii) global optimization. Herein, we will elaborate on the details of each step.

Generate lattice correspondences

The OR screening step aims to identify the optimal OR of lattice-matched interfaces using a slightly modified phenomenological theory of Martensitic crystallography (PTMC) method[62,63]. PTMC utilizes the presence of an invariant plane as the fundamental geometric constraint, where the lattice remains unaltered during the phase transition. The key aspect of PTMC involves determining the orientation of the invariant plane through lattice correspondence. Although initially developed for predicting ORs in martensitic transformations, the basic principles and mathematical methods of PTMC can be adapted to predict ORs in solid-solid interfaces. The main difference between these two application scenarios is that in martensitic transformations, the lattice correspondence is explicitly defined by the phase transition channel with the lowest energy barrier. In contrast, in the solid-solid interface, any lattice correspondence is permissible since they are not tied to a specific phase transition. Consequently, the fundamental approach to predicting ORs in lattice-matched interfaces involves sampling numerous lattice correspondences by modifying the unit cell definition and subsequently applying PTMC to calculate the OR of the invariant plane.

In the PTMC, the lattice correspondence is given by the deformation gradient F in Eq. (3).

$$ \mathbf{T F}=\mathbf{M} $$

where T and M are the superlattice parameters of two crystals that compose an interface. Both T and M are (3 × 3) matrices, where each row represents a basis vector of the unit cell. Mathematically, any superlattice parameters can be represented by the product of the lattice parameters of a primitive cell and a transformation matrix, as shown in Eqs. (4) and (5).

$$ \mathbf{T}=\mathbf{A}\mathbf{T}' $$

$$ \mathbf{M}=\mathbf{B}\mathbf{M}' $$

T’ and M’ are the lattice parameters of the primitive cells of the two crystals. A and B are the transformation matrices. We can generate numerous lattice correspondences by exhaustively enumerating matrices A and B.
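A minimal sketch of this enumeration, under the row-vector convention used above (each row of T and M is a basis vector), is given below. The element range of the transformation matrices, the determinant cutoff, and the two primitive cells are placeholder choices for illustration only.

```python
import itertools
import numpy as np

def integer_matrices(max_elem=1, max_det=4):
    """Integer 3x3 matrices with elements in [-max_elem, max_elem] and 0 < det <= max_det."""
    elems = range(-max_elem, max_elem + 1)
    for flat in itertools.product(elems, repeat=9):
        N = np.array(flat).reshape(3, 3)
        d = round(np.linalg.det(N))
        if 0 < d <= max_det:
            yield N

def deformation_gradients(T_prim, M_prim, **kwargs):
    """Lazily enumerate lattice correspondences: build superlattices T = A T' and M = B M'
    (Eqs. 4 and 5) and return the deformation gradient F solving Eq. (3), T F = M."""
    T_prim, M_prim = np.asarray(T_prim, float), np.asarray(M_prim, float)
    A_list = list(integer_matrices(**kwargs))
    for A, B in itertools.product(A_list, repeat=2):
        T, M = A @ T_prim, B @ M_prim
        yield A, B, np.linalg.solve(T, M)   # F such that T @ F = M

# Placeholder primitive cells (angstrom): a cubic cell and a tetragonal cell.
T_prim = 5.43 * np.eye(3)
M_prim = np.diag([4.9, 4.9, 5.4])
for A, B, F in itertools.islice(deformation_gradients(T_prim, M_prim), 3):
    print(F.round(3))
```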

Calculate the orientation of the lattice-matched interface

The Cauchy-Green deformation tensor C is then constructed to eliminate the rotation in F.

$$ \mathbf{C}=\mathbf{F}^{\mathrm{T}} \mathbf{F} $$

Then, we perform an eigendecomposition of C:

$$ \mathbf{C e}_{\mathbf{i}}=\mathrm{I}_{\mathrm{i}} \mathbf{e}_{\mathbf{i}} \quad(\mathrm{i}=1,2,3) $$

Ii and ei are the eigenvalues and eigenvectors, which represent the magnitude and direction of strain during phase transition, respectively. For clarity, we sort the eigenvalues as I1 < I2 < I3. According to PTMC, the existence of an invariant plane requires I2 = 1, I1 < 1, and I3 > 1[62-64]. Otherwise, no invariant plane exists for this lattice correspondence. In our code, we screen out the lattice correspondences that satisfy 0.95 < I2 < 1.05.

Then, we determine the orientation of the lattice-matched interface. The vector e2 represents one strain invariant line (sil) within the lattice-matched interface. Along sil, the lattice is neither stretched nor compressed during lattice transformation. The vector of another strain invariant line (sil2) is calculated by the linear combination of e1 and e3 through formulas (8)-(10):

$$ \mathrm{a}^{2}+\mathrm{c}^{2}=1 $$

$$ \mathrm{a}^{2} \mathrm{I}_{1}+\mathrm{c}^{2} \mathrm{I}_{3}=1 $$

$$ \mathbf{sil}_{2}=a\,\mathbf{e}_{1}+c\,\mathbf{e}_{3} $$

where a and c are the coefficients of linear combination. The normal vector of the lattice-matched interface (also denoted as strain invariant plane), sip is given by:

$$ \mathbf{sip}=\mathbf{e}_{2} \times \mathbf{sil}_{2} $$
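Putting Eqs. (6)-(11) together, the screening of a single lattice correspondence can be written compactly as below. This is a minimal sketch rather than the ML-interface implementation; the test deformation gradient at the end is a made-up diagonal matrix chosen so that one principal stretch is exactly one.

```python
import numpy as np

def invariant_plane(F, tol=0.05):
    """Apply the PTMC invariant-plane conditions to a deformation gradient F (Eqs. 6-11).
    Returns (sil, sil2, sip) or None when 0.95 < I2 < 1.05 with I1 < 1 < I3 is not satisfied."""
    C = F.T @ F                                   # Cauchy-Green tensor, Eq. (6)
    eigvals, eigvecs = np.linalg.eigh(C)          # Eq. (7); eigh returns ascending eigenvalues
    I1, I2, I3 = eigvals
    e1, e2, e3 = eigvecs.T
    if not (1.0 - tol < I2 < 1.0 + tol and I1 < 1.0 < I3):
        return None                               # no strain-invariant plane for this correspondence
    # Coefficients of the second strain-invariant line from Eqs. (8) and (9):
    # a^2 + c^2 = 1 and a^2*I1 + c^2*I3 = 1.
    c2 = (1.0 - I1) / (I3 - I1)
    a, c = np.sqrt(1.0 - c2), np.sqrt(c2)
    sil = e2                                      # first strain-invariant line
    sil2 = a * e1 + c * e3                        # Eq. (10)
    sip = np.cross(sil, sil2)                     # normal of the strain-invariant plane, Eq. (11)
    return sil, sil2, sip / np.linalg.norm(sip)

# Hypothetical deformation that leaves one principal direction unstretched.
print(invariant_plane(np.diag([0.97, 1.00, 1.06])))
```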

Calculate Miller indices and interface basis vectors

The Miller indices of lattice-matched interfaces are calculated by:

$$ \left(\begin{array}{l} h \\ k \\ l \end{array}\right)=\mathbf{T}^{\mathrm{T}} \cdot \frac{\mathbf{sip}}{d_{hkl}} $$

where TT is the transpose of the superlattice parameters T; (hkl) are the Miller indices; dhkl is the d-spacing. However, this approach easily produces high Miller indices. Instead, we adopt the following approach to obtain low Miller indices: (i) enumerate a series of low Miller indices {(h’k’l’)} by setting h’, k’, l’ ≤ 5; (ii) calculate the normal vectors n for the trial Miller indices using Eq. (13); (iii) determine the optimal low Miller indices as the one whose n makes the smallest angle with sip.

$$ \frac{\mathbf{n}}{d_{h'k'l'}}=\left(\mathbf{T}^{\mathrm{T}}\right)^{-1} \cdot\left(\begin{array}{l} h' \\ k' \\ l' \end{array}\right) $$

The calculated Miller indices for the superlattice M are the same as that of T. With the Miller indices, we can finally produce the lattice-matched basis vectors using the Niggli reduction[65-68].
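The low-Miller-index assignment described above can be sketched as a brute-force search over trial indices using Eq. (13). The superlattice and the invariant-plane normal in the usage example are placeholders; a full implementation would also reduce the resulting (hkl) by its greatest common divisor.

```python
import itertools
import numpy as np

def low_miller_index(T, sip, max_index=5):
    """Find the low Miller indices (h'k'l') whose plane normal n from Eq. (13),
    n / d = (T^T)^-1 (h', k', l')^T, makes the smallest angle with sip."""
    inv_TT = np.linalg.inv(np.asarray(T, float).T)        # (T^T)^-1
    sip = np.asarray(sip, float)
    sip = sip / np.linalg.norm(sip)
    best, best_cos = None, -1.0
    for hkl in itertools.product(range(-max_index, max_index + 1), repeat=3):
        if hkl == (0, 0, 0):
            continue
        n = inv_TT @ np.array(hkl, float)
        cos = abs(np.dot(n, sip)) / np.linalg.norm(n)     # |cos| treats (hkl) and (-h-k-l) as equivalent
        if cos > best_cos:
            best_cos, best = cos, hkl
    return best, np.degrees(np.arccos(min(best_cos, 1.0)))

# Placeholder inputs: a cubic superlattice and an invariant-plane normal from the previous step.
print(low_miller_index(T=5.43 * np.eye(3), sip=[0.57, 0.0, -0.82]))
```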

The above strategy for seeking lattice-matched interface orientations, as implemented in the ML-interface method, differs from the earlier Zur-McGill approach. Notably, the ML-interface method constrains the range of the matrix elements in the transformation matrices, whereas the Zur-McGill approach directly constrains the range of Miller indices. In most systems, both methods yield the same results. However, interfaces with high Miller indices that fall outside the range defined by the Zur-McGill approach may be missed. The ML-interface method, on the other hand, does not impose direct restrictions on the Miller indices and can therefore resolve this issue.

Generate the interface structure

A graph-based approach is used to generate interface structures, as shown in Figure 5A. First, the supercell is reshaped so that edges a and b lie along the lattice-matched basis vectors, while edge c is chosen as the shortest vector completing the periodicity. In this reshaped supercell, the (001) planes of both phases build the lattice-matched interface. Then, we recognize the bonds in the crystals by combining the criteria of the covalent radii of elements[69] and the Voronoi partition[70,71]. The covalent radii are used to check whether two atoms bond together, and the Voronoi partition is adopted to delete geometrically unreasonable bonds.


Figure 5. Flowchart for generating the atomic structures of interfaces. The red lines in (A) represent the lattice-matched interface determined by PTMC; the purple circles in (B) are the source and sink nodes; the red circles in (B) are the terminal nodes; the blue dashed line in (D) is the interface. Reproduced with permission[4]. Copyright 2022, American Physical Society. PTMC: Phenomenological theory of Martensitic crystallography.

Second, we convert the crystal to a special directed graph, the so-called “network flow”. Specifically, we remove the bonds that cut through the (001) plane and convert the remaining structure to a graph, where the atoms correspond to the nodes and the bonds correspond to the edges. We then add a new edge from each terminal node (see red circles in Figure 5B) to a virtual source or sink node (see purple circles in Figure 5B). Then, we set the capacity to infinity for the edges connected to the source or sink nodes and the capacity to 1 for all other edges.

Third, we use the max-flow min-cut theorem to generate two stable slab models from bulk crystals by minimizing the edges that need to be cut to separate a network flow in half[72,73] [Figure 5C]. The orientations of the c-axis for two slabs are perpendicular to the a-b plane.

Finally, we glue the two slab models in a single unit cell [Figure 5D]. At this point, the interface already fulfills the lattice-matching requirements. In this configuration, only the translational degrees of freedom of the model need to be considered. To improve the quality of the interface, a series of rigid translations are performed on the top phase to optimize the alignment between the two phases by maximizing the number of bonds at one interface. The generation procedure of the interface structure stops here if the type I model is used. If the type II model is used, the orientation of the c-axis needs to be further modified to optimize the alignment between the two phases at the other interface.
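The min-cut step maps directly onto standard graph libraries. The toy sketch below uses networkx to cut a hand-made “network flow” for a single phase; the atom labels, bonds, and terminal assignments are placeholders, whereas real code would build the graph from the crystal bonds and periodic images as described above.

```python
import networkx as nx

# Toy network flow for one phase: atoms are nodes, bonds are unit-capacity edges.
G = nx.DiGraph()
bonds = [("a1", "a2"), ("a2", "a3"), ("a1", "a4"), ("a4", "a3"), ("a3", "a5")]
for u, v in bonds:                 # each bond gets capacity 1 in both directions
    G.add_edge(u, v, capacity=1)
    G.add_edge(v, u, capacity=1)
G.add_edge("source", "a1")         # edges without a capacity attribute are treated
G.add_edge("a5", "sink")           # as infinite-capacity edges by networkx

# Max-flow min-cut: the minimum number of bonds to break to separate the crystal in two.
cut_value, (bottom, top) = nx.minimum_cut(G, "source", "sink")
print(f"bonds to cut: {int(cut_value)}")
print("bottom slab atoms:", sorted(bottom - {"source"}))
print("top slab atoms:", sorted(top - {"sink"}))
```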

Variable-composition global optimization

To identify the interfacial structure, a variable-composition grand canonical Monte Carlo (GCMC)/SSW-neural network (SSW-NN) global optimization is performed. The GCMC/SSW-NN method combines the SSW method, GCMC, and an NN potential energy surface (PES). SSW is a global optimization method for molecules, clusters, and periodic crystals[4,12,39,74]. This method utilizes an automated climbing mechanism that smoothly manipulates the structural configuration from a local minimum to a high-energy configuration along a random mode direction [Figure 6A]. Each step of the SSW method consists of three distinct components, namely hill climbing along the PES, local optimization, and Metropolis Monte Carlo selection to determine whether to accept or reject the configuration. The key of the SSW method for global PES search is the modification of the PES, Vmod, by continuously adding Gaussian-type bias potentials vn along the softening mode direction Ntn, which helps to overcome the barriers between minima, as shown in Eq. (14):

$$ V_{\mathrm{mod}}=V_{\mathrm{real}}+\sum_{n=1}^{N_{G}} v_{n}=V_{\mathrm{real}}+\sum_{n=1}^{N_{G}} w_{n} \exp \left\{-\left[\left(\mathbf{R}_{t}-\mathbf{R}_{t}^{n}\right) \cdot \mathbf{N}_{t}^{n}\right]^{2} /\left(2 ds^{2}\right)\right\} $$


Figure 6. (A) Scheme of the SSW-NN method; (B) Scheme for the Behler-Parrinello NN architecture. Reproduced with permission[43]. Copyright 2017, Royal Society of Chemistry; (C) Procedure for generating the training dataset using SSW global optimization. NN: Neural network; SSW: stochastic surface walking.

where the superscript n is the index of the sequentially added bias potential; Rt is the coordinate vector of the current structure; Vreal is the unmodified PES; $$ R_{t}^{n} $$ is the coordinate vector of the reference structure when vn is added; wn and ds are the height and width of the Gaussian bias, respectively.
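A minimal numerical sketch of Eq. (14) is given below: each bias is a one-dimensional Gaussian in the projection of the displacement onto its mode direction. The “real” PES, the Gaussian height w_n, and the width ds are toy placeholders, not SSW defaults.

```python
import numpy as np

def v_modified(R, v_real, bias_list, ds=0.2):
    """Modified PES of Eq. (14): V_mod = V_real(R) + sum_n w_n exp(-[(R - R_n) . N_n]^2 / (2 ds^2)).
    bias_list holds tuples (w_n, R_n, N_n) with R_n the reference structure and N_n the
    (unit) mode direction at which the n-th Gaussian was added."""
    v = v_real(R)
    for w_n, R_n, N_n in bias_list:
        proj = np.dot((R - R_n).ravel(), np.asarray(N_n).ravel())
        v += w_n * np.exp(-proj**2 / (2.0 * ds**2))
    return v

# Toy 1D double-well "real" PES with one bias Gaussian added at the left minimum.
v_real = lambda R: float((R[0]**2 - 1.0)**2)
bias = [(0.5, np.array([-1.0]), np.array([1.0]))]       # (w_n, R_n, N_n), placeholder values
for x in (-1.0, -0.5, 0.0, 1.0):
    print(x, round(v_modified(np.array([x]), v_real, bias), 3))
```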

The composition at the interface may differ from that of the bulk phases. To reveal the composition at the interface, SSW is combined with GCMC[4,75]. The GCMC involves randomly adding or removing specific atoms at the interface to accommodate the variable number of atoms. During a GCMC move, atoms in the simulation supercell are exchanged with the environment, which is constrained by the chemical potential of the exchangeable species. The free energy of the surface system is defined as G(N1, N2, … Nk), where G is the Gibbs free energy of the system, and k represents the index of species. The chemical potential of species i (μi) is determined by Eq. (15):

$$ \mu_{i}=\frac{\partial G}{\partial N_{i}} $$

In the grand canonical ensemble, equilibrium is reached between the chemical potential of each species (μi) and that of the environment (μi,ext), where the average difference {μi - μi,ext} approaches zero. The Metropolis Monte Carlo algorithm is employed to minimize this difference in chemical potential. The algorithm selects the state based on the associated probability (P), ensuring the system achieves the most probable state with respect to the chemical potential.

$$ P=\left\{\begin{array}{cl} 1 & \text { if } \mu_{i} \leq \mu_{i, \text { ext }} \\ \exp \left(-\frac{\mu_{i}-\mu_{i, e x t}}{k T}\right) & \text { if } \mu_{i}>\mu_{i, \text { ext }} \end{array}\right. $$
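The acceptance rule of Eq. (16) is a standard Metropolis criterion on the chemical-potential difference. A minimal sketch, with kT and the chemical potentials as placeholder values:

```python
import numpy as np

rng = np.random.default_rng(1)

def accept_gcmc_move(mu_i, mu_i_ext, kT=0.0257):
    """Metropolis acceptance of Eq. (16) for exchanging species i with the environment:
    always accept if mu_i <= mu_i_ext, otherwise accept with probability
    exp(-(mu_i - mu_i_ext)/kT). kT defaults to ~room temperature in eV."""
    if mu_i <= mu_i_ext:
        return True
    return rng.random() < np.exp(-(mu_i - mu_i_ext) / kT)

# Placeholder example: a move that raises the species chemical potential above the reservoir value.
print(accept_gcmc_move(mu_i=-4.98, mu_i_ext=-5.00))
```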

Finally, to speed up the GCMC/SSW global optimization, an NN PES[43,76] is constructed. The overall computational cost for the NN PES is roughly 10^3 to 10^4 times lower than that of DFT[43,76]. The details of building the NN PES are given in the next section, “Neural Network Potential Energy Surface”, and also in the previous literature[31,32,43,75-81].

NEURAL NETWORK POTENTIAL ENERGY SURFACE

NN PES represents the atomic interaction energy as a function of locally constructed structure descriptors by parameterizing it into an NN. Various ML potentials, such as NN potentials [e.g., the Behler-Parrinello NN[82,83], SchNet[84], and the crystal graph convolutional NN (CGCNN)[85]] and the Gaussian approximation potential (GAP)[86], have been formulated and rigorously tested for atomistic system modeling. Among them, the Behler-Parrinello NN potential[82,83] using atom-centered symmetry function descriptors may be the most popular one. So, here we focus on the Behler-Parrinello NN potential, while other ML potentials can be found in the literature[38,87,88].

Behler-Parrinello NN potential[82,83] involves a high-dimensional NN potential (depicted in Figure 6B) that decomposes the total energy into the sum of individual atomic energy, as shown in Eq. (17).

$$ E=\sum_{i} E_{i} $$

where Ei for each atom is the output of a standard feed-forward NN. The atomic bonding environment is described using a set of structural descriptors, which serve as the input for the high-dimensional NN potential. The NN parameters are trained against a dataset computed from first-principles calculations. The SSW trajectory is smooth, allowing the structures along it to be used for constructing the NN PES[31,32,43,44]. The input layer of the NN employs power-type structure descriptors (PTSD), which can well describe the geometrical environment of atoms. There are six types of PTSDs, namely S1-S6:

$$ f_{c}\left(r_{i j}\right)=\left\{\begin{array}{c} 0.5 \times \tanh ^{3}\left[1-\frac{r_{i j}}{r_{c}}\right], \text { for } r_{i j} \leq r_{c} \\ 0, \text { for } r_{i j}>r_{c} \end{array}\right. $$

$$ R^{n}\left(r_{i j}\right)=r_{i j}^{n} \cdot \mathrm{f}_{c}\left(r_{i j}\right) $$

$$ S_{i}^{1}=\sum_{j \neq i} R^{n}\left(r_{i j}\right) $$

$$ S_{i}^{2}=\left[\sum_{m=-L}^{L}\left|\sum G U_{2}\right|^{2}\right]^{\frac{1}{2}}=\left[\sum_{m=-L}^{L}\left|\sum_{j \neq i} R^{n}\left(r_{i j}\right) Y_{L m}\left(r_{i j}\right)\right|^{2}\right]^{\frac{1}{2}} $$

$$ S_{i}^{3}=2^{1-\zeta} \sum G U_{3}=2^{1-\zeta} \sum_{j, k \neq i}\left(1+\lambda \cos \theta_{i j k}\right)^{\zeta} \cdot R^{n}\left(r_{i j}\right) \cdot R^{m}\left(r_{i k}\right) $$

$$ S_{i}^{4}=2^{1-\zeta} \sum G U_{4}=2^{1-\zeta} \sum_{j, k \neq i}\left(1+\lambda \cos \theta_{i j k}\right)^{\zeta} \cdot R^{n}\left(r_{i j}\right) \cdot R^{m}\left(r_{i k}\right) \cdot R^{p}\left(r_{i k}\right) $$

$$ S_{i}^{5}=\left[\sum_{m=-L}^{L}\left|\sum G U_{5}\right|^{2}\right]^{\frac{1}{2}}=\left[\sum_{m=-L}^{L}\left|\sum_{j, k \neq i} R^{n}\left(r_{i j}\right) \cdot R^{m}\left(r_{i k}\right) \cdot R^{p}\left(r_{i k}\right) \cdot\left(Y_{L m}\left(r_{i j}\right)+Y_{L m}\left(r_{i k}\right)\right)\right|^{2}\right]^{\frac{1}{2}} $$

$$ S_{i}^{6}=2^{1-\zeta} \sum G U_{6}=2^{1-\zeta} \sum_{j, k, l \neq i}\left(1+\lambda \cos \theta_{i j k}\right)^{\zeta} \cdot R^{n}\left(r_{i j}\right) \cdot R^{m}\left(r_{i k}\right) \cdot R^{p}\left(r_{i l}\right) $$

Each PTSD can be considered as a sum of n-body functions, named group units (GU). The power function Rn(rij) represents the radial function in the GU. In the equations, rij is the distance between an atomic pair; rc is the cutoff, beyond which the value of Eq. (18) is equal to zero; YLm(rik) is the spherical harmonic function; and n, m, p, λ, and ζ are power parameters. Depending on the functional form of the GU, the PTSDs can be considered as two-body (Si1, Si2), three-body (Si3, Si4, Si5), and four-body (Si6) functions, where Si2 and Si5 also involve spherical harmonic functions.
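As a small illustration, the cutoff function of Eq. (18) and the simplest two-body descriptor S1 of Eq. (20) can be evaluated directly from a list of interatomic distances; the cutoff radius, the power n, and the distances below are placeholder values, not fitted PTSD parameters.

```python
import numpy as np

def f_cut(r, r_c):
    """Cutoff function of Eq. (18): 0.5 * tanh^3(1 - r/r_c) for r <= r_c, zero beyond r_c."""
    r = np.asarray(r, float)
    return np.where(r <= r_c, 0.5 * np.tanh(1.0 - r / r_c) ** 3, 0.0)

def s1(distances, n=2, r_c=5.0):
    """Two-body PTSD S_i^1 of Eq. (20): sum over neighbors j of r_ij^n * f_c(r_ij)."""
    r = np.asarray(distances, float)
    return float(np.sum(r**n * f_cut(r, r_c)))

# Placeholder neighbor distances (angstrom) around a central atom.
print(s1([2.35, 2.35, 2.36, 3.84], n=2, r_c=5.0))
```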

The training of NN PES involves the following six steps [Figure 6C]:
(a) Generating the global dataset by computing the selected structures from the SSW trajectories using DFT.
(b) Pretraining the NN PES with the global dataset.
(c) Benchmarking the accuracy between the current NN PES and DFT PES for a few randomly selected structures from a trial SSW calculation. The structures with large errors are added to the dataset of NN PES.
(d) Iteratively performing steps (a)-(c) until the deviation of the NN PES is low enough. The RMSE of the NN potential is typically 5~10 meV/atom for energy and 0.1~0.2 eV/Å for force.
(e) Performing the SSW global optimization on the NN PES for the target problem.
(f) Recomputing the energy of key structures with DFT calculations.

The NN PESs trained using the above approach are available on the LASP[44,45] (Large-scale Atomic Simulation with Neural Network Potential) website: www.lasphub.com.

APPLICATIONS OF ML-INTERFACE METHOD

Field-effect transistor

The FET is a core component of semiconductor chips, and its size determines the density of chips. The ultimate physical size of FET transistors largely depends on the structure and performance of the Si/SiO2 interface at the gate. Currently, the gate length of FET has reached a few nanometers and is approaching the physical limits of silicon materials. The leakage current caused by quantum tunneling has become a key technological challenge. Therefore, it is crucial to fully consider the influence of quantum effects in the design of modern transistors, which requires a high level of understanding of the Si/SiO2 interface structure and quantum tunneling performance.

We first train a Si-Hf-O-H NN PES using the method described in Section “NEURAL NETWORK POTENTIAL ENERGY SURFACE”. A large set of PTSDs was adopted for each element, including 218 two-body, 200 three-body, and 57 four-body descriptors. The RMSEs for energy, force, and stress are 3.992 meV/atom, 0.139 eV/Å, and 1.601 GPa, respectively.

Then, using the ML-interface method, we can reveal the atomic structures of Si/SiO2 interfaces. Ten Si/SiO2 interfaces with short periodicity were identified[4] [Figure 7]. Among the ten interfaces, the Si(100) interface has the lowest interfacial energy of 0.93 J/m2, and the high-Miller-index Si(331) interface has the highest interfacial energy of 1.40 J/m2. Notably, our identified Si/SiO2 interfaces are more stable than interfacial structures proposed previously, including crystalline and amorphous ones [Table 1]. By examining the interface structures, we find that no interfacial dangling bonds are present on the Si(100), (111), (511), and (531) faces, whereas the Si(110), (559), (210), (211), (311), and (331) interfaces do have a number of dangling bonds. This result is consistent with the interface state densities measured experimentally before and after H2 passivation. To be specific, Ogata et al. reported that the interface state densities on Si(100) and (111) show no change after H2 passivation, indicating that no dangling bonds are present on these two interfaces[91]. In contrast, the interface state densities on Si(110), (210), (311), and (331) remarkably decrease by up to one order of magnitude after the same treatment[91]. These experimental observations confirm the reliability of the ML-interface method.


Figure 7. Atomic structures of ten Si/SiO2 interfaces with an interfacial area of less than 1 nm2 in periodicity. The dashed circles in the figure highlight the unsaturated Si atoms with dangling bonds. Colors in the figure: yellow balls are Si; red balls are O. This figure is quoted with permission from Li et al.[4].

Table 1

Interface energies of Si/SiO2 in our work and previous works

Our work: Orientation | γ (J/m2) | Previous work: Orientation | γ (J/m2)

Low-index Si/SiO2 interfaces
(100)Si//(111)α-crist | 0.93 | Si(100)/α-quartz | 0.99a
(111)Si//(111)α-crist | 1.13 | Si(100)/α-SiO2d | 1.15b
(110)Si//(100)α-quartz | 1.27 | Si(110)/α-quartz | 1.83c

High-index Si/SiO2 interfaces
(531)Si//(010)β-tridymite | 1.01 | α-SiO2/Si(331) | 1.50b
(210)Si//(102)β-tridymite | 1.16 | α-SiO2/Si(310) | 1.66b
(211)Si//($$ 1 \overline{1} 2 $$)α-crist | 1.17 | α-SiO2/Si(410) | 1.72b
(511)Si//($$ 21 \overline{1} $$)α-quartz | 1.20 | - | -
(311)Si//($$ 1 \overline{2} 0 $$)β-tridymite | 1.23 | - | -
(559)Si//(121)α-quartz | 1.27 | - | -
(331)Si//(010)α-crist | 1.40 | - | -

For the carrier effective mass, the low-index Si facet Si(100) has the lowest electron effective mass (me* = 0.23me), while Si(110) has the lowest hole effective mass (mh* = 0.32me). Both agree with the established knowledge from the experiment[92]. In fact, both interfaces have been used in the semiconductor industry. Besides, two new high Miller index interfaces, i.e., Si(210)//SiO2 and Si(211)//SiO2, were proposed for the first time. These high-index interfaces exhibit perfect atomic matching, excellent thermal stability, and superior electronic properties. Theoretical calculations suggest a significant reduction in carrier quantum tunneling by four orders of magnitude compared to conventional Si(100)//SiO2 and Si(110)//SiO2 interfaces. Additionally, the interface size can be reduced to as small as 1 nanometer, which holds promise for achieving shorter gate lengths. Furthermore, the predicted interface models in the paper can be used to generate a series of crucial physical parameters, such as effective mass, conductivity, and thermal conductivity. These parameters can accelerate the macroscopic device simulation (TCAD) for chip design. Therefore, the theoretical prediction indicates that the new high Miller index Si(210)//SiO2 and Si(211)//SiO2 interfaces could be the key to overcoming the performance bottleneck of Si-based semiconductors.

Photocatalysis and photovoltaics

In photocatalysis and photovoltaics, semiconductor interfaces play a crucial role in carrier separation, photocatalytic activity, and photovoltaic conversion efficiency. The ML-interface method can effectively address these issues. For example, Li conducted a study on the interface structure between ZnO and MAPbI3. Figure 8A illustrates the atomic structures of four interfaces with relatively low interfacial energies. By analyzing the interface energy, Li predicted the equilibrium morphology of ZnO nanoparticles on the MAPbI3 substrate[93]. The results revealed that under equilibrium conditions, the ZnO nanoparticles exhibit loose assembly over the MAPbI3 substrate. This finding has implications for the design of photovoltaic solar cells using MAPbI3 and ZnO, suggesting that the issue of loose assembly needs to be addressed.


Figure 8. (A) Atomic structures of four ZnO/MAPbI3 interfaces. Colors in (A): red balls are O; silver balls are Zn; cyan balls are N; brown balls are C; purple balls are H; white balls are I, and gray balls are Pb. Reproduced with permission[93]. Copyright 2019, American Chemical Society; (B) Structures for three TiO2/GaP heterojunctions that can achieve the atomic-matched interface. Colors in (B): grey balls are Ti; red balls are O; brown balls are Ga; purple balls are P. The green dashed lines represent the interfacial planes. Reproduced with permission[12]. Copyright 2019, American Chemical Society.

As another example, Li et al. screened out three highly matched GaP/TiO2 interfaces with a strain of less than 7%, i.e., $$ (001)_{\mathrm{TiO}_{2}} $$//(100)GaP, $$ (101)_{\mathrm{TiO}_{2}} $$//(110)GaP, and $$ (112)_{\mathrm{TiO}_{2}} $$//(100)GaP [Figure 8B][12]. The most stable interface is the $$ (001)_{\mathrm{TiO}_{2}} $$//(100)GaP interface, with lattice parameters of 3.78 × 3.78 Å for TiO2(001) and 3.86 × 3.86 Å for GaP(100), resulting in a strain of 2%. The interface structure shows that each interfacial Ti ion forms bonds with two P ions from the GaP phase, while each interfacial Ga ion forms bonds with three O ions from the TiO2 phase. The CN of the interfacial Ti and Ga ions is five, which differs from their respective bulk counterparts: in bulk TiO2, Ti ions have a CN of six, while in bulk GaP, Ga ions have a CN of four. Despite this difference, the structural parameters, such as the bond lengths and angles of the interfacial Ti and Ga ions, closely resemble those in the bulk, facilitating a perfectly matched interface between the TiO2 and GaP phases. The second most stable interface is the $$ (101)_{\mathrm{TiO}_{2}} $$//(110)GaP interface, with an interfacial energy of 0.76 J/m2. The lattice parameters for TiO2(101) and GaP(110) are 3.78 × 10.21 Å and 3.85 × 10.90 Å, resulting in a strain of 7%. In this interface, one Ti ion forms a bond with one P ion from the GaP matrix, while one Ga ion forms a bond with one O ion from the TiO2 phase. The third most stable interface is the $$ (112)_{\mathrm{TiO}_{2}} $$//(100)GaP interface, with an interfacial energy of 0.83 J/m2. The lattice parameters for TiO2 and GaP are 5.44 × 5.34 Å and 5.45 × 5.45 Å, respectively, resulting in a strain of 2%. The local view reveals that each interfacial Ti ion bonds with two P ions from the GaP phase, while each interfacial Ga ion bonds with two anions from the TiO2 phase.

Furthermore, using the transfer-matrix method (an approach to estimate the probability of carrier tunneling), we demonstrate that a photoelectron generated from GaP bulk can readily tunnel through the thin TiO2 layer (0.5-3 nm) without significant decay, which can decrease the CO2 reduction barrier significantly by 1.02 eV as compared to that without electron tunneling. Our results rationalize the recent finding of the high CO2 reduction ability of TiO2-coated GaP at a critical thickness (< 10 nm)[12].
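For completeness, the transfer-matrix idea mentioned above can be illustrated for the simplest case of a single rectangular barrier; this sketch is a generic one-dimensional free-electron model with placeholder energies and is not parameterized for the TiO2/GaP interface.

```python
import numpy as np

HBAR2_OVER_2M = 3.81  # eV * angstrom^2 for the free-electron mass

def interface_matrix(x, k1, k2):
    """Transfer matrix relating the (A, B) plane-wave amplitudes on the left of an
    interface at position x to those on the right, for (possibly complex) wavevectors."""
    r = k2 / k1
    return np.array([
        [(1 + r) / 2 * np.exp(1j * (k2 - k1) * x), (1 - r) / 2 * np.exp(-1j * (k2 + k1) * x)],
        [(1 - r) / 2 * np.exp(1j * (k2 + k1) * x), (1 + r) / 2 * np.exp(-1j * (k2 - k1) * x)],
    ])

def transmission(E, barrier_height, thickness):
    """Tunneling probability of an electron of energy E (eV) through a rectangular
    barrier (height in eV, thickness in angstrom), via the transfer-matrix method."""
    k_out = np.sqrt(E / HBAR2_OVER_2M + 0j)                    # wavevector outside the barrier
    k_in = np.sqrt((E - barrier_height) / HBAR2_OVER_2M + 0j)  # imaginary inside if E < barrier
    M = interface_matrix(0.0, k_out, k_in) @ interface_matrix(thickness, k_in, k_out)
    return float(abs(1.0 / M[0, 0]) ** 2)

# Placeholder numbers: a 1 eV electron and a 3 eV barrier of increasing thickness.
for d in (5.0, 10.0, 20.0, 30.0):
    print(f"d = {d:4.1f} A  ->  T = {transmission(1.0, 3.0, d):.2e}")
```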

CONCLUSION AND OUTLOOK

This review outlines recent advances in interface structure prediction methods, which shed light on resolving long-standing challenges in materials science. Most interface structure prediction methods share a common idea: first, find the lattice-matched OR, and second, perform global optimization to relax the interface atoms. In a specific study, the choice of method can be made flexibly according to the characteristics of the interface system. Due to the complexity and diversity of interface structure prediction, most methods have not been integrated into common codes, such as VASP[94], CP2K[95], and CRYSTAL[96].

Although some progress has been made, there are some common problems in this field. First, during the search for lattice-matched ORs, a large number of lattice correspondences need to be enumerated. The current approach generates many lattice correspondences that yield duplicate deformation gradient matrices and thereby lead to the same ORs. In the future, a more efficient sampling method for lattice correspondences is required. Second, many interface prediction methods require global optimization to determine the most stable arrangement of the interface atoms; this, however, is computationally expensive, especially using first-principles calculations. We expect ML potentials to be widely used in future studies to notably reduce the computational cost.

Notably, the training of ML potentials is difficult and requires substantial computational cost. Furthermore, the complexity of the PESs increases exponentially with the number of elements, as ML potentials are element-dependent. In the current implementation, a single ML potential contains at most seven elements, and no global potential covering all the elements in the periodic table is available. Consequently, many ML potentials containing different subsets of elements have to be trained, which is inconvenient in practical applications. A few ML techniques, such as pre-trained ML models[97] or coarse-k-grid DFT calculations to generate training sets, emerge as possible approaches to resolve the challenges in current ML potentials[98]. Next, it may be useful to build a generative ML model using the available interface structure data from the interface structure prediction methods. Overall, the coming years might witness a boom in theoretical simulations of solid-solid interface structures due to these advances in methodology.

DECLARATIONS

Author’s contributions

Manuscript draft, conception, and design of this review: Li JL, Li YF

Collected references and revised the manuscript: Li JL, Li YF

Provided supervision and secured funding: Li YF

Availability of data and materials

Not applicable.

Financial support and sponsorship

This work was supported by the National Natural Science Foundation of China (21972023).

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2023.

REFERENCES

1. Hilgenkamp H, Mannhart J. Grain boundaries in high-Tc superconductors. Rev Mod Phys 2002;74:485-549.

2. Dillon SJ, Tang M, Carter WC, Harmer MP. Complexion: a new concept for kinetic engineering in materials science. Acta Mater 2007;55:6208-18.

3. Robertson J. High dielectric constant gate oxides for metal oxide Si transistors. Rep Prog Phys 2006;69:327-96.

4. Li YF, Liu ZP. Smallest stable Si/SiO2 interface that suppresses quantum tunneling from machine-learning-based global search. Phys Rev Lett 2022;128:226102.

5. Franklin AD. Nanomaterials in transistors: from high-performance to thin-film applications. Science 2015;349:aab2750.

6. Buczko R, Pennycook SJ, Pantelides ST. Bonding arrangements at the Si-SiO2 and SiC-SiO2 interfaces and a possible origin of their contrasting properties. Phys Rev Lett 2000;84:943-6.

7. Tu Y, Tersoff J. Structure and energetics of the Si-SiO2 interface. Phys Rev Lett 2000;84:4393-6.

8. Tang H, Deng Z, Lin Z, et al. Probing solid-solid interfacial reactions in all-solid-state sodium-ion batteries with first-principles calculations. Chem Mater 2018;30:163-73.

9. Han X, Gong Y, Fu KK, et al. Negating interfacial impedance in garnet-based solid-state Li metal batteries. Nat Mater 2017;16:572-9.

10. Ohta N, Takada K, Zhang L, Ma R, Osada M, Sasaki T. Enhancement of the high-rate capability of solid-state lithium batteries by nanoscale interfacial modification. Adv Mater 2006;18:2226-9.

11. Takada K, Ohta N, Zhang L, et al. Interfacial modification for high-power solid-state lithium batteries. Solid State Ion 2008;179:1333-7.

12. Li LF, Li YF, Liu ZP. CO2 photoreduction via quantum tunneling: thin TiO2-coated GaP with coherent interface to achieve electron tunneling. ACS Catal 2019;9:5668-78.

13. Chua AL, Benedek NA, Chen L, Finnis MW, Sutton AP. A genetic algorithm for predicting the structures of interfaces in multicomponent systems. Nat Mater 2010;9:418-22.

14. Zhang Z, Sigle W, Phillipp F, Rühle M. Direct atom-resolved imaging of oxides and their grain boundaries. Science 2003;302:846-9.

15. Sun R, Wang Z, Saito M, Shibata N, Ikuhara Y. Atomistic mechanisms of nonstoichiometry-induced twin boundary structural transformation in titanium dioxide. Nat Commun 2015;6:7120.

16. von Alfthan S, Haynes PD, Kaski K, Sutton AP. Are the structures of twist grain boundaries in silicon ordered at 0 K? Phys Rev Lett 2006;96:055505.

17. Zhang J, Wang CZ, Ho KM. Finding the low-energy structures of Si[001] symmetric tilted grain boundaries with a genetic algorithm. Phys Rev B 2009;80:174102.

18. Benedek NA, Chua ALS, Elsässer C, Sutton AP, Finnis MW. Interatomic potentials for strontium titanate: an assessment of their transferability and comparison with density functional theory. Phys Rev B 2008;78:064110.

19. Peacock PW, Xiong K, Tse K, Robertson J. Bonding and interface states of Si:HfO2 and Si:ZrO2 interfaces. Phys Rev B 2006;73:075328.

20. Woicik JC, Shirley EL, Hellberg CS, et al. Ferroelectric distortion in SrTiO3 thin films on Si(001) by x-ray absorption fine structure spectroscopy: Experiment and first-principles calculations. Phys Rev B 2007;75:140103.

21. Barcaro G, Sementa L, Negreiros FR, Thomas IO, Vajda S, Fortunelli A. Atomistic and electronic structure methods for nanostructured oxide interfaces. In: Netzer FP, Fortunelli A, editors. Oxide materials at the two-dimensional limit. Cham: Springer International Publishing; 2016. p. 39-90.

22. Wales DJ, Doye JPK. Global optimization by basin-hopping and the lowest energy structures of lennard-jones clusters containing up to 110 atoms. J Phys Chem A 1997;101:5111-6.

23. Zhao X, Shu Q, Nguyen MC, et al. Interface structure prediction from first-principles. J Phys Chem C 2014;118:9524-30.

24. Dekkers A, Aarts E. Global optimization and simulated annealing. Math Program 1991;50:367-93.

25. Romeijn HE, Smith RL. Simulated annealing for constrained global optimization. J Glob Optim 1994;5:101-26.

26. Wang Y, Lv J, Zhu L, Ma Y. CALYPSO: a method for crystal structure prediction. Comput Phys Commun 2012;183:2063-70.

27. Wang H, Wang Y, Lv J, Li Q, Zhang L, Ma Y. CALYPSO structure prediction method and its wide application. Comput Mater Sci 2016;112:406-15.

28. Li ZL, Li ZM, Cao HY, et al. What are grain boundary structures in graphene? Nanoscale 2014;6:4309-15.

29. Schusteritsch G, Pickard CJ. Predicting interface structures: from SrTiO3 to graphene. Phys Rev B 2014;90:035424.

30. Zhu Q, Samanta A, Li B, Rudd RE, Frolov T. Predicting phase behavior of grain boundaries with evolutionary search and machine learning. Nat Commun 2018;9:467.

31. Shang C, Liu ZP. Stochastic surface walking method for structure prediction and pathway searching. J Chem Theory Comput 2013;9:1838-45.

32. Shang C, Zhang XJ, Liu ZP. Stochastic surface walking method for crystal structure and phase transition pathway prediction. Phys Chem Chem Phys 2014;16:17845-56.

33. Ma S, Liu ZP. Machine learning for atomic simulation and activity prediction in heterogeneous catalysis: current status and future. ACS Catal 2020;10:13213-26.

34. Zur A, Mcgill TC. Lattice match: an application to heteroepitaxy. J Appl Phys 1984;55:378-86.

35. Gao B, Gao P, Lu S, Lv J, Wang Y, Ma Y. Interface structure prediction via CALYPSO method. Sci Bull 2019;64:301-9.

36. Wang Y, Lv J, Zhu L, et al. Materials discovery via CALYPSO methodology. J Phys Condens Matter 2015;27:203203.

37. Shi Y, Song B, Shahbazian-Yassar R, Zhao J, Saidi WA. Experimentally validated structures of supported metal nanoclusters on MoS2. J Phys Chem Lett 2018;9:2972-8.

38. Tong Q, Xue L, Lv J, Wang Y, Ma Y. Accelerating CALYPSO structure prediction by data-driven learning of a potential energy surface. Faraday Discuss 2018;211:31-43.

39. Zhao WN, Zhu SC, Li YF, Liu ZP. Three-phase junction for modulating electron-hole migration in anatase-rutile photocatalysts. Chem Sci 2015;6:3483-94.

40. Zhu ZY, Li YF, Shang C, Liu ZP. Thermodynamics and catalytic activity of ruthenium oxides grown on ruthenium metal from a machine learning atomic simulation. J Phys Chem C 2021;125:17088-96.

41. Hu YF, Li YF, Liu ZP. Structural origin for efficient photoelectrochemical water splitting over Fe-modified BiVO4. ACS Catal 2023;13:10167-76.

42. Hao Y, Kang Y, Wang S, et al. Electrode/electrolyte synergy for concerted promotion of electron and proton transfers toward efficient neutral water oxidation. Angew Chem Int Ed 2023;62:e202303200.

43. Huang SD, Shang C, Zhang XJ, Liu ZP. Material discovery by combining stochastic surface walking global optimization with a neural network. Chem Sci 2017;8:6327-37.

44. Huang SD, Shang C, Kang PL, Zhang XJ, Liu ZP. LASP: Fast global potential energy surface exploration. WIREs Comput Mol Sci 2019;9:e1415.

45. Kang P, Shang C, Liu Z. Recent implementations in LASP 3.0: Global neural network potential with multiple elements and better long-range description. Chin J Chem Phys 2021;34:583-90.

46. Bilodeau C, Jin W, Jaakkola T, Barzilay R, Jensen KF. Generative models for molecular discovery: recent advances and challenges. WIREs Comput Mol Sci 2022;12:e1608.

47. Chen L, Zhang W, Nie Z, Li S, Pan F. Generative models for inverse design of inorganic solid materials. J Mater Inf 2021;1:4.

48. Noh J, Gu GH, Kim S, Jung Y. Machine-enabled inverse design of inorganic solid materials: promises and challenges. Chem Sci 2020;11:4871-81.

49. Kim S, Noh J, Gu GH, Aspuru-Guzik A, Jung Y. Generative adversarial networks for crystal structure prediction. ACS Cent Sci 2020;6:1412-20.

50. Nouira A, Sokolovska N, Crivello JC. CrystalGAN: learning to discover crystallographic structures with generative adversarial networks. arXiv. [Preprint.] May 25, 2019 [accessed 2023 October 14]. Available from: https://arxiv.org/abs/1810.11203.

51. Lyngby P, Thygesen KS. Data-driven discovery of 2D materials by deep generative models. npj Comput Mater 2022;8:232.

52. Gebauer NWA, Gastegger M, Hessmann SSP, Müller KR, Schütt KT. Inverse design of 3d molecular structures with conditional generative neural networks. Nat Commun 2022;13:973.

53. Moreno JJG, Nolan M. Ab initio study of the atomic level structure of the rutile TiO2(110)-titanium nitride (TiN) interface. ACS Appl Mater Interfaces 2017;9:38089-100.

54. Andrews LC, Bernstein HJ. The geometry of niggli reduction I: the boundary polytopes of the niggli cone. arXiv. [Preprint.] Aug 9, 2013 [accessed 2023 October 14]. Available from: https://arxiv.org/abs/1203.5146.

55. Andrews LC, Bernstein HJ. The geometry of niggli reduction II: BGAOL--embedding niggli reduction. arXiv. [Preprint.] May 28, 2013 [accessed 2023 October 14]. Available from: https://arxiv.org/abs/1305.6561.

56. McGill KJ, Asadi M, Karakasheva MT, Andrews LC, Bernstein HJ. The geometry of niggli reduction III: SAUC--search of alternate unit cells. arXiv. [Preprint.] Jul 9, 2013 [accessed 2023 October 14]. Available from: https://arxiv.org/abs/1307.1811.

57. Wang Y, Miao M, Lv J, et al. An effective structure prediction method for layered materials based on 2D particle swarm optimization algorithm. J Chem Phys 2012;137:224108.

58. Zhu L, Liu H, Pickard CJ, Zou G, Ma Y. Reactions of xenon with iron and nickel are predicted in the Earth’s inner core. Nat Chem 2014;6:644-8.

59. Lv J, Wang Y, Zhu L, Ma Y. Particle-swarm structure prediction on clusters. J Chem Phys 2012;137:084104.

60. Lu S, Wang Y, Liu H, Miao MS, Ma Y. Self-assembled ultrathin nanotubes on diamond (100) surface. Nat Commun 2014;5:3666.

61. Gao B, Shao X, Lv J, Wang Y, Ma Y. Structure prediction of atoms adsorbed on two-dimensional layer materials: method and applications. J Phys Chem C 2015;119:20111-8.

62. Bowles JS, Mackenzie JK. The crystallography of martensite transformations I. Acta Metall 1954;2:129-37.

63. Mackenzie JK, Bowles JS. The crystallography of martensite transformations II. Acta Metall 1954;2:138-47.

64. Wayman CM. The phenomenological theory of martensite crystallography: interrelationships. Metall Mater Trans A 1994;25:1787-95.

65. Krivy I, Gruber B. A unified algorithm for determining the reduced (Niggli) cell. Acta Crystallogr A 1976;32:297-8.

66. Gruber B. The relationship between reduced cells in a general Bravais lattice. Acta Crystallogr A 1973;29:433-40.

67. Grosse-Kunstleve RW, Sauter NK, Adams PD. Numerically stable algorithms for the computation of reduced unit cells. Acta Crystallogr A 2004;60:1-6.

68. Andrews LC, Bernstein HJ, Sauter NK. Selling reduction versus Niggli reduction for crystallographic lattices. Acta Crystallogr A Found Adv 2019;75:115-20.

69. Pyykkö P, Atsumi M. Molecular single-bond covalent radii for elements 1-118. Chemistry 2009;15:186-97.

70. Isayev O, Oses C, Toher C, Gossett E, Curtarolo S, Tropsha A. Universal fragment descriptors for predicting properties of inorganic crystals. Nat Commun 2017;8:15679.

71. Blatov VA. Voronoi-dirichlet polyhedra in crystal chemistry: theory and applications. Crystallogr Rev 2004;10:249-318.

72. Ford LR, Fulkerson DR. Maximal flow through a network. Can j math 1956;8:399-404.

73. Witman M, Ling S, Boyd P, et al. Cutting materials in half: a graph theory approach for generating crystal surfaces and its prediction of 2D zeolites. ACS Cent Sci 2018;4:235-45.

74. Li YF. First-principles simulations for morphology and structural evolutions of catalysts in oxygen evolution reaction. ChemSusChem 2019;12:1846-57.

75. Li JL, Li YF, Liu ZP. In situ structure of a Mo-doped Pt-Ni catalyst during electrochemical oxygen reduction resolved from machine learning-based grand canonical global optimization. JACS Au 2023;3:1162-75.

76. Huang SD, Shang C, Kang PL, Liu ZP. Atomic structure of boron resolved using machine learning and global sampling. Chem Sci 2018;9:8644-55.

77. Zhang XJ, Shang C, Liu ZP. From atoms to fullerene: stochastic surface walking solution for automated structure prediction of complex material. J Chem Theory Comput 2013;9:3252-60.

78. Ma S, Huang SD, Liu ZP. Dynamic coordination of cations and catalytic selectivity on zinc–chromium oxide alloys during syngas conversion. Nat Catal 2019;2:671-7.

79. Shi YF, Kang PL, Shang C, Liu ZP. Methanol synthesis from CO2/CO mixture on Cu-Zn catalysts from microkinetics-guided machine learning pathway search. J Am Chem Soc 2022;144:13401-14.

80. Liu QY, Shang C, Liu ZP. In situ active site for fe-catalyzed fischer-tropsch synthesis: recent progress and future challenges. J Phys Chem Lett 2022;13:3342-52.

81. Lin C, Li JL, Li X, et al. In-situ reconstructed Ru atom array on α-MnO2 with enhanced performance for acidic water oxidation. Nat Catal 2021;4:1012-23.

82. Behler J, Parrinello M. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys Rev Lett 2007;98:146401.

83. Behler J. Representing potential energy surfaces by high-dimensional neural network potentials. J Phys Condens Matter 2014;26:183001.

84. Schütt KT, Sauceda HE, Kindermans PJ, Tkatchenko A, Müller KR. SchNet - a deep learning architecture for molecules and materials. J Chem Phys 2018;148:241722.

85. Park CW, Wolverton C. Developing an improved crystal graph convolutional neural network framework for accelerated materials discovery. Phys Rev Mater 2020;4:063801.

86. Bartók AP, Payne MC, Kondor R, Csányi G. Gaussian approximation potentials: the accuracy of quantum mechanics, without the electrons. Phys Rev Lett 2010;104:136403.

87. Lu X, Xie Z, Wu X, Li M, Cai W. Hydrogen storage metal-organic framework classification models based on crystal graph convolutional neural networks. Chem Eng Sci 2022;259:117813.

88. Westermayr J, Gastegger M, Marquetand P. Combining SchNet and SHARC: the SchNarc machine learning approach for excited-state dynamics. J Phys Chem Lett 2020;11:3828-34.

89. Li H, Guo Y, Robertson J, Okuno Y. Ab-initio simulations of higher Miller index Si:SiO2 interfaces for fin field effect transistor and nanowire transistors. J Appl Phys 2016;119:054103.

90. Korkin A, Greer JC, Bersuker G, Karasiev VV, Bartlett RJ. Computational design of Si/SiO2 interfaces: Stress and strain on the atomic scale. Phys Rev B 2006;73:165312.

91. Ogata S, Ohno S, Tanaka M, Mori T, Horikawa T, Yasuda T. SiO2/Si interfaces on high-index surfaces: re-evaluation of trap densities and characterization of bonding structures. Appl Phys Lett 2011;98:092906.

92. Colinge JP, Quinn AJ, Floyd L, et al. Low-temperature electron mobility in Trigate SOI MOSFETs. IEEE Electron Device Lett 2006;27:120-2.

93. Li YF. First-principles prediction of the ZnO morphology in the perovskite solar cell. J Phys Chem C 2019;123:14164-72.

94. Kresse G, Furthmüller J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys Rev B Condens Matter 1996;54:11169-86.

95. Hutter J, Iannuzzi M, Schiffmann F, VandeVondele J. cp2k: atomistic simulations of condensed matter systems. WIREs Comput Mol Sci 2014;4:15-25.

96. Dovesi R, Erba A, Orlando R, et al. Quantum-mechanical condensed matter simulations with CRYSTAL. WIREs Comput Mol Sci 2018;8:e1360.

97. Andolina CM, Saidi WA. Highly transferable atomistic machine-learning potentials from curated and compact datasets across the periodic table. Digit Discov 2023;2:1070-7.

98. Bayerl D, Andolina CM, Dwaraknath S, Saidi WA. Convergence acceleration in machine learning potentials for atomistic simulations. Digit Discov 2022;1:61-9.



