Factors that limit the size of the input and output of a neural network include memory requirements for the network states/activations to compute gradients, as well as memory for the convolutional kernels or other weights. The memory restriction is especially limiting for applications where we want to learn how to map volumetric data to the desired output, such as video-to-video. Recently developed fully reversible neural networks enable gradient computations using storage of the network states for only a couple of layers. While this saves a tremendous amount of memory, it is the convolutional kernels that take up most of the memory if fully reversible networks contain multiple invertible pooling/coarsening layers. Invertible coarsening operators such as the orthogonal wavelet transform cause the number of channels to grow explosively. We address this issue by combining fully reversible networks with layers that contain the convolutional kernels directly in a compressed form. Specifically, we introduce a layer that has a symmetric block-low-rank structure. In spirit, this layer is similar to bottleneck and squeeze-and-expand structures. We obtain symmetry by construction, and a combination of notation and flattening of tensors allows us to interpret these network structures in linear-algebraic fashion as a block-low-rank matrix in factorized form and observe various properties. A video segmentation example shows that we can train a network to segment the entire video in one go, which would not be possible, in terms of memory requirements, using nonreversible networks and previously proposed reversible networks.
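As a rough illustration (not the paper's exact layer; the channel count, rank, and factor `B` below are hypothetical), a symmetric block-low-rank matrix can be stored and applied in factorized form, so the full kernel matrix is never formed:

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, rank = 64, 8                      # rank << n_channels compresses the kernel
B = rng.standard_normal((rank, n_channels))   # only the factor B is stored

# K = B^T B is symmetric by construction
K = B.T @ B
assert np.allclose(K, K.T)

# Storage comparison: full matrix vs. factor
full_params = n_channels * n_channels         # 4096
factored_params = rank * n_channels           # 512

# Applying the layer never forms K explicitly:
x = rng.standard_normal(n_channels)
y = B.T @ (B @ x)                             # O(r*n) work instead of O(n^2)
assert np.allclose(y, K @ x)
```

The same idea extends to blocks of flattened convolution kernels; the point is that symmetry and low rank come from the factorization itself, not from a constraint that must be enforced during training.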
We propose algorithms and software for computing projections onto the intersection of multiple convex and non-convex constraint sets. The software package, called SetIntersectionProjection, is intended for the regularization of inverse problems in physical parameter estimation and image processing. The primary design criterion is working with multiple sets, which allows us to solve inverse problems with multiple pieces of prior knowledge. Our algorithms outperform the well-known Dykstra's algorithm when individual sets are not easy to project onto, because we exploit similarities between constraint sets. Other design choices that make the software fast and practical to use include recently developed automatic selection methods for auxiliary algorithm parameters, fine- and coarse-grained parallelism, and a multilevel acceleration scheme. We provide implementation details and examples that show how the software can be used to regularize inverse problems. Results show that we benefit from working with all available prior information and are not limited to one or two regularizers because of algorithmic, computational, or hyperparameter selection issues.
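For reference, the baseline Dykstra's algorithm is short to state. The sketch below, with two hypothetical sets (a box and a Euclidean-norm ball), shows the cyclic structure with correction terms; the package's own solvers differ in how they exploit similarities between sets:

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def proj_ball(x, radius=1.0):
    """Projection onto the Euclidean ball of given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def dykstra(y, projections, n_iter=200):
    """Dykstra's algorithm: project y onto the intersection of convex sets."""
    x = y.copy()
    p = [np.zeros_like(y) for _ in projections]   # one correction term per set
    for _ in range(n_iter):
        for i, proj in enumerate(projections):
            x_old = x + p[i]
            x = proj(x_old)
            p[i] = x_old - x                      # update the correction
    return x

y = np.array([2.0, 2.0, -1.0])
x = dykstra(y, [proj_box, proj_ball])
# The result lies (numerically) in both sets:
assert np.all(x >= -1e-6) and np.all(x <= 1 + 1e-6)
assert np.linalg.norm(x) <= 1 + 1e-6
```

Unlike plain alternating projections, the correction terms make Dykstra converge to the actual projection of `y` onto the intersection, not just to some point in it.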
Many works on inverse problems in the imaging sciences consider regularization via one or more penalty functions or constraint sets. When the models/images are not easily described using one or a few penalty functions/constraints, additive model descriptions for regularization lead to better imaging results. These include cartoon-texture decomposition, morphological component analysis, and robust principal component analysis; methods that typically rely on penalty functions. We propose a regularization framework, based on the Minkowski set, that merges the strengths of additive models and constrained formulations. We generalize the Minkowski set such that the model parameters are the sum of two components, each of which is constrained to an intersection of sets. Furthermore, the sum of the components is also an element of another intersection of sets. These generalizations allow us to include multiple pieces of prior knowledge on each of the components, as well as on the sum of the components, which is necessary to ensure physical feasibility of partial-differential-equation-based parameter estimation problems. We derive the projection operation onto the generalized Minkowski sets and construct an algorithm based on the alternating direction method of multipliers. We illustrate how we benefit from using more prior knowledge in the form of the generalized Minkowski set using seismic waveform inversion and video background-anomaly separation.
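A toy version of the additive idea, with made-up sets (bounded values for one component, a few-spikes sparsity set for the other), can be sketched with simple alternating projections. This is only illustrative and is not the ADMM-based projection derived in the paper:

```python
import numpy as np

def proj_box(x, lo, hi):
    return np.clip(x, lo, hi)

def proj_ksparse(x, k):
    """Projection onto the (non-convex) set of k-sparse vectors."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # keep the k largest magnitudes
    out[idx] = x[idx]
    return out

# Hypothetical model: a smooth trend plus a few spikes
n = 50
background = np.linspace(1.5, 3.0, n)            # e.g. a velocity trend
spikes = np.zeros(n); spikes[[10, 30]] = 0.8     # two anomalies
m = background + spikes

# Alternating scheme: m = u + v, u value-bounded, v at most 2-sparse
u, v = np.zeros(n), np.zeros(n)
for _ in range(100):
    u = proj_box(m - v, 1.0, 3.0)    # component 1: bounded values
    v = proj_ksparse(m - u, 2)       # component 2: at most 2 spikes
assert np.allclose(u + v, m, atol=1e-6)
```

The generalized Minkowski set additionally constrains the sum `u + v` itself to an intersection of sets, which this toy loop does not attempt.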
Convolutional Neural Networks (CNNs) have recently seen tremendous success in various computer vision tasks. However, their application to problems with high-dimensional input and output, such as high-resolution image and video segmentation or 3D medical imaging, has been limited by various factors. Primarily, in the training stage, it is necessary to store network activations for backpropagation. In these settings, the memory requirements associated with storing activations can exceed what is feasible with current hardware, especially for problems in 3D. Motivated by the propagation of signals over physical networks governed by the hyperbolic telegraph equation, we introduce in this work a fully conservative hyperbolic network for problems with high-dimensional input and output. We introduce a coarsening operation that allows completely reversible CNNs by using a learnable discrete wavelet transform and its inverse to both coarsen and interpolate the network state and change the number of channels. We show that fully reversible networks are able to achieve results comparable to the state of the art in 4D time-lapse hyperspectral image segmentation and full 3D video segmentation, with a much lower memory footprint that is a constant independent of the network depth. We also extend the use of such networks to Variational Autoencoders with high-resolution input and output.
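The coarsening idea can be sketched with a fixed (non-learnable) orthogonal Haar step; the paper's operator is a learnable wavelet transform, so treat this as a minimal stand-in that shows the channel growth and the exact invertibility:

```python
import numpy as np

def haar_coarsen(x):
    """One orthogonal 2D Haar step: (C, H, W) -> (4C, H/2, W/2), invertible."""
    a = x[:, 0::2, 0::2]; b = x[:, 0::2, 1::2]
    c = x[:, 1::2, 0::2]; d = x[:, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return np.concatenate([ll, lh, hl, hh], axis=0)

def haar_refine(y):
    """Exact inverse of haar_coarsen (the transform matrix is its own inverse)."""
    C = y.shape[0] // 4
    ll, lh, hl, hh = y[:C], y[C:2*C], y[2*C:3*C], y[3*C:]
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    x = np.empty((C, 2 * ll.shape[1], 2 * ll.shape[2]))
    x[:, 0::2, 0::2] = a; x[:, 0::2, 1::2] = b
    x[:, 1::2, 0::2] = c; x[:, 1::2, 1::2] = d
    return x

x = np.random.default_rng(0).standard_normal((3, 8, 8))
y = haar_coarsen(x)
assert y.shape == (12, 4, 4)                              # 4x the channels, 1/4 the pixels
assert np.allclose(haar_refine(y), x)                     # perfectly invertible
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))   # orthogonal: norm preserved
```

Because no information is destroyed by coarsening, activations can be recomputed from later layers instead of stored, which is what keeps the memory footprint constant in depth.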
In this paper we make a comparison between wave-equation-based inversions using the adjoint-state and penalty methods. While the adjoint-state method involves the minimization of a data misfit and exact solutions of the wave equation for the current velocity model, the penalty method aims to first find a wavefield that jointly fits the data and honours the physics, in a least-squares sense. Given this reconstructed wavefield, which is a proxy for the true wavefield in the true model, we calculate updates for the velocity model. Aside from being less nonlinear (the acoustic wave equation is linear in the wavefield and in the model parameters, but not in both jointly), the inversion is carried out over a solution space that includes both the model and the wavefield. This larger search space allows the algorithm to circumnavigate local minima, very much in the same way as recently proposed model extensions try to accomplish. We include examples for low frequencies, where we compare full-waveform inversion results for both methods, for good and bad starting models, and for high frequencies, where we compare reverse-time migration with linearized imaging based on wavefield-reconstruction inversion. The examples confirm the expected benefits of the proposed method.
The large spatial/frequency scale of hyperspectral and airborne magnetic and gravitational data causes memory issues when using convolutional neural networks for (sub)surface characterization. Recently developed fully reversible networks can mostly avoid memory limitations by virtue of having a low and fixed memory requirement for storing network states, as opposed to the typical linear memory growth with depth. Fully reversible networks enable the training of deep neural networks that take in entire data volumes and create semantic segmentations in one go. This approach avoids the need to work in small patches or to map a data patch to the class of just the central pixel. The cross-entropy loss function requires small modifications to work in conjunction with a fully reversible network and to learn from sparsely sampled labels without ever seeing fully labeled ground truth. We show examples of land-use change detection from hyperspectral time-lapse data, and of regional aquifer mapping from airborne geophysical and geological data.
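The modification to cross-entropy amounts to evaluating the loss only where labels exist. A sketch, assuming per-pixel logits and a hypothetical boolean label mask:

```python
import numpy as np

def partial_cross_entropy(logits, labels, mask):
    """Cross-entropy averaged over labeled pixels only (mask == True).

    logits: (n_pixels, n_classes); labels: (n_pixels,) ints,
    arbitrary wherever mask is False.
    """
    z = logits - logits.max(axis=1, keepdims=True)            # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_pixel = -log_probs[np.arange(len(labels)), labels]
    return per_pixel[mask].mean()                             # unlabeled pixels never contribute

rng = np.random.default_rng(0)
logits = rng.standard_normal((100, 3))
labels = rng.integers(0, 3, size=100)
mask = np.zeros(100, dtype=bool)
mask[rng.choice(100, 10, replace=False)] = True               # only 10 labeled pixels

loss_sparse = partial_cross_entropy(logits, labels, mask)
# Changing logits at unlabeled pixels leaves the loss unchanged:
logits2 = logits.copy(); logits2[~mask] += 100.0
assert np.isclose(loss_sparse, partial_cross_entropy(logits2, labels, mask))
```

The gradient of this loss is zero at every unlabeled pixel, so the network trains on full volumes while only ever being supervised at the sparsely sampled labels.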
Geologic interpretation of large stacked or migrated seismic images can be a time-consuming task for seismic interpreters. Neural-network-based semantic segmentation provides fast and automatic interpretations, provided a sufficient number of example interpretations are available. Networks that map image-to-image have recently emerged as powerful tools for automatic segmentation, but standard implementations require fully interpreted examples. Generating training labels for large images manually is time consuming. We introduce a partial loss function and labeling strategies such that networks can learn from partially interpreted seismic images. This strategy requires only a small number of annotated pixels per seismic image. Tests on seismic images and interpretation information from the Sea of Ireland show that we obtain high-quality predicted interpretations from a small number of large seismic images. The combination of a partial loss function, a multi-resolution network that explicitly takes small- and large-scale geological features into account, and new labeling strategies makes neural networks a more practical tool for automatic seismic interpretation.
SEG Technical Program Expanded Abstracts 2019, 2019
Recent years saw a surge of interest in seismic waveform inversion approaches based on quadratic-penalty or augmented-Lagrangian methods, including Wavefield Reconstruction Inversion. These methods typically need to solve a least-squares sub-problem that contains a discretization of the Helmholtz equation. Memory requirements for direct solvers are often prohibitively large in three dimensions, which has limited the examples in the literature to two dimensions. We present an algorithm that uses iterative Helmholtz solvers as a black box to solve the least-squares problem corresponding to 3D grids. This algorithm enables Wavefield Reconstruction Inversion and related formulations in three dimensions. Our new algorithm also includes a root-finding method to convert a penalty into a constraint on the data misfit without additional computational cost, by reusing precomputed quantities. Numerical experiments show that the cost of parallel communication and other computations is small compared to the main cost of solving one Helmholtz problem per source and one per receiver.
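The penalty-to-constraint conversion can be illustrated on a generic Tikhonov problem: the data misfit of the penalized solution is monotone in the penalty weight, so a root-finding loop recovers the weight whose solution sits on a target misfit level. This sketch re-solves the problem at every step, whereas the paper's method avoids that extra cost by reusing precomputed quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)

def misfit(lam):
    """Data misfit of the Tikhonov solution for penalty weight lam."""
    x = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)
    return np.linalg.norm(A @ x - b)

# ||A x(lam) - b|| grows monotonically with lam, so bisection (in log
# space) finds the weight matching any reachable target misfit.
target = 0.5 * (misfit(1e-6) + misfit(1e6))   # hypothetical misfit level
lo, hi = 1e-6, 1e6
for _ in range(100):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if misfit(mid) < target else (lo, mid)
assert abs(misfit(np.sqrt(lo * hi)) - target) < 1e-6
```

In practice the target would come from an estimate of the noise level in the data, turning a hard-to-choose penalty weight into an interpretable constraint on the data misfit.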
SEG Technical Program Expanded Abstracts 2019, 2019
Geological interpretation of seismic images is a visual task that can be automated by training neural networks. While neural networks have been shown to be effective at various interpretation tasks, a fundamental challenge is the lack of labeled data points in the subsurface. For example, the interpolation and extrapolation of well-based lithology using seismic images relies on a small number of known labels. Besides well-known data augmentation techniques, as well as regularization of the network output, we propose and test another approach to deal with the lack of labels. Non-learning-based horizon trackers work very well in the shallow subsurface, where seismic images are of higher quality and the geological units are roughly layered. We test whether these segmented, shallow units can help train neural networks to predict deeper geological units that are not layered and flat. We show that knowledge of shallow geological units helps to predict deeper units when there are only a few labels for training, using a dataset from the Sea of Ireland. We employ U-net-based multi-resolution networks, and we show that these networks can be described using matrix-vector product notation in a similar fashion as standard geophysical inverse problems.
There has been a surge of interest in neural networks for the interpretation of seismic images over the last few years. Network-based learning methods can provide fast and accurate automatic interpretation, provided that there are many training labels. We provide an introduction to the field for geophysicists who are familiar with the framework of forward modeling and inversion. We explain the similarities and differences between deep networks and other geophysical inverse problems and show their utility in solving problems such as lithology interpolation between wells, horizon tracking, and segmentation of seismic images. The benefits of our approach are demonstrated on field data from the Sea of Ireland and the North Sea.
Detecting a specific horizon in seismic images is a valuable tool for geologic interpretation. Because hand picking the locations of the horizon is a time-consuming process, automated computational methods were developed starting three decades ago. Until now, most networks have been trained on data that were created by cutting larger seismic images into many small patches. This limits the network's ability to learn from large-scale geologic structures. Moreover, currently available networks and training strategies require label patches that have full and continuous horizon picks (annotations), which are also time-consuming to generate. We have developed a projected loss function that enables training on labels with just a few annotated pixels and is unaffected by the remaining unknown label pixels. We use this loss function for training convolutional networks with a multiresolution structure, including variants of the U-net. Our networks learn from a small number of large seismic images...
Full-waveform inversion is challenging in complex geologic areas. Even when provided with an accurate starting model, the inversion algorithms often struggle to update the velocity model. Compared with other areas in applied geophysics, including prior information in full-waveform inversion is still in its relative infancy. In part, this is because it is difficult to incorporate prior information that relates to geologic settings where strong discontinuities in the velocity model dominate, because these settings call for nonsmooth regularizations. We tackle this problem by including constraints on the spatial variations and value ranges of the inverted velocities, as opposed to adding penalties to the objective, which is more customary in mainstream geophysical inversion. By demonstrating the lack of predictability of edge-preserving inversion when the regularization is in the form of an added penalty term, we advocate the inclusion of constraints instead. Our examples ...
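The constrained approach can be sketched with projected gradient descent on a toy linear inverse problem, where a hypothetical value-range constraint is enforced by projection (clipping) at every iteration rather than through an added penalty term:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((40, 20))                         # toy forward operator
m_true = np.clip(rng.normal(2.0, 0.5, 20), 1.5, 3.0)      # feasible "velocity" model
d = G @ m_true                                            # noise-free data

def proj(m, lo=1.5, hi=3.0):
    """Projection onto the value-range constraint (other sets work the same way)."""
    return np.clip(m, lo, hi)

m = np.full(20, 2.0)                                      # feasible start
step = 1.0 / np.linalg.norm(G, 2)**2                      # 1/L for the quadratic misfit
for _ in range(2000):
    m = proj(m - step * G.T @ (G @ m - d))                # gradient step, then project

assert np.all(m >= 1.5) and np.all(m <= 3.0)              # every iterate is feasible
assert np.linalg.norm(G @ m - d) < 1e-3 * np.linalg.norm(d)
```

Unlike a penalty weight, the bounds `lo` and `hi` have direct physical meaning, and the iterates are feasible at every step, which is the predictability argument made in the abstract.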
Taking a portfolio perspective on option pricing and hedging, we show that within the standard Black-Scholes-Merton framework large portfolios of options can be hedged without risk in discrete time. The nature of the hedge portfolio in the limit of large portfolio size is substantially different from the standard continuous-time delta hedge. The underlying values of the options in our framework are driven by systematic and idiosyncratic risk factors. Instead of linearly (delta) hedging the total risk of each option separately, the correct hedge portfolio in discrete time eliminates linear (delta) as well as second-order (gamma) and higher-order exposures to the systematic risk factor only. The idiosyncratic risk is not hedged, but diversified. Our result shows that preference-free valuation of option portfolios using linear assets only is applicable in discrete time as well. The price paid for this result is that the number of securities in the portfolio has to grow indefinitely. This ties the literature on option pricing and hedging closer to the APT literature in its focus on systematic risk factors. For portfolios of finite size, the optimal hedge strategy makes a trade-off between hedging linear idiosyncratic and higher-order systematic risk.
We construct a class of intersecting brane solutions with horizon geometries of the form adS_k × S^l × S^m × E^n. We describe how all these solutions are connected through the addition of a wave and/or monopoles. All solutions exhibit supersymmetry enhancement near the horizon. Furthermore, we argue that string theory on these spaces is dual to specific superconformal field theories in two dimensions whose symmetry algebra in all cases contains the large N=4 algebra A_γ. Implications for gauged supergravities are also discussed.
We consider a series of duality transformations that leads to a constant shift in the harmonic functions appearing in the description of a configuration of branes. This way, for several intersections of branes, we can relate the original brane configuration, which is asymptotically flat, to a geometry which is locally isometric to adS_k × E^l × S^m. These results imply that certain branes are dual to supersingleton field theories. We also discuss the implications of our results for supersymmetry enhancement and for supergravity theories in diverse dimensions.
We discuss a recently proposed novel method for waveform inversion: Wavefield Reconstruction Inversion (WRI). As opposed to conventional FWI, which attempts to minimize the error between observed and predicted data obtained by solving a wave equation, WRI reconstructs a wavefield from the data and extracts a model update from this wavefield by minimizing the wave-equation residual. The method does not require explicit computation of an adjoint wavefield, as all the necessary information is contained in the reconstructed wavefield. We show how the corresponding model updates can be interpreted physically, analogously to the conventional imaging-condition-based approach.
By carefully analyzing the relations between operator methods and the discretized and continuum path integral formulations of quantum-mechanical systems, we have found the correct Feynman rules for one-dimensional path integrals in curved spacetime. Although the prescription for how to deal with the products of distributions that appear in the computation of Feynman diagrams in configuration space is surprising, this prescription follows unambiguously from the discretized path integral. We check our results by an explicit two-loop calculation.
Factors that limit the size of the input and output of a neural network include memory requiremen... more Factors that limit the size of the input and output of a neural network include memory requirements for the network states/activations to compute gradients, as well as memory for the convolutional kernels or other weights. The memory restriction is especially limiting for applications where we want to learn how to map volumetric data to the desired output, such as video-to-video. Recently developed fully reversible neural networks enable gradient computations using storage of the network states for a couple of layers only. While this saves a tremendous amount of memory, it is the convolutional kernels that take up most memory if fully reversible networks contain multiple invertible pooling/coarsening layers. Invertible coarsening operators such as the orthogonal wavelet transform cause the number of channels to grow explosively. We address this issue by combining fully reversible networks with layers that contain the convolutional kernels in a compressed form directly. Specifically, we introduce a layer that has a symmetric block-lowrank structure. In spirit, this layer is similar to bottleneck and squeeze-and-expand structures. We contribute symmetry by construction, and a combination of notation and flattening of tensors allows us to interpret these network structures in linear algebraic fashion as a block-low-rank matrix in factorized form and observe various properties. A video segmentation example shows that we can train a network to segment the entire video in one go, which would not be possible, in terms of memory requirements, using nonreversible networks and previously proposed reversible networks.
We propose algorithms and software for computing projections onto the intersection of multiple co... more We propose algorithms and software for computing projections onto the intersection of multiple convex and non-convex constraint sets. The software package, called SetIntersectionProjection, is intended for the regularization of inverse problems in physical parameter estimation and image processing. The primary design criterion is working with multiple sets, which allows us to solve inverse problems with multiple pieces of prior knowledge. Our algorithms outperform the well known Dykstra's algorithm when individual sets are not easy to project onto because we exploit similarities between constraint sets. Other design choices that make the software fast and practical to use, include recently developed automatic selection methods for auxiliary algorithm parameters, fine and coarse grained parallelism, and a multilevel acceleration scheme. We provide implementation details and examples that show how the software can be used to regularize inverse problems. Results show that we benefit from working with all available prior information and are not limited to one or two regularizers because of algorithmic, computational, or hyper-parameter selection issues.
Many works on inverse problems in the imaging sciences consider regularization via one or more pe... more Many works on inverse problems in the imaging sciences consider regularization via one or more penalty functions or constraint sets. When the models/images are not easily described using one or a few penalty functions/constraints, additive model descriptions for regularization lead to better imaging results. These include cartoon-texture decomposition, morphological component analysis, and robust principal component analysis; methods that typically rely on penalty functions. We propose a regularization framework, based on the Minkowski set, that merges the strengths of additive models and constrained formulations. We generalize the Minkowski set, such that the model parameters are the sum of two components, each of which is constrained to an intersection of sets. Furthermore, the sum of the components is also an element of another intersection of sets. These generalizations allow us to include multiple pieces of prior knowledge on each of the components, as well as on the sum of components, which is necessary to ensure physical feasibility of partial-differential-equation based parameters estimation problems. We derive the projection operation onto the generalized Minkowski sets and construct an algorithm based on the alternating direction method of multipliers. We illustrate how we benefit from using more prior knowledge in the form of the generalized Minkowski set using seismic waveform inversion and video background-anomaly separation.
Convolutional Neural Networks (CNN) have recently seen tremendous success in various computer vis... more Convolutional Neural Networks (CNN) have recently seen tremendous success in various computer vision tasks. However, their application to problems with high dimensional input and output, such as high-resolution image and video segmentation or 3D medical imaging, has been limited by various factors. Primarily, in the training stage, it is necessary to store network activations for back propagation. In these settings, the memory requirements associated with storing activations can exceed what is feasible with current hardware, especially for problems in 3D. Motivated by the propagation of signals over physical networks, that are governed by the hyperbolic Telegraph equation, in this work we introduce a fully conservative hyperbolic network for problems with high dimensional input and output. We introduce a coarsening operation that allows completely reversible CNNs by using a learnable Discrete Wavelet Transform and its inverse to both coarsen and interpolate the network state and change the number of channels. We show that fully reversible networks are able to achieve results comparable to the state of the art in 4D time-lapse hyper spectral image segmentation and full 3D video segmentation, with a much lower memory footprint that is a constant independent of the network depth. We also extend the use of such networks to Variational Auto Encoders with high resolution input and output.
In this paper we make a comparison between wave-equation based inversions based on the adjoint-st... more In this paper we make a comparison between wave-equation based inversions based on the adjoint-state and penalty methods. While the adjoint-state method involves the minimization of a data-misfit and exact solutions of the wave-equation for the current velocity model, the penaltymethod aims to first find a wavefield that jointly fits the data and honours the physics, in a leastsquares sense. Given this reconstructed wavefield, which is a proxy for the true wavefield in the true model, we calculate updates for the velocity model. Aside from being less nonlinear-the acoustic wave equation is linear in the wavefield and model parameters but not in both-the inversion is carried out over a solution space that includes both the model and the wavefield. This larger search space allows the algortihm to circumnavigate local minima, very much in the same way as recently proposed model extentions try to acomplish. We include examples for low frequencies, where we compare full-waveform inversion results for both methods, for good and bad starting models, and for high frequencies where we compare reverse-time migration with linearized imaging based on wavefield-reconstruction inversion. The examples confirm the expected benefits of the proposed method.
The large spatial/frequency scale of hyperspectral and airborne magnetic and gravitational data c... more The large spatial/frequency scale of hyperspectral and airborne magnetic and gravitational data causes memory issues when using convolutional neural networks for (sub-) surface characterization. Recently developed fully reversible networks can mostly avoid memory limitations by virtue of having a low and fixed memory requirement for storing network states, as opposed to the typical linear memory growth with depth. Fully reversible networks enable the training of deep neural networks that take in entire data volumes, and create semantic segmentations in one go. This approach avoids the need to work in small patches or map a data patch to the class of just the central pixel. The cross-entropy loss function requires small modifications to work in conjunction with a fully reversible network and learn from sparsely sampled labels without ever seeing fully labeled ground truth. We show examples from land-use change detection from hyperspectral time-lapse data, and regional aquifer mapping from airborne geophysical and geological data.
Geologic interpretation of large seismic stacked or migrated seismic images can be a time-consumi... more Geologic interpretation of large seismic stacked or migrated seismic images can be a time-consuming task for seismic interpreters. Neural network based semantic segmentation provides fast and automatic interpretations, provided a sufficient number of example interpretations are available. Networks that map from image-to-image emerged recently as powerful tools for automatic segmentation, but standard implementations require fully interpreted examples. Generating training labels for large images manually is time consuming. We introduce a partial loss-function and labeling strategies such that networks can learn from partially interpreted seismic images. This strategy requires only a small number of annotated pixels per seismic image. Tests on seismic images and interpretation information from the Sea of Ireland show that we obtain high-quality predicted interpretations from a small number of large seismic images. The combination of a partial-loss function, a multi-resolution network that explicitly takes small and large-scale geological features into account, and new labeling strategies make neural networks a more practical tool for automatic seismic interpretation.
SEG Technical Program Expanded Abstracts 2019, 2019
Recent years saw a surge of interest in seismic waveform inversion approaches based on quadratic-... more Recent years saw a surge of interest in seismic waveform inversion approaches based on quadratic-penalty or augmented-Lagrangian methods, including Wavefield Reconstruction Inversion. These methods typically need to solve a least-squares sub-problem that contains a discretization of the Helmholtz equation. Memory requirements for direct solvers are often prohibitively large in three dimensions, and this limited the examples in the literature to two dimensions. We present an algorithm that uses iterative Helmholtz solvers as a blackbox to solve the least-squares problem corresponding to 3D grids. This algorithm enables Wavefield Reconstruction Inversion and related formulations, in three dimensions. Our new algorithm also includes a root-finding method to convert a penalty into a constraint on the data-misfit without additional computational cost, by reusing precomputed quantities. Numerical experiments show that the cost of parallel communication and other computations are small compared to the main cost of solving one Helmholtz problem per source and one per receiver.
SEG Technical Program Expanded Abstracts 2019, 2019
Geological interpretation of seismic images is a visual task that can be automated by training ne... more Geological interpretation of seismic images is a visual task that can be automated by training neural networks. While neural networks have shown to be effective at various interpretation tasks, a fundamental challenge is the lack of labeled data points in the subsurface. For example, the interpolation and extrapolation of well-based lithology using seismic images relies on a small number of known labels. Besides well-known data augmentation techniques, as well as regularization of the network output, we propose and test another approach to deal with the lack of labels. Non learning-based horizon trackers work very well in the shallow subsurface where seismic images are of higher quality and the geological units are roughly layered. We test if these segmented and shallow units can help train neural networks to predict deeper geological units that are not layered and flat. We show that knowledge of shallow geological units helps to predict deeper units when there are only a few labels for training using a dataset from the Sea of Ireland. We employ U-net based multi-resolution networks, and we show that these networks can be described using matrix-vector product notation in a similar fashion as standard geophysical inverse problems.
There has been a surge of interest in neural networks for the interpretation of seismic images over the last few years. Network-based learning methods can provide fast and accurate automatic interpretation, provided that there are many training labels. We provide an introduction to the field for geophysicists who are familiar with the framework of forward modeling and inversion. We explain the similarities and differences between deep networks and other geophysical inverse problems and show their utility in solving problems such as lithology interpolation between wells, horizon tracking, and segmentation of seismic images. The benefits of our approach are demonstrated on field data from the Sea of Ireland and the North Sea.
Detecting a specific horizon in seismic images is a valuable tool for geologic interpretation. Because hand picking the locations of the horizon is a time-consuming process, automated computational methods have been developed over the last three decades. Until now, most networks have been trained on data that were created by cutting larger seismic images into many small patches. This limits the network's ability to learn from large-scale geologic structures. Moreover, currently available networks and training strategies require label patches that have full and continuous horizon picks (annotations), which are also time-consuming to generate. We have developed a projected loss function that enables training on labels with just a few annotated pixels and is unaffected by the remaining unknown label pixels. We use this loss function for training convolutional networks with a multiresolution structure, including variants of the U-net. Our networks learn from a small number of large seismic images...
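A loss of this flavor can be sketched as a cross-entropy restricted to the annotated pixels via a mask; this is a schematic stand-in, not the authors' exact projected loss:

```python
import numpy as np

def masked_bce(logits, labels, mask):
    """Binary cross-entropy evaluated only at annotated pixels.

    logits : raw network output, any shape
    labels : 0/1 horizon picks (values at unlabeled pixels are ignored)
    mask   : 1 where a pixel is annotated, 0 elsewhere
    """
    p = 1.0 / (1.0 + np.exp(-logits))                  # sigmoid
    bce = -(labels * np.log(p + 1e-12)
            + (1 - labels) * np.log(1 - p + 1e-12))
    return (bce * mask).sum() / max(mask.sum(), 1)     # average over labeled pixels only

# A 4x4 "image" with only three annotated pixels.
logits = np.zeros((4, 4))
labels = np.zeros((4, 4)); labels[1, 2] = 1.0
mask = np.zeros((4, 4)); mask[1, 1] = mask[1, 2] = mask[1, 3] = 1.0
print(round(masked_bce(logits, labels, mask), 4))      # 0.6931, i.e. log(2)
```

Because unlabeled pixels carry zero weight, the gradient with respect to the network output vanishes there, so sparse annotations can be used directly as training labels.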
Full-waveform inversion is challenging in complex geologic areas. Even when provided with an accurate starting model, the inversion algorithms often struggle to update the velocity model. Compared with other areas in applied geophysics, including prior information in full-waveform inversion is still in its relative infancy. In part, this is because it is difficult to incorporate prior information that relates to geologic settings where strong discontinuities in the velocity model dominate, because these settings call for nonsmooth regularizations. We tackle this problem by including constraints on the spatial variations and value ranges of the inverted velocities, as opposed to adding penalties to the objective, which is more customary in mainstream geophysical inversion. By demonstrating the lack of predictability of edge-preserving inversion when the regularization is in the form of an added penalty term, we advocate the inclusion of constraints instead. Our examples...
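The contrast between constraints and penalties can be sketched with projected gradient descent on a bound (value-range) constraint; the forward operator, data, and bounds below are made up for the illustration:

```python
import numpy as np

# Projected gradient on a toy least-squares misfit with bound constraints
# on the velocities; F, d, and the bounds are hypothetical.
rng = np.random.default_rng(0)
F = rng.standard_normal((30, 20))
m_true = np.linspace(2100.0, 3900.0, 20)      # synthetic "true" model (m/s)
d = F @ m_true

vmin, vmax = 2000.0, 4000.0
project = lambda m: np.clip(m, vmin, vmax)    # projection onto the box constraint

m = np.full(20, 3000.0)                       # feasible starting model
step = 1.0 / np.linalg.norm(F, 2)**2          # 1 / Lipschitz constant of the gradient
for _ in range(500):
    m = project(m - step * (F.T @ (F @ m - d)))

# Unlike a penalty method, every iterate satisfies the bounds exactly,
# with no penalty weight to tune.
print(m.min() >= vmin, m.max() <= vmax)
```

The same projected-gradient structure extends to constraints on spatial variation once a projection onto that set is available.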
Taking a portfolio perspective on option pricing and hedging, we show that within the standard Black-Scholes-Merton framework large portfolios of options can be hedged without risk in discrete time. The nature of the hedge portfolio in the limit of large portfolio size is substantially different from the standard continuous-time delta hedge. The underlying values of the options in our framework are driven by systematic and idiosyncratic risk factors. Instead of linearly (delta) hedging the total risk of each option separately, the correct hedge portfolio in discrete time eliminates linear (delta) as well as second-order (gamma) and higher-order exposures to the systematic risk factor only. The idiosyncratic risk is not hedged, but diversified. Our result shows that preference-free valuation of option portfolios using linear assets only is applicable in discrete time as well. The price paid for this result is that the number of securities in the portfolio has to grow indefinitely. This ties the literature on option pricing and hedging closer to the APT literature in its focus on systematic risk factors. For portfolios of finite size, the optimal hedge strategy makes a trade-off between hedging linear idiosyncratic risk and higher-order systematic risk.
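The idea of neutralizing only the systematic exposures can be illustrated with a generic two-instrument factor hedge; this is a textbook-style sketch with made-up exposure numbers, not the paper's large-portfolio construction:

```python
import numpy as np

# Neutralize a portfolio's first- and second-order exposure to one
# systematic factor using two hedge instruments with known exposures.
port_delta, port_gamma = 120.0, -35.0           # portfolio's factor exposures

# Rows: (delta, gamma) exposure of each hedge instrument per unit held.
H = np.array([[1.0, 0.0],      # instrument 1: linear in the factor
              [0.6, 0.8]])     # instrument 2: convex in the factor
w = np.linalg.solve(H.T, -np.array([port_delta, port_gamma]))

residual = np.array([port_delta, port_gamma]) + H.T @ w
print(np.allclose(residual, 0.0))   # True: both systematic exposures neutralized
```

Idiosyncratic risk is deliberately left out of the system above; in the paper's setting it is diversified away as the portfolio grows rather than hedged.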
We construct a class of intersecting brane solutions with horizon geometries of the form adS_k × S^l × S^m × E^n. We describe how all these solutions are connected through the addition of a wave and/or monopoles. All solutions exhibit supersymmetry enhancement near the horizon. Furthermore, we argue that string theory on these spaces is dual to specific superconformal field theories in two dimensions whose symmetry algebra in all cases contains the large N = 4 algebra A_γ. Implications for gauged supergravities are also discussed.
We consider a series of duality transformations that leads to a constant shift in the harmonic functions appearing in the description of a configuration of branes. In this way, for several intersections of branes, we can relate the original brane configuration, which is asymptotically flat, to a geometry which is locally isometric to adS_k × E^l × S^m. These results imply that certain branes are dual to supersingleton field theories. We also discuss the implications of our results for supersymmetry enhancement and for supergravity theories in diverse dimensions.
We discuss a recently proposed method for waveform inversion: Wavefield Reconstruction Inversion (WRI). As opposed to conventional FWI, which attempts to minimize the error between observed and predicted data obtained by solving a wave equation, WRI reconstructs a wavefield from the data and extracts a model update from this wavefield by minimizing the wave-equation residual. The method does not require explicit computation of an adjoint wavefield, as all the necessary information is contained in the reconstructed wavefield. We show how the corresponding model updates can be interpreted physically, analogously to the conventional imaging-condition-based approach.
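In the penalty formulation that is standard in the WRI literature (a notational sketch: P is the sampling operator, A(m) the discrete wave equation, q the source, d the observed data), the reconstructed wavefield and the model objective read:

```latex
\bar{u}(m) = \arg\min_{u} \, \|Pu - d\|_2^2 + \lambda^2 \|A(m)u - q\|_2^2,
\qquad
\phi(m) = \|A(m)\,\bar{u}(m) - q\|_2^2 .
```

Minimizing φ(m) extracts the model update from the reconstructed wavefield, which is why no separate adjoint wavefield computation is needed.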
By carefully analyzing the relations between operator methods and the discretized and continuum path-integral formulations of quantum-mechanical systems, we have found the correct Feynman rules for one-dimensional path integrals in curved spacetime. Although the prescription for dealing with the products of distributions that appear in the computation of Feynman diagrams in configuration space is surprising, it follows unambiguously from the discretized path integral. We check our results with an explicit two-loop calculation.
Papers by Bas Peters