Welcome to Byte Size Arxiv

Papers made digestible

2022-11-03

Along Similar Lines: Local Obstacle Avoidance for Long-term Autonomous Path Following

Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves long-term autonomous path-following using topometric mapping and localization from a single rich sensor stream. In this paper, we improve the capabilities of a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in changing environments. Our architecture simplifies the obstacle-perception problem to that of place-dependent change detection. We then extend the behaviour of generic sample-based motion planners to better suit the teach-and-repeat problem structure by introducing a new edge-cost metric paired with a curvilinear planning space. The resulting planner generates naturally smooth paths that avoid local obstacles while minimizing lateral path deviation to best exploit prior terrain knowledge. While we use the method with VT&R, it can be generalized to suit arbitrary path-following applications. Experimental results from online run-time analysis, unit testing, and qualitative experiments on a differential drive robot show the promise of the technique for reliable long-term autonomous operation in complex unstructured environments.
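
As a rough illustration of the planning idea (not the paper's exact metric), an edge cost in a curvilinear frame might combine progress along the taught path with a penalty on lateral offset; the names and weights below are hypothetical:

```python
def edge_cost(s1, d1, s2, d2, w_dev=5.0):
    """Illustrative edge cost in a curvilinear (path-aligned) frame.

    (s, d) = (arc length along the taught path, lateral offset from it).
    Penalizing the offset keeps a sample-based planner close to the
    previously driven path, where the terrain is known to be traversable;
    the weight w_dev is a free design choice.
    """
    along = abs(s2 - s1)                    # progress along the taught path
    lateral = abs(d2 - d1)                  # lateral motion between the nodes
    deviation = 0.5 * (abs(d1) + abs(d2))   # mean offset from the taught path
    return along + lateral + w_dev * deviation
```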

Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.

2022-11-03

Seamless Phase 2-3 Design: A Useful Strategy to Reduce the Sample Size for Dose Optimization

The traditional more-is-better dose selection paradigm, developed based on cytotoxic chemotherapeutics, is often problematic when applied to the development of novel molecularly targeted agents (e.g., kinase inhibitors, monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug Administration (FDA) initiated Project Optimus to reform the dose optimization and dose selection paradigm in oncology drug development and to call for more attention to benefit-risk considerations. We systematically investigated the operating characteristics of the seamless phase 2-3 design as a strategy for dose optimization, where in stage 1 (corresponding to phase 2) patients are randomized to multiple doses, with or without a control, and in stage 2 (corresponding to phase 3) the efficacy of the selected optimal dose is evaluated with a randomized concurrent control or historical control. Depending on whether the concurrent control is included and the type of endpoints used in stages 1 and 2, we describe four types of seamless phase 2-3 dose-optimization designs, which are suitable for different clinical settings. The statistical and design considerations that pertain to dose optimization are discussed. Simulation shows that dose optimization phase 2-3 designs are able to control the familywise type I error rate and yield appropriate statistical power with a substantially smaller sample size than the conventional approach. The sample size savings range from 16.6% to 27.3%, depending on the design and scenario, with a mean savings of 22.1%. Due to the interim dose selection, the phase 2-3 dose-optimization design is logistically and operationally more challenging and should be carefully planned and implemented to ensure trial integrity.
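
A minimal simulation sketch of such a design, assuming binary response endpoints, equal per-arm sample sizes, and a simple pick-the-winner stage 1 (all simplifications relative to the four designs in the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def seamless_2_3_trial(p_doses, p_control, n1=30, n2=150, alpha=0.025):
    """One simulated seamless phase 2-3 trial with a binary endpoint.

    Stage 1: n1 patients per dose arm; the dose with the highest observed
    response rate is carried forward.  Stage 2: n2 per arm on the selected
    dose and a concurrent control, compared with a one-sided z-test.
    Real designs may pool stage 1 data and must adjust for selection.
    """
    responses = [rng.binomial(n1, p) for p in p_doses]
    best = int(np.argmax(responses))
    x_t = rng.binomial(n2, p_doses[best])
    x_c = rng.binomial(n2, p_control)
    p_pool = (x_t + x_c) / (2 * n2)
    se = np.sqrt(2 * p_pool * (1 - p_pool) / n2)
    z = (x_t - x_c) / (n2 * se) if se > 0 else 0.0
    return z > stats.norm.ppf(1 - alpha)   # True = efficacy claimed

# Type I error check under the null: all doses equal to control
rejections = [seamless_2_3_trial([0.2, 0.2, 0.2], 0.2) for _ in range(2000)]
print("empirical type I error:", np.mean(rejections))
```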

Authors: Liyun Jiang, Ying Yuan.

2022-11-03

Fast and robust Bayesian Inference using Gaussian Processes with GPry

We present the GPry algorithm for fast Bayesian inference of general (non-Gaussian) posteriors with a moderate number of parameters. GPry does not need any pre-training or special hardware such as GPUs, and is intended as a drop-in replacement for traditional Monte Carlo methods for Bayesian inference. Our algorithm is based on generating a Gaussian Process surrogate model of the log-posterior, aided by a Support Vector Machine classifier that excludes extreme or non-finite values. An active learning scheme allows us to reduce the number of required posterior evaluations by two orders of magnitude compared to traditional Monte Carlo inference. Our algorithm allows for parallel evaluations of the posterior at optimal locations, further reducing wall-clock times. We significantly improve performance using properties of the posterior in our active learning scheme and for the definition of the GP prior. In particular, we account for the expected dynamical range of the posterior in different dimensionalities. We test our model against a number of synthetic and cosmological examples. GPry outperforms traditional Monte Carlo methods when the evaluation time of the likelihood (or the calculation of theoretical observables) is of the order of seconds; for evaluation times of over a minute it can perform inference in days that would take months using traditional methods. GPry is distributed as an open-source Python package (pip install gpry) and can also be found at https://github.com/jonaselgammal/GPry.
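
Rather than guessing GPry's own API, here is a sketch of the core idea — a GP surrogate of the log-posterior refined by an active-learning acquisition — written with scikit-learn on a toy Gaussian posterior:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def log_posterior(x):                    # toy 2-D Gaussian standing in for an
    return -0.5 * np.sum(x**2, axis=-1)  # expensive likelihood evaluation

X = rng.uniform(-4, 4, size=(8, 2))      # small initial design
y = log_posterior(X)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
for _ in range(40):                      # active-learning loop
    gp.fit(X, y)
    cand = rng.uniform(-4, 4, size=(512, 2))       # candidate locations
    mu, sigma = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(mu + 2.0 * sigma)]      # optimistic acquisition
    X = np.vstack([X, x_new])
    y = np.append(y, log_posterior(x_new))
# gp now approximates the log-posterior and can be evaluated cheaply
```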

Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.

2022-11-03

Competitive Kill-and-Restart Strategies for Non-Clairvoyant Scheduling

We consider the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting. While no non-preemptive algorithm is constant competitive, Motwani, Phillips, and Torng (SODA '93) proved that the simple preemptive round robin procedure is $2$-competitive and that no better competitive ratio is possible, initiating a long line of research focused on preemptive algorithms for generalized variants of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS '91) introduced kill-and-restart schedules, where running jobs may be killed and restarted from scratch later, and analyzed them for the makespan objective. However, to the best of our knowledge, this concept has never been considered for the total completion time objective in the non-clairvoyant model. We contribute to both models: First, we give for any $b > 1$ a tight analysis for the natural $b$-scaling kill-and-restart strategy for scheduling jobs without release dates, as well as for a randomized variant of it. This implies a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic algorithm and of $\approx 3.032$ for the randomized version. Second, we show that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is $2$-competitive for jobs released in an online fashion over time, matching the lower bound by Motwani et al. Using this result as well as the competitiveness of round robin for multiple machines, we prove performance guarantees of adaptations of the $b$-scaling algorithm to online release dates and unweighted jobs on identical parallel machines.
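
A sketch of the deterministic $b$-scaling strategy for jobs without release dates, assuming unit weights (the randomized variant and the tight analysis are in the paper):

```python
def b_scaling_total_completion_time(processing_times, b=2.0):
    """Run the b-scaling kill-and-restart strategy on one machine.

    In round k, every unfinished job is run from scratch with a budget of
    b**k; a job that exceeds its budget is killed, losing all progress,
    and is retried in the next round.  Returns the sum of completion
    times (unit weights).
    """
    remaining = list(range(len(processing_times)))
    t, total, k = 0.0, 0.0, 0
    while remaining:
        budget, survivors = b ** k, []
        for j in remaining:
            if processing_times[j] <= budget:
                t += processing_times[j]   # job finishes within its budget
                total += t
            else:
                t += budget                # killed; restarted in a later round
                survivors.append(j)
        remaining, k = survivors, k + 1
    return total

print(b_scaling_total_completion_time([3.0, 1.0, 7.0]))
```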

Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.

2022-11-03

Could Giant Pretrained Image Models Extract Universal Representations?

Frozen pretrained models have become a viable alternative to the pretraining-then-finetuning paradigm for transfer learning. However, with frozen models there are relatively few parameters available for adapting to downstream tasks, which is problematic in computer vision where tasks vary significantly in input/output format and the type of information that is of value. In this paper, we present a study of frozen pretrained models when applied to diverse and representative computer vision tasks, including object detection, semantic segmentation and video action recognition. From this empirical analysis, our work answers the questions of what pretraining task fits best with this frozen setting, how to make the frozen setting more flexible to various downstream tasks, and the effect of larger model sizes. We additionally examine the upper bound of performance using a giant frozen pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches competitive performance on a varied set of major benchmarks with only one shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7 top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to bring greater attention to this promising path of freezing pretrained image models.
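
The freezing mechanics are simple to sketch in PyTorch; since the 3B-parameter SwinV2-G is not distributed via torchvision, a small SwinV2 stands in below, and the head/loss are placeholders:

```python
import torch
import torchvision

backbone = torchvision.models.swin_v2_t(weights="DEFAULT")  # stand-in for SwinV2-G
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False            # frozen: no gradients, no finetuning

head = torch.nn.Linear(1000, 10)       # only the task-specific head trains
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(2, 3, 256, 256)
with torch.no_grad():                  # features from the frozen network
    feats = backbone(x)
loss = head(feats).sum()               # placeholder loss
loss.backward()
opt.step()
```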

Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.

2022-11-03

A Consistent Estimator for Confounding Strength

Regression on observational data can fail to capture a causal relationship in the presence of unobserved confounding. Confounding strength measures this mismatch, but estimating it in turn requires additional assumptions. A common assumption is the independence of causal mechanisms, which relies on concentration phenomena in high dimensions. While high dimensions enable the estimation of confounding strength, they also necessitate adapted estimators. In this paper, we derive the asymptotic behavior of the confounding strength estimator by Janzing and Sch\"olkopf (2018) and show that it is generally not consistent. We then use tools from random matrix theory to derive an adapted, consistent estimator.
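
A quick simulation of the opening claim — with an unobserved confounder, ordinary least squares stays biased no matter how much data is collected (the paper's confounding strength estimator is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 5
Z = rng.normal(size=(n, 1))                 # unobserved confounder
X = rng.normal(size=(n, d)) + Z             # confounder drives the covariates...
beta = rng.normal(size=d)
Y = X @ beta + 3.0 * Z[:, 0] + rng.normal(size=n)   # ...and the outcome

beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
print(beta_hat - beta)   # systematic bias that does not vanish with n
```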

Authors: Luca Rendsburg, Leena Chennuru Vankadara, Debarghya Ghoshdastidar, Ulrike von Luxburg.

2022-11-03

Martian Ionosphere Electron Density Prediction Using Bagged Trees

The availability of Martian atmospheric data provided by several Martian missions broadened the opportunity to investigate and study the conditions of the Martian ionosphere. As such, ionospheric models play a crucial part in improving our understanding of ionospheric behavior in response to different spatial, temporal, and space weather conditions. This work represents an initial attempt to construct an electron density prediction model of the Martian ionosphere using machine learning. The model targets the ionosphere at solar zenith angles ranging from 70 to 90 degrees, and as such only utilizes observations from the Mars Global Surveyor mission. The performance of different machine learning methods was compared in terms of root mean square error, coefficient of determination, and mean absolute error. The bagged regression trees method performed best out of all the evaluated methods. Furthermore, the optimized bagged regression trees model outperformed other Martian ionosphere models from the literature (MIRI and NeMars) in estimating the peak electron density value and the peak density height, in terms of root mean square error and mean absolute error.
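
A minimal sketch of the winning method, run on synthetic data since the Mars Global Surveyor profiles are not reproduced here; the features and units are illustrative only:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error

rng = np.random.default_rng(0)
# Hypothetical features: solar zenith angle (deg), altitude (km), F10.7 proxy
X = rng.uniform([70, 80, 60], [90, 250, 200], size=(5000, 3))
y = (1e11 * np.cos(np.radians(X[:, 0]))             # toy Chapman-like profile
     * np.exp(-((X[:, 1] - 130) / 40) ** 2)
     * (X[:, 2] / 100) + rng.normal(0, 1e9, 5000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)),
      "R2:", r2_score(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))
```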

Authors: Abdollah Masoud Darya, Noora Alameri, Muhammad Mubasshir Shaikh, Ilias Fernini.

2022-11-03

Augmenting photometric redshift estimates using spectroscopic nearest neighbours

As a consequence of galaxy clustering, close galaxies observed on the plane of the sky should be spatially correlated with a probability inversely proportional to their angular separation. In principle, this information can be used to improve photometric redshift estimates when spectroscopic redshifts are available for some of the neighbouring objects. Depending on the depth of the survey, however, such angular correlation is reduced by chance projections. In this work, we implement a deep learning model to distinguish between apparent and real angular neighbours by solving a classification task. We adopt a graph neural network architecture to tie together the photometry, the spectroscopy and the spatial information between neighbouring galaxies. We train and validate the algorithm on the data of the VIPERS galaxy survey, for which SED-fitting based photometric redshifts are also available. The model yields a confidence level for a pair of galaxies to be real angular neighbours, enabling us to disentangle chance superpositions in a probabilistic way. When objects for which no physical companion can be identified are excluded, all photometric redshift quality metrics improve significantly, confirming that their estimates were of lower quality. For our typical test configuration, the algorithm identifies a subset containing ~75% of high-quality photometric redshifts, for which the dispersion is reduced by as much as 50% (from 0.08 to 0.04), while the fraction of outliers reduces from 3% to 0.8%. Moreover, we show that the spectroscopic redshift of the angular neighbour with the highest detection probability provides an excellent estimate of the redshift of the target galaxy, comparable or even better than the corresponding template-fitting estimate.
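
A compact stand-in for the classifier — a plain MLP over per-pair features instead of the paper's graph neural network, with hypothetical feature names and sizes:

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    """Scores whether two projected galaxies are real angular neighbours.

    A plain MLP over per-pair features (photometry of both galaxies,
    angular separation, the neighbour's spectroscopic z) stands in for
    the paper's graph neural network, which additionally aggregates
    information over the whole neighbourhood graph.
    """
    def __init__(self, n_features=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, pair_features):
        return torch.sigmoid(self.net(pair_features))  # P(real neighbour)

model = PairClassifier()
probs = model(torch.randn(8, 12))   # confidence for 8 candidate pairs
```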

Authors: F. Tosone, M. S. Cagliari, L. Guzzo, B. R. Granett, A. Crespi.

2022-11-03

Formally Self-Adjoint Hamiltonian for the Hilbert-Pólya Conjecture

We construct a formally self-adjoint Hamiltonian whose eigenvalues correspond to the nontrivial zeros of the Riemann zeta function. We consider a two-dimensional Hamiltonian which couples the Berry-Keating Hamiltonian to the number operator on the half-line via a unitary transformation. We demonstrate that the unitary operator, which is composed of squeeze (dilation) operators and an exponential of the number operator, confines the eigenfunction of the Hamiltonian to one dimension as the squeezing parameter tends towards infinity. The Riemann zeta function appears at the boundary of the resulting confined wave function and vanishes as a result of the imposed boundary condition. If the formal argument presented here can be made more rigorous, particularly if it can be shown rigorously that the Hamiltonian remains self-adjoint under the imposed boundary condition, then our approach has the potential to imply that the Riemann hypothesis is true.
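For orientation, the Berry-Keating operator the construction starts from is the symmetrized dilation generator (the paper's two-dimensional coupled Hamiltonian is not reproduced here): $H_{\mathrm{BK}} = \tfrac{1}{2}\left(\hat{x}\hat{p} + \hat{p}\hat{x}\right)$ with $\hat{p} = -i\hbar\,d/dx$. In the Hilbert-Pólya program, the goal is to realize the nontrivial zeta zeros $\tfrac{1}{2} + i t_n$ through the eigenvalues $t_n$ of such a self-adjoint operator.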

Authors: Enderalp Yakaboylu.

2022-11-03

Disentangling complex current pathways in a metallic Ru/Co bilayer nanostructure using THz spectroscopy

Many modern spintronic technologies, such as spin valves, spin Hall applications, and spintronic THz emitters, are based on electrons crossing buried internal interfaces within metallic nanostructures. However, the complex current pathways within such nanostructures are difficult to disentangle using conventional experimental methods. Here, we measure the conductivity of a technologically relevant Ru/Co bilayer nanostructure in a contact-free fashion using THz time-domain spectroscopy. By applying an effective resistor network to the data, we resolve the complex current pathways within the nanostructure and determine the degree of electronic transparency of the internal interface between the Ru and Co nanolayers.
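
As a sketch of the resistor-network idea: for purely in-plane DC transport the two metallic layers act as parallel sheet conductances, and the interface matters once current crosses between them (the paper's network resolves that intermediate, partially transparent regime). The numbers below are illustrative, not the measured Ru/Co values:

```python
def sheet_conductance(sigma, thickness):
    """Sheet conductance of one metallic layer (siemens per square)."""
    return sigma * thickness

g_ru = sheet_conductance(1.4e7, 5e-9)    # thin Ru layer: sigma (S/m) * t (m)
g_co = sheet_conductance(1.6e7, 10e-9)   # Co layer underneath

# Opaque-interface limit: the layers conduct independently, in parallel.
g_parallel = g_ru + g_co
print(f"bilayer sheet conductance: {g_parallel:.3e} S")
```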

Authors: Nicolas S. Beermann, Savio Fabretti, Karsten Rott, Hassan A. Hafez, Günter Reiss, Dmitry Turchinovich.

2022-11-03

Demographics of the M-star Multiple Population in the Orion Nebula Cluster

We present updated results constraining multiplicity demographics for the stellar population of the Orion Nebula Cluster (ONC, a high-mass, high-density star-forming region), across primary masses 0.08-0.7M$_{\odot}$. Our study utilizes archival Hubble Space Telescope data obtained with the Advanced Camera for Surveys using multiple filters (GO-10246). Previous multiplicity surveys in low-mass, low-density associations like Taurus identify an excess of companions to low-mass stars roughly twice that of the Galactic field and find the mass ratio distribution consistent with the field. Previously, we found the companion frequency to low-mass stars in the ONC is consistent with the Galactic field over mass ratios of 0.6-1.0 and projected separations of 30-160au, without placing constraints on the mass ratio distribution. In this study, we investigate the companion population of the ONC with a double point-spread function (PSF) fitting algorithm sensitive to separations larger than 10au (0.025") using empirical PSF models. We identified 44 companions (14 new), and with a Bayesian analysis, estimate the companion frequency to low-mass stars in the ONC to be 0.13$^{+0.05}_{-0.03}$ and the power-law index of the mass ratio distribution to be 2.08$^{+1.03}_{-0.85}$ over all mass ratios and projected separations of 10-200au. We find the companion frequency in the ONC is consistent with the Galactic field population, likely resulting from high transient stellar density states, and find a probability of 0.002 that it is consistent with that of Taurus. We also find the ONC mass ratio distribution is consistent with the field and Taurus, potentially indicative of its primordial nature, a direct outcome of the star formation process.
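
The flavor of the Bayesian step can be sketched with a conjugate Beta-binomial model; the primary-sample size below is a made-up stand-in, and the paper's analysis additionally models detection incompleteness:

```python
import numpy as np
from scipy import stats

n_primaries = 300          # hypothetical number of surveyed primaries
n_companions = 44          # companions detected (14 of them new)

# Beta(1, 1) prior on the companion frequency, updated by the detections
posterior = stats.beta(1 + n_companions, 1 + n_primaries - n_companions)
lo, med, hi = posterior.ppf([0.16, 0.5, 0.84])
print(f"companion frequency = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
```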

Authors: Matthew De Furio, Christopher Liu, Michael R. Meyer, Megan Reiter, Adam Kraus, Trent Dupuy, John Monnier.

2022-11-03

The $X(3872)$, the $X(4014)$, and their bottom partners at finite temperature

The properties of the $X(3872)$ and its spin partner, the $X(4014)$, are studied both in vacuum and at finite temperature. Using an effective hadron theory based on the hidden-gauge Lagrangian, the $X(3872)$ is dynamically generated from the $s$-wave rescattering of a pair of pseudoscalar and vector charm mesons. By incorporating the thermal spectral functions of open charm mesons, the calculation is extended to finite temperature. Similarly, the properties of the $X(4014)$ are obtained out of the scattering of charm vector mesons. By applying heavy-quark flavor symmetry, the properties of their bottom counterparts in the axial-vector and tensor channels are also predicted. All the dynamically generated states show a decreasing mass and acquire an increasing decay width with temperature, following the trend observed in their meson constituents. These results are relevant in relativistic heavy-ion collisions at high energies, in analyses of the collective medium formed after hadronization or in femtoscopic studies, and can be tested in lattice-QCD calculations exploring the melting of heavy mesons at finite temperature.

Authors: Gloria Montaña, Angels Ramos, Laura Tolos, Juan M. Torres-Rincon.

2022-11-03

Quantum kinetic theory of nonlinear thermal current

We investigate the second-order nonlinear electronic thermal transport induced by a temperature gradient. We develop a quantum kinetic theory framework to describe thermal transport in the presence of a temperature gradient. Using this, we predict an intrinsic, scattering-time-independent nonlinear thermal current in addition to the known extrinsic nonlinear Drude and Berry curvature dipole contributions. We show that the intrinsic thermal current is determined by band geometric quantities and is non-zero only in systems where both space inversion and time-reversal symmetries are broken. We employ the developed theory to study the thermal response in tilted massive Dirac systems. We show that besides the different scattering time dependence, the various current contributions have distinct temperature dependence in the low-temperature limit. Our systematic and comprehensive theory for nonlinear thermal transport paves the way for future theoretical and experimental studies on intrinsic thermal responses.
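Schematically, the second-order response studied here can be organized by powers of the scattering time $\tau$ (the notation is illustrative, not the paper's): $j_a^{(2)} = \chi_{abc}\,\nabla_b T\,\nabla_c T$, with $\chi_{abc} = \chi^{\mathrm{Drude}}_{abc} + \chi^{\mathrm{BCD}}_{abc} + \chi^{\mathrm{int}}_{abc}$ scaling as $\tau^{2}$, $\tau^{1}$, and $\tau^{0}$ respectively, the intrinsic piece $\chi^{\mathrm{int}}$ being the scattering-time-independent contribution set by band-geometric quantities.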

Authors: Harsh Varshney, Kamal Das, Pankaj Bhalla, Amit Agarwal.

2022-11-03

Evidence of Periodic Variability in Gamma-ray Emitting Blazars with Fermi-LAT

It is well known that blazars can show variability on a wide range of time scales. This behavior can include periodicity in their $\gamma$-ray emission, whose clear detection remains an ongoing challenge, partly due to the inherent stochasticity of the processes involved and also the lack of adequately well-sampled light curves. We report on a systematic search for periodicity in a sample of 24 blazars, using twelve well-established methods applied to Fermi-LAT data. The sample comprises the most promising candidates selected from a previous study, extending the light curves from nine to twelve years and broadening the energy range analyzed from $>$1 GeV to $>$0.1 GeV. These improvements allow us to build a sample of blazars that display a period detected with global significance $\gtrsim3\,\sigma$: PKS 0454$-$234, S5 0716+714, OJ 014, PG 1553+113, and PKS 2155$-$304. Periodic $\gamma$-ray emission may be an indication of modulation of the jet power, particle composition, or geometry, but most likely originates in the accretion disk, possibly indicating the presence of a supermassive black hole binary system.
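
As a taste of one standard approach among the twelve methods, a Lomb-Scargle periodogram on a toy, unevenly sampled light curve; the hard part in practice — assessing global significance against stochastic red noise — is omitted here:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
# Toy 12-year gamma-ray flux series with a ~2.2-year modulation (arbitrary units)
t = np.sort(rng.uniform(0, 12 * 365.25, 400))                    # days
flux = (1 + 0.3 * np.sin(2 * np.pi * t / (2.2 * 365.25))
        + rng.normal(0, 0.2, t.size))

frequency, power = LombScargle(t, flux).autopower()
best_period = 1 / frequency[np.argmax(power)]
print(f"best-fit period: {best_period / 365.25:.2f} years")
```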

Authors: P. Peñil, M. Ajello, S. Buson, A. Domínguez, J. R. Westernacher-Schneider, J. Zrake.

2022-11-03

A Walk Through Some Newer Parts of Additive Combinatorics

In this survey paper we discuss some recent results and related open questions in additive combinatorics, in particular, questions about sumsets in finite abelian groups.
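
For readers new to the area: for subsets $A, B$ of an abelian group $G$, the sumset is $A+B = \{a+b : a \in A,\ b \in B\}$, and typical questions ask, for example, how small $|A+A|$ can be for a given $|A|$, or which elements of $G$ must appear in $A+A$.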

Authors: Bela Bajnok.