
Welcome to Byte Size Arxiv

Papers made digestible

2022-11-03

Along Similar Lines: Local Obstacle Avoidance for Long-term Autonomous Path Following

Our architecture simplifies the obstacle-perception problem to that of place-dependent change detection. While we use the method with VT&R, it can be generalized to suit arbitrary path-following applications.

Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves long-term autonomous path-following using topometric mapping and localization from a single rich sensor stream. In this paper, we improve the capabilities of a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in changing environments. Our architecture simplifies the obstacle-perception problem to that of place-dependent change detection. We then extend the behaviour of generic sample-based motion planners to better suit the teach-and-repeat problem structure by introducing a new edge-cost metric paired with a curvilinear planning space. The resulting planner generates naturally smooth paths that avoid local obstacles while minimizing lateral path deviation to best exploit prior terrain knowledge. While we use the method with VT&R, it can be generalized to suit arbitrary path-following applications. Experimental results from online run-time analysis, unit testing, and qualitative experiments on a differential drive robot show the promise of the technique for reliable long-term autonomous operation in complex unstructured environments.
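
The place-dependent change-detection idea reduces obstacle perception to comparing a live scan against the points mapped for the current place during the teach pass. A minimal sketch with illustrative 2D points and an illustrative threshold — not the paper's LiDAR pipeline:

```python
import math

def detect_changes(live_scan, map_points, threshold=0.3):
    """Flag live points farther than `threshold` from every mapped point.

    Brute-force nearest-neighbour search for clarity; a real system would
    query a k-d tree over the taught point cloud.
    """
    obstacles = []
    for p in live_scan:
        nearest = min(math.dist(p, q) for q in map_points)
        if nearest > threshold:
            obstacles.append(p)
    return obstacles

# Map points recorded during the teach pass (2D for illustration).
taught = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
# Repeat-pass scan: one point matches the map, one is a new obstacle.
scan = [(1.02, 0.05), (1.5, 1.0)]
print(detect_changes(scan, taught))  # -> [(1.5, 1.0)]
```

A real implementation would also filter flagged points for temporal consistency before handing them to the local planner.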

Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.

2022-11-03

Seamless Phase 2-3 Design: A Useful Strategy to Reduce the Sample Size for Dose Optimization

The statistical and design considerations that pertain to dose optimization are discussed. The sample size savings range from 16.6% to 27.3%, depending on the design and scenario, with a mean savings of 22.1%.

The traditional more-is-better dose selection paradigm, developed based on cytotoxic chemotherapeutics, is often problematic when applied to the development of novel molecularly targeted agents (e.g., kinase inhibitors, monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug Administration (FDA) initiated Project Optimus to reform the dose optimization and dose selection paradigm in oncology drug development and to call for more attention to benefit-risk considerations. We systematically investigated the operating characteristics of the seamless phase 2-3 design as a strategy for dose optimization, where in stage 1 (corresponding to phase 2) patients are randomized to multiple doses, with or without a control, and in stage 2 (corresponding to phase 3) the efficacy of the selected optimal dose is evaluated with a randomized concurrent control or historical control. Depending on whether the concurrent control is included and the type of endpoints used in stages 1 and 2, we describe four types of seamless phase 2-3 dose-optimization designs, which are suitable for different clinical settings. The statistical and design considerations that pertain to dose optimization are discussed. Simulation shows that dose-optimization phase 2-3 designs are able to control the familywise type I error rate and yield appropriate statistical power with a substantially smaller sample size than the conventional approach. The sample size savings range from 16.6% to 27.3%, depending on the design and scenario, with a mean savings of 22.1%. Due to the interim dose selection, the phase 2-3 dose-optimization design is logistically and operationally more challenging and should be carefully planned and implemented to ensure trial integrity.
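
The source of the savings can be sketched with a back-of-the-envelope calculation: in a seamless design, stage-1 patients treated at the selected dose (and on control) count towards the stage-2 analysis, so fewer new patients must be enrolled. The normal-approximation sample-size formula and all enrollment numbers below are textbook placeholders, not the four designs simulated in the paper:

```python
import math

def per_group_n(delta, sigma=1.0, z_alpha=1.96, z_beta=0.84):
    """Two-sample normal-approximation sample size per group
    (alpha = 0.05 two-sided, 80% power)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

n_stage1_per_dose = 30          # stage 1: patients per dose arm (illustrative)
n_phase3 = per_group_n(0.5)     # confirmatory comparison needs this many per arm

# Conventional: phase 2 (two dose arms) plus a separate phase 3 (dose + control).
conventional = 2 * n_stage1_per_dose + 2 * n_phase3
# Seamless: stage-1 patients on the selected dose and control are reused,
# so only the shortfall is newly enrolled in stage 2.
seamless = 2 * n_stage1_per_dose + 2 * max(0, n_phase3 - n_stage1_per_dose)

savings = 100 * (conventional - seamless) / conventional
print(conventional, seamless, f"{savings:.1f}% saved")
```

The exact percentage depends entirely on the chosen effect size and stage-1 arm sizes; the paper's 16.6%-27.3% range comes from its specific designs and scenarios.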

Authors: Liyun Jiang, Ying Yuan.

2022-11-03

Fast and robust Bayesian Inference using Gaussian Processes with GPry

We significantly improve performance using properties of the posterior in our active learning scheme and for the definition of the GP prior. In particular we account for the expected dynamical range of the posterior in different dimensionalities. We test our model against a number of synthetic and cosmological examples.

We present the GPry algorithm for fast Bayesian inference of general (non-Gaussian) posteriors with a moderate number of parameters. GPry does not need any pre-training or special hardware such as GPUs, and is intended as a drop-in replacement for traditional Monte Carlo methods for Bayesian inference. Our algorithm is based on generating a Gaussian Process surrogate model of the log-posterior, aided by a Support Vector Machine classifier that excludes extreme or non-finite values. An active learning scheme allows us to reduce the number of required posterior evaluations by two orders of magnitude compared to traditional Monte Carlo inference. Our algorithm allows for parallel evaluations of the posterior at optimal locations, further reducing wall-clock times. We significantly improve performance using properties of the posterior in our active learning scheme and for the definition of the GP prior. In particular we account for the expected dynamical range of the posterior in different dimensionalities. We test our model against a number of synthetic and cosmological examples. GPry outperforms traditional Monte Carlo methods when the evaluation time of the likelihood (or the calculation of theoretical observables) is of the order of seconds; for evaluation times of over a minute it can perform inference in days that would take months using traditional methods. GPry is distributed as an open source Python package (pip install gpry) and can also be found at https://github.com/jonaselgammal/GPry.
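
The core loop of GP-surrogate inference, of which GPry is a far more sophisticated instance, can be sketched in a few lines: fit a Gaussian Process to the log-posterior evaluations collected so far, and let an acquisition rule pick the next point to evaluate. Everything below — the 1D stand-in posterior, RBF kernel, UCB acquisition, and spacing filter — is an illustrative toy, not GPry's actual algorithm:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(X, y, xstar, ell=1.0, jitter=1e-6):
    """Posterior mean and variance of a zero-mean RBF-kernel GP at xstar."""
    k = lambda a, c: math.exp(-(a - c) ** 2 / (2 * ell ** 2))
    K = [[k(a, c) + (jitter if i == j else 0.0) for j, c in enumerate(X)]
         for i, a in enumerate(X)]
    ks = [k(a, xstar) for a in X]
    mean = sum(ki * ai for ki, ai in zip(ks, solve(K, y)))
    var = max(0.0, 1.0 - sum(ki * vi for ki, vi in zip(ks, solve(K, ks))))
    return mean, var

log_post = lambda x: -0.5 * x * x   # cheap stand-in for an expensive log-posterior

X = [-3.0, 2.5]                     # initial evaluations
y = [log_post(x) for x in X]
grid = [i / 10 for i in range(-40, 41)]
for _ in range(6):
    def acq(x):                     # upper confidence bound on the log-posterior
        m, v = gp_predict(X, y, x)
        return m + 2.0 * math.sqrt(v)
    # avoid re-evaluating right next to known points
    candidates = [x for x in grid if all(abs(x - xi) > 0.3 for xi in X)]
    x_new = max(candidates, key=acq)
    X.append(x_new)
    y.append(log_post(x_new))

best = max(grid, key=lambda x: gp_predict(X, y, x)[0])
print(len(X), best)                 # surrogate mode after only 8 evaluations
```

The point of the exercise: the expensive function is called only 8 times, while the surrogate can be queried everywhere — the mechanism behind GPry's claimed two-orders-of-magnitude reduction in posterior evaluations.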

Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.

2022-11-03

Competitive Kill-and-Restart Strategies for Non-Clairvoyant Scheduling

We consider the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting. However, to the best of our knowledge, this concept has never been considered for the total completion time objective in the non-clairvoyant model. This implies a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic algorithm and of $\approx 3.032$ for the randomized version.

We consider the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting. While no non-preemptive algorithm is constant competitive, Motwani, Phillips, and Torng (SODA '93) proved that the simple preemptive round robin procedure is $2$-competitive and that no better competitive ratio is possible, initiating a long line of research focused on preemptive algorithms for generalized variants of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS '91) introduced kill-and-restart schedules, where running jobs may be killed and restarted from scratch later, and analyzed them for the makespan objective. However, to the best of our knowledge, this concept has never been considered for the total completion time objective in the non-clairvoyant model. We contribute to both models: First, we give for any $b > 1$ a tight analysis for the natural $b$-scaling kill-and-restart strategy for scheduling jobs without release dates, as well as for a randomized variant of it. This implies a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic algorithm and of $\approx 3.032$ for the randomized version. Second, we show that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is $2$-competitive for jobs released in an online fashion over time, matching the lower bound by Motwani et al. Using this result as well as the competitiveness of round robin for multiple machines, we prove performance guarantees of adaptations of the $b$-scaling algorithm to online release dates and unweighted jobs on identical parallel machines.
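
The $b$-scaling kill-and-restart strategy is easy to simulate: in round $k$ every unfinished job is run from scratch with a budget of $b^k$ time units and killed if it exceeds it. The sketch below (single machine, unit weights, one fixed within-round order — a simplification of the strategy analyzed in the paper) compares its total completion time against the clairvoyant SPT optimum:

```python
def b_scaling(proc_times, b=2.0):
    """Simulate a b-scaling kill-and-restart schedule on one machine.

    Round k gives every unfinished job a budget of b**k, run from scratch;
    jobs that do not finish are killed and retried in the next round with a
    b-times larger budget. Processing times are unknown to the scheduler.
    """
    remaining = list(range(len(proc_times)))
    t, completions, k = 0.0, {}, 0
    while remaining:
        budget = b ** k
        still_open = []
        for j in remaining:
            if proc_times[j] <= budget:
                t += proc_times[j]          # job finishes within its budget
                completions[j] = t
            else:
                t += budget                 # work is lost when the job is killed
                still_open.append(j)
        remaining, k = still_open, k + 1
    return sum(completions.values())

def spt_optimum(proc_times):
    """Clairvoyant optimum for total completion time: SPT order."""
    t, total = 0.0, 0.0
    for p in sorted(proc_times):
        t += p
        total += t
    return total

jobs = [3.0, 1.0, 7.0, 2.0]
alg, opt = b_scaling(jobs), spt_optimum(jobs)
print(alg, opt, alg / opt)   # ratio well below the 6.197 worst-case guarantee
```

Short jobs are discovered cheaply in early rounds, so the empirical ratio on benign instances is typically far below the $(1+3\sqrt{3})$ worst-case bound.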

Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.

2022-11-03

Could Giant Pretrained Image Models Extract Universal Representations?

Frozen pretrained models have become a viable alternative to the pretraining-then-finetuning paradigm for transfer learning. With this work, we hope to bring greater attention to this promising path of freezing pretrained image models.

Frozen pretrained models have become a viable alternative to the pretraining-then-finetuning paradigm for transfer learning. However, with frozen models there are relatively few parameters available for adapting to downstream tasks, which is problematic in computer vision where tasks vary significantly in input/output format and the type of information that is of value. In this paper, we present a study of frozen pretrained models when applied to diverse and representative computer vision tasks, including object detection, semantic segmentation and video action recognition. From this empirical analysis, our work answers the questions of what pretraining task fits best with this frozen setting, how to make the frozen setting more flexible to various downstream tasks, and the effect of larger model sizes. We additionally examine the upper bound of performance using a giant frozen pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches competitive performance on a varied set of major benchmarks with only one shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7 top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to bring greater attention to this promising path of freezing pretrained image models.
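
The frozen setting itself is simple to illustrate: the pretrained backbone is held fixed and only a small head adapts to the downstream task. A toy, pure-Python stand-in (a fixed feature map plus a trainable linear head — nothing like a SwinV2-scale network):

```python
def backbone(x):
    """'Frozen' feature extractor: fixed, never updated during adaptation."""
    return [1.0, x, x * x]

def train_head(data, lr=0.01, steps=2000):
    """Adapt only the small linear head on top of the frozen features (SGD)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        for x, y in data:
            f = backbone(x)
            err = sum(wi * fi for wi, fi in zip(w, f)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

# Downstream task: y = 2x^2 - 1, expressible over the frozen features.
data = [(x / 10, 2 * (x / 10) ** 2 - 1) for x in range(-10, 11)]
w = train_head(data)
print([round(wi, 2) for wi in w])  # head recovers roughly [-1, 0, 2]
```

When the frozen features already encode what the task needs, the tiny head suffices; the paper's question is which pretraining tasks and model sizes make that true across detection, segmentation, and video recognition.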

Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.

2022-11-03

Average Mixing in Quantum Walks of Reversible Markov Chains

The Szegedy quantum walk is a discrete time quantum walk model which defines a quantum analogue of any Markov chain. We prove a formula for our mixing matrix in terms of the spectral decomposition of the Markov chain and show a relationship with the mixing matrix of a continuous quantum walk on the chain. In particular, we prove that average uniform mixing in the continuous walk implies average uniform mixing in the Szegedy walk.

The Szegedy quantum walk is a discrete time quantum walk model which defines a quantum analogue of any Markov chain. The long-term behavior of the quantum walk can be encoded in a matrix called the average mixing matrix, whose columns give the limiting probability distribution of the walk given an initial state. We define a version of the average mixing matrix of the Szegedy quantum walk which allows us to more readily compare the limiting behavior to that of the chain it quantizes. We prove a formula for our mixing matrix in terms of the spectral decomposition of the Markov chain and show a relationship with the mixing matrix of a continuous quantum walk on the chain. In particular, we prove that average uniform mixing in the continuous walk implies average uniform mixing in the Szegedy walk. We conclude by giving examples of Markov chains of arbitrarily large size which admit average uniform mixing in both the continuous and Szegedy quantum walk.
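
The average mixing matrix of a continuous quantum walk can be checked numerically on the smallest example. For the two-vertex graph $K_2$ (chosen here purely for illustration; the Szegedy construction itself is not reproduced), the time average of $|U(t)_{jk}|^2$ with $U(t)=e^{-\mathrm{i}tA}$ tends to $1/2$ in every entry — average uniform mixing:

```python
import math

def avg_mixing_K2(T=200.0, steps=200000):
    """Time-average |U(t)_{jk}|^2 for U(t) = exp(-i t A) on K_2.

    Since A^2 = I for the adjacency matrix [[0,1],[1,0]], we have
    U(t) = cos(t) I - i sin(t) A, so the entries can be written down
    directly instead of exponentiating a matrix.
    """
    acc = [[0.0, 0.0], [0.0, 0.0]]
    dt = T / steps
    for s in range(steps):
        t = (s + 0.5) * dt                  # midpoint rule
        c2, s2 = math.cos(t) ** 2, math.sin(t) ** 2
        acc[0][0] += c2; acc[1][1] += c2
        acc[0][1] += s2; acc[1][0] += s2
    return [[a / steps for a in row] for row in acc]

M = avg_mixing_K2()
print([[round(x, 3) for x in row] for row in M])  # each entry tends to 1/2
```

The paper's result is that this kind of average uniform mixing in the continuous walk carries over to the Szegedy walk of the corresponding chain.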

Authors: Julien Sorci.

2022-11-03

White dwarf binaries suggest a common envelope efficiency $\alpha \sim 1/3$

Assuming He-core WDs with progenitors of 0.9 - 2.0 $M_\odot$, we find $\alpha_{\rm CE} \! \sim \! 0.2-0.4$ is consistent with each system we model.

Common envelope (CE) evolution, which is crucial in creating short period binaries and associated astrophysical events, can be constrained by reverse modeling of such binaries' formation histories. Through analysis of a sample of well-constrained white dwarf (WD) binaries with low-mass primaries (7 eclipsing double WDs, 2 non-eclipsing double WDs, 1 WD-brown dwarf), we estimate the CE energy efficiency $\alpha_{\rm{CE}}$ needed to unbind the hydrogen envelope. We use grids of He- and CO-core WD models to determine the masses and cooling ages that match each primary WD's radius and temperature. Assuming gravitational wave-driven orbital decay, we then calculate the associated ranges in post-CE orbital period. By mapping WD models to a grid of red giant progenitor stars, we determine the total envelope binding energies and possible orbital periods at the point CE evolution is initiated, thereby constraining $\alpha_{\rm CE}$. Assuming He-core WDs with progenitors of 0.9 - 2.0 $M_\odot$, we find $\alpha_{\rm CE} \! \sim \! 0.2-0.4$ is consistent with each system we model. Significantly higher values of $\alpha_{\rm{CE}}$ are required for higher mass progenitors and for CO-core WDs, so these scenarios are deemed unlikely. Our values are mostly consistent with previous studies of post-CE WD binaries, and they suggest a nearly constant and low envelope ejection efficiency for CE events that produce He-core WDs.
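
The energy formalism behind $\alpha_{\rm CE}$ is a one-line budget: the efficiency is the ratio of the envelope binding energy to the orbital energy released during the spiral-in. The sketch below uses the standard $\lambda$-parametrised binding energy with made-up but representative system parameters — not the WD models or progenitor grids of the paper:

```python
G = 6.674e-11                         # m^3 kg^-1 s^-2
MSUN, RSUN, AU = 1.989e30, 6.957e8, 1.496e11

def orbital_energy(m1, m2, a):
    """Orbital energy of a binary with semi-major axis a (joules)."""
    return -G * m1 * m2 / (2 * a)

def alpha_ce(m_donor, m_core, m_comp, r_donor, a_i, a_f, lam=1.0):
    """CE efficiency: fraction of the released orbital energy needed
    to unbind the envelope, E_bind = G M_donor M_env / (lambda R)."""
    m_env = m_donor - m_core
    e_bind = G * m_donor * m_env / (lam * r_donor)
    de_orb = orbital_energy(m_core, m_comp, a_f) - orbital_energy(m_donor, m_comp, a_i)
    return e_bind / abs(de_orb)

# Hypothetical system: 1.5 Msun giant (0.3 Msun He core, 100 Rsun radius)
# spiralling in with a 0.4 Msun companion from 1 AU to 1 Rsun.
print(round(alpha_ce(1.5 * MSUN, 0.3 * MSUN, 0.4 * MSUN,
                     100 * RSUN, AU, RSUN), 2))  # roughly 0.3 for these inputs
```

The paper's reverse modeling does this far more carefully — matching WD radii and temperatures to cooling models and rewinding gravitational-wave decay — but the energy bookkeeping is the same.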

Authors: Peter Scherbak, Jim Fuller.

2022-11-03

A Retrospective Study on the Investigation of Potential Clinical Benefits of Online Adaptive Proton Therapy for Head and Neck Cancer

Online adaptive proton therapy (APT) is theoretically an ideal solution, but it remains challenging for proton clinics. Although multiple groups have been endeavoring to develop online APT technology, there is a concern in the radiotherapy community about the necessity of online APT because of its unknown impact on treatment outcomes. The cumulative dose of simulated online APT courses was compared to actual offline APT courses and the initially designed treatment plan dose. Future studies are needed to help identify the patients with large potential benefits prior to treatment to conserve scarce clinical resources.

Online adaptive proton therapy (APT) is theoretically an ideal solution, but it remains challenging for proton clinics. Although multiple groups have been endeavoring to develop online APT technology, there is a concern in the radiotherapy community about the necessity of online APT because of its unknown impact on treatment outcomes. Hence, we have performed a retrospective study to investigate the potential clinical effects of online APT for head and neck (HN) cancer patients relative to the current offline APT via simulations. To mimic an online APT treatment course, we recalculated and evaluated the actual dose of the current treatment plan on the patient's actual treatment anatomy captured by cone beam CT for each fraction. The cumulative dose of simulated online APT courses was compared to actual offline APT courses and the initially designed treatment plan dose. For patients 1 and 2, the simulated online ART course maintained relatively higher CTV dose coverage than the offline ART course, particularly for CTV-Low, which led to an improvement of 2.66% and 4.52% in TCP of CTV-Low. For patients 3 and 4, with clinically comparable CTV dose coverages, the simulated online ART course achieved better OAR sparing than the offline ART course. The mean doses of the right parotid and oral cavity were decreased from 29.52 Gy relative biological effectiveness (RBE) and 41.89 Gy RBE to 22.16 Gy RBE and 34.61 Gy RBE for patient 3, leading to a reduction of 1.67% and 3.40% in NTCP for the two organs. Compared to the current clinical practice, this retrospective study indicated that online APT tends to spare more normal tissues by achieving the clinical goal with merely half of the positional uncertainty margin. Future studies are needed to help identify the patients with large potential benefits prior to treatment to conserve scarce clinical resources.

Authors: Chih-Wei Chang, Duncan Bohannon, Zhen Tian, Yinan Wang, Mark W. Mcdonald, David S. Yu, Tian Liu, Jun Zhou, Xiaofeng Yang.

2022-11-03

Convergence of the logarithm of the characteristic polynomial of unitary Brownian motion to the Gaussian free field

This is the natural dynamical analogue of the result for a fixed time by Hughes, Keating and O'Connell [1]. In the course of this research we also proved a Wick-type identity, which we include in this paper, as it might be of independent interest.

We prove that the real and imaginary parts of the logarithm of the characteristic polynomial of unitary Brownian motion converge to Gaussian free fields on the cylinder, as the matrix dimension goes to infinity. This is the natural dynamical analogue of the result for a fixed time by Hughes, Keating and O'Connell [1]. Further it complements a result by Spohn [2] on linear statistics of unitary Brownian motion, and a recent result by Bourgade and Falconet [3] connecting the characteristic polynomial of unitary Brownian motion to a Gaussian multiplicative chaos measure. In the course of this research we also proved a Wick-type identity, which we include in this paper, as it might be of independent interest.

Authors: Johannes Forkel, Isao Sauzedde.

2022-11-03

Optically Induced Picosecond Lattice Compression in the Dielectric Component of a Strongly Coupled Ferroelectric/Dielectric Superlattice

The depolarization-field-screening-driven expansion is separate from a photoacoustic pulse launched from the bottom electrode on which the superlattice was epitaxially grown. The magnitude of expansion in BaTiO3 layers is larger than the contraction in CaTiO3. The depolarization-field-screening-driven polarization reduction in the CaTiO3 layers points to a new direction for the manipulation of polarization in the component layers of a strongly coupled ferroelectric/dielectric superlattice.

Above-bandgap femtosecond optical excitation of a ferroelectric/dielectric BaTiO3/CaTiO3 superlattice leads to structural responses that are a consequence of the screening of the strong electrostatic coupling between the component layers. Time-resolved x-ray free-electron laser diffraction shows that the structural response to optical excitation includes a net lattice expansion of the superlattice consistent with depolarization-field screening driven by the photoexcited charge carriers. The depolarization-field-screening-driven expansion is separate from a photoacoustic pulse launched from the bottom electrode on which the superlattice was epitaxially grown. The distribution of diffracted intensity of superlattice x-ray reflections indicates that the depolarization-field-screening-induced strain includes a photoinduced expansion in the ferroelectric BaTiO3 and a contraction in CaTiO3. The magnitude of expansion in BaTiO3 layers is larger than the contraction in CaTiO3. The difference in the magnitude of depolarization-field-screening-driven strain in the BaTiO3 and CaTiO3 components can arise from the contribution of the oxygen octahedral rotation patterns at the BaTiO3/CaTiO3 interfaces to the polarization of CaTiO3. The depolarization-field-screening-driven polarization reduction in the CaTiO3 layers points to a new direction for the manipulation of polarization in the component layers of a strongly coupled ferroelectric/dielectric superlattice.

Authors: Deepankar Sri Gyan, Hyeon Jun Lee, Youngjun Ahn, Jerome Carnis, Tae Yeon Kim, Sanjith Unithrattil, Jun Young Lee, Sae Hwan Chun, Sunam Kim, Intae Eom, Minseok Kim, Sang-Youn Park, Kyung Sook Kim, Ho Nyung Lee, Ji Young Jo, Paul G. Evans.

2022-11-03

To spike or not to spike: the whims of the Wonham filter in the strong noise regime

We are interested in the weak noise regime for the observation equation. In particular, we demonstrate that there is a sharp phase transition between a spiking regime and a regime with perfect smoothing.

We study the celebrated Shiryaev-Wonham filter in its historical setup of Wonham (1964) where the hidden Markov jump process has two states. We are interested in the weak noise regime for the observation equation. Interestingly, this becomes a strong noise regime for the filtering equations. Earlier results of the authors show the appearance of spikes in the filtered process, akin to a metastability phenomenon. This paper is aimed at understanding the smoothed optimal filter, which is relevant for any system with feedback. In particular, we demonstrate that there is a sharp phase transition between a spiking regime and a regime with perfect smoothing.
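
The spiking behaviour is easy to reproduce numerically. Below is an Euler-Maruyama simulation of the two-state Shiryaev-Wonham filter with illustrative rates and noise level; the clipping safeguard is a numerical convenience, not part of the theory:

```python
import math, random

def simulate_wonham(T=20.0, dt=1e-4, lam=1.0, mu=1.0, eps=0.1, seed=0):
    """Euler-Maruyama simulation of the two-state Shiryaev-Wonham filter.

    Hidden chain X jumps 0->1 at rate lam and 1->0 at rate mu; we observe
    dY = X dt + eps dW. Small eps is the weak-observation-noise regime,
    which becomes a strong-noise regime for the filtering SDE and
    produces the spikes studied in the paper.
    """
    rng = random.Random(seed)
    x, pi, path = 0, 0.5, []
    for _ in range(int(T / dt)):
        # hidden Markov jump process
        if rng.random() < (lam if x == 0 else mu) * dt:
            x = 1 - x
        dW = rng.gauss(0.0, math.sqrt(dt))
        dY = x * dt + eps * dW
        # Wonham filter for pi_t = P(X_t = 1 | observations up to t)
        drift = lam * (1 - pi) - mu * pi
        gain = pi * (1 - pi) / eps ** 2
        pi += drift * dt + gain * (dY - pi * dt)
        pi = min(1.0, max(0.0, pi))     # numerical safeguard
        path.append(pi)
    return path

path = simulate_wonham()
print(min(path), max(path))  # the filter ranges over nearly all of [0, 1]
```

Shrinking `eps` further makes the excursions of the filter away from the current state increasingly spike-like, which is the regime where the smoothing-vs-spiking phase transition is analysed.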

Authors: Bernardin Cédric, Chhaibi Reda, Najnudel Joseph, Pellegrini Clément.

2022-11-03

Spectral properties of 1D extended Hubbard model from bosonization and time-dependent variational principle: applications to 1D cuprate

Recent ARPES experiments on doped 1D cuprates revealed the importance of effective near-neighbor (NN) attractions in explaining certain features in spectral functions. From state-of-the-art TDVP calculations, we find that the spectral weights of the holon-folding and $3k_F$ branches evolve oppositely as a function of $V$. This peculiar dichotomy may be explained in bosonization analysis from the opposite dependence on the Luttinger parameter $K_{\rho}$ of the exponents that determine the spectral weights.

Recent ARPES experiments on doped 1D cuprates revealed the importance of effective near-neighbor (NN) attractions in explaining certain features in spectral functions. Here we investigate spectral properties of the extended Hubbard model with the on-site repulsion $U$ and NN interaction $V$, by employing bosonization analysis and high-precision time-dependent variational principle (TDVP) calculations of the model on a 1D chain with up to 300 sites. From state-of-the-art TDVP calculations, we find that the spectral weights of the holon-folding and $3k_F$ branches evolve oppositely as a function of $V$. This peculiar dichotomy may be explained in bosonization analysis from the opposite dependence on the Luttinger parameter $K_{\rho}$ of the exponents that determine the spectral weights. Moreover, our TDVP calculations of models with fixed $U=8t$ and different $V$ show that $V\approx -1.7t$ may fit the experimental results best, indicating a moderate effective NN attraction in 1D cuprates that might provide some hints towards understanding superconductivity in 2D cuprates.

Authors: Hao-Xin Wang, Yi-Ming Wu, Yi-Fan Jiang, Hong Yao.

2022-11-03

Improved muon decay simulation with McMule and Geant4

The hunt for such elusive signals requires accurate simulations to characterise the detector response and estimate the experimental sensitivity.

The physics programme of the MEG II experiment can be extended with the search for new invisible particles produced in rare muon decays. The hunt for such elusive signals requires accurate simulations to characterise the detector response and estimate the experimental sensitivity. This work presents an improved simulation of muon decay in MEG II, based on McMule and Geant4.

Authors: A. Gurgone, A. Papa, P. Schwendimann, A. Signer, Y. Ulrich, A. M. Baldini, F. Cei, M. Chiappini, M. Francesconi, L. Galli, M. Grassi, D. Nicolò, G. Signorelli.

2022-11-03

Optimal transport for a global event description at high-intensity hadron colliders

The CERN Large Hadron Collider was built to uncover fundamental particles and their interactions at the energy frontier. We propose an algorithm with a greatly enhanced capability of disentangling individual proton collisions to obtain a new global event description, considerably improving over the current state-of-the-art.

The CERN Large Hadron Collider was built to uncover fundamental particles and their interactions at the energy frontier. Upon entering its High Luminosity phase at the end of this decade, the unprecedented interaction rates when colliding two proton bunches will pose a significant challenge to the reconstruction algorithms of experiments such as ATLAS and CMS. We propose an algorithm with a greatly enhanced capability of disentangling individual proton collisions to obtain a new global event description, considerably improving over the current state-of-the-art. Employing a metric inspired by optimal transport problems as the cost function of a graph neural network enhanced with attention mechanisms, our algorithm is able to compare two particle collections with different noise levels, thereby learning to correctly flag particles originating from the main proton interaction amidst products from up to 200 simultaneous pileup collisions. The adoption of such an algorithm will lead to a quasi-global sensitivity improvement for searches for new physics phenomena and will enable precision measurements at the percent level in the High Luminosity era of the Large Hadron Collider.
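
The optimal-transport ingredient can be illustrated in isolation: an entropy-regularised Sinkhorn approximation of the transport cost between two particle collections, which is small when the collections nearly match and large otherwise. This shows only the metric idea — the paper's actual model is an attention-based graph neural network trained with such a cost, and all points and parameters below are illustrative:

```python
import math

def sinkhorn_distance(xs, ys, reg=0.1, iters=200):
    """Entropy-regularised optimal transport cost between two point sets
    with uniform weights, computed by plain Sinkhorn iterations."""
    n, m = len(xs), len(ys)
    C = [[math.dist(p, q) for q in ys] for p in xs]       # ground cost
    K = [[math.exp(-c / reg) for c in row] for c_row, row in zip(C, C)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [(1 / n) / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [(1 / m) / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # transport plan P_ij = u_i K_ij v_j; report its cost <P, C>
    return sum(u[i] * K[i][j] * v[j] * C[i][j]
               for i in range(n) for j in range(m))

# Hard-scatter-like particles vs. the same set perturbed slightly by 'pileup'.
clean = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
shifted = [(0.1, 0.0), (1.1, 0.0), (0.1, 1.0)]
print(round(sinkhorn_distance(clean, shifted), 3))          # small: near match
print(round(sinkhorn_distance(clean, [(5.0, 5.0)] * 3), 3)) # large: far apart
```

Because the cost degrades smoothly as the two collections drift apart, it makes a usable training signal for learning to separate hard-scatter particles from pileup.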

Authors: Loukas Gouskos, Fabio Iemmi, Sascha Liechti, Benedikt Maier, Vinicius Mikuni, Huilin Qu.

2022-11-03

Neutrino Origin of LHAASO's 18 TeV GRB221009A Photon

In this paper we provide the first neutrino-related explanation of the most energetic 18 TeV event reported by LHAASO. We find that the minimal viable scenario involves both mixing and a transition magnetic moment portal between light and sterile neutrinos. Our explanation of this event, while being consistent with the terrestrial constraints, points to non-standard cosmology.

The LHAASO collaboration detected photons with energy above 10 TeV from the most recent gamma-ray burst (GRB), GRB221009A. Given the redshift of this event, $z\sim 0.15$, photons of such energy are expected to interact with the diffuse extragalactic background light (EBL) well before reaching Earth. In this paper we provide the first neutrino-related explanation of the most energetic 18 TeV event reported by LHAASO. We find that the minimal viable scenario involves both mixing and a transition magnetic moment portal between light and sterile neutrinos. The production of sterile neutrinos occurs efficiently via mixing while the transition magnetic moment portal governs the decay rate in the parameter space where tree-level decays via mixing to non-photon final states are suppressed. Our explanation of this event, while being consistent with the terrestrial constraints, points to non-standard cosmology.

Authors: Vedran Brdar, Ying-Ying Li.