Papers made digestible
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
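To make the curvilinear-planner idea concrete, here is a toy Python sketch of an edge-cost metric in a path-aligned (curvilinear) frame; the state layout and weighting below are illustrative assumptions, not the paper's actual metric.

```python
import math

def edge_cost(p, q, w_lateral=5.0):
    """Hypothetical edge cost between states p = (s, l) and q = (s, l) in a
    curvilinear frame: s is arc length along the taught path, l is lateral
    offset from it.  Euclidean edge length plus a penalty on lateral offset
    keeps sampled paths close to the taught route."""
    ds, dl = q[0] - p[0], q[1] - p[1]
    length = math.hypot(ds, dl)
    # Penalize the average lateral offset along the edge, scaled by length.
    lateral = 0.5 * abs(p[1] + q[1])
    return length + w_lateral * lateral * length

# An edge hugging the taught path (l = 0) is cheaper than a parallel edge
# of identical length offset to the side.
on_path = edge_cost((0.0, 0.0), (1.0, 0.0))
offset = edge_cost((0.0, 0.5), (1.0, 0.5))
```

With this shape of cost, a sample-based planner prefers detours that return to the taught path as soon as the obstacle allows.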
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development and call for more
attention to benefit-risk considerations.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulation shows that dose optimization phase
2-3 designs are able to control the familywise type I error rates and yield
appropriate statistical power with substantially smaller sample size than the
conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
Authors: Liyun Jiang, Ying Yuan.
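The source of the sample-size savings is that stage-1 patients at the selected dose can be reused in the final analysis. A toy accounting sketch (the arm sizes below are hypothetical, not the paper's simulation settings):

```python
def seamless_savings(n_stage1_per_arm, n_doses, n_stage2_per_arm, control=True):
    """Percent sample-size saving of a seamless phase 2-3 design over
    running a separate phase 2 and phase 3 (toy accounting only)."""
    arms1 = n_doses + (1 if control else 0)
    # Conventional: phase 2 over all arms, then a fresh two-arm phase 3.
    conventional = arms1 * n_stage1_per_arm + 2 * n_stage2_per_arm
    # Seamless: stage-1 patients at the selected dose (and the control,
    # if present) are carried into the final analysis.
    reused = (2 if control else 1) * n_stage1_per_arm
    seamless = arms1 * n_stage1_per_arm + max(0, 2 * n_stage2_per_arm - reused)
    return 100.0 * (conventional - seamless) / conventional

# Hypothetical example: 3 doses plus control, 30 patients per stage-1 arm,
# 100 patients per arm in stage 2.
saving = seamless_savings(30, 3, 100)
```

The actual savings reported in the paper also depend on the interim selection rule and endpoint types, which this sketch ignores.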
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry does not
need any pre-training or special hardware such as GPUs, and is intended as a
drop-in replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
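A minimal sketch of the surrogate-plus-active-learning loop on a 1-D toy posterior, using a hand-rolled RBF Gaussian Process rather than GPry's actual machinery (the kernel, acquisition rule, and all settings here are illustrative assumptions):

```python
import numpy as np

def log_post(x):
    """Toy 1-D log-posterior (unnormalized standard normal)."""
    return -0.5 * x**2

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_fit_predict(X, y, Xs, noise=1e-8):
    """Zero-mean GP regression: posterior mean and variance at Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mu, np.maximum(var, 0.0)

# Active learning: starting from 2 evaluations, repeatedly evaluate the
# true log-posterior where the surrogate is both uncertain and predicts
# high posterior mass, mimicking the spirit of GPry's acquisition.
X = np.array([-3.0, 3.0])
y = log_post(X)
grid = np.linspace(-4, 4, 201)
for _ in range(8):
    mu, var = gp_fit_predict(X, y, grid)
    acq = np.sqrt(var) * np.exp(mu)   # explore uncertain, high-value regions
    x_next = grid[np.argmax(acq)]
    X = np.append(X, x_next)
    y = np.append(y, log_post(x_next))

mu, var = gp_fit_predict(X, y, grid)
```

GPry itself additionally uses the SVM classifier to exclude non-finite regions and proposes batches of points for parallel evaluation; none of that is sketched here.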
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
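The deterministic $b$-scaling strategy is easy to state in code; this toy single-machine simulation (with $b = 2$ and made-up processing times, not an instance from the paper) shows the kill-and-restart accounting, where killed work is lost:

```python
def b_scaling_schedule(p_times, b=2.0):
    """Sketch of the b-scaling kill-and-restart strategy, non-clairvoyant:
    in round k every unfinished job is run for b**k time units; a job that
    does not complete is killed and must later be restarted from scratch."""
    completion = {}
    t, k = 0.0, 0
    while len(completion) < len(p_times):
        quantum = b ** k
        for j, p in enumerate(p_times):
            if j in completion:
                continue
            if p <= quantum:
                t += p              # job finishes within this quantum
                completion[j] = t
            else:
                t += quantum        # work is lost when the job is killed
        k += 1
    return completion

p = [1.0, 3.0, 7.0]
comp = b_scaling_schedule(p)
total = sum(comp.values())
# Clairvoyant non-preemptive optimum: shortest processing time first.
spt = sorted(p)
opt = sum(sum(spt[:i + 1]) for i in range(len(spt)))
```

On this instance the ratio is 2; the paper's tight analysis bounds the worst case over all weighted instances by $1+3\sqrt{3}\approx 6.197$.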
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and the effect of larger model sizes. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
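One practical appeal of the frozen setting is that many tasks share a single backbone rather than each storing a finetuned copy. A back-of-the-envelope sketch (the head sizes below are invented for illustration):

```python
# Toy parameter accounting, framework-agnostic: with a frozen backbone,
# only the task-specific heads are trainable and stored per task.
backbone_params = 3_000_000_000          # e.g. a SwinV2-G-scale backbone
head_params = {"detection": 40_000_000,
               "segmentation": 30_000_000,
               "action": 20_000_000}     # illustrative head sizes

# Pretraining-then-finetuning: every task stores its own full copy.
finetune_total = sum(backbone_params + h for h in head_params.values())

# Frozen setting: one shared backbone plus the per-task heads.
frozen_total = backbone_params + sum(head_params.values())
```

The paper's contribution is showing that this sharing need not cost much accuracy across diverse vision tasks.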
In this study, we investigate the light charged Higgs boson production via
$pp \rightarrow H^\pm bj$ in the type-I configuration of the Two-Higgs Doublet
Model (2HDM). In the viable parameter space, assuming either $h$ or $H$ is the
SM-like Higgs boson observed at the Large Hadron Collider (LHC), we focus on
the bosonic decays $H^\pm \rightarrow W^\pm A/h$ and examine the final states
arising from the above charged Higgs production and decay. We show here that
the suggested signatures can have sizable rates, reaching the pb level, when
the top quark is produced on-shell and $\tan\beta$ is small.
Authors: R. Benbrik, M. Krab, B. Manaut, M. Ouchemhou.
We present a dynamical simulation scheme to model the highly correlated
excited state dynamics of linear polyenes. We apply it to investigate the
internal conversion processes of carotenoids following their photoexcitation.
We use the extended Hubbard-Peierls model, $\hat{H}_{\textrm{UVP}}$, to
describe the $\pi$-electronic system coupled to nuclear degrees of freedom
supplemented by a Hamiltonian, $\hat{H}_{\epsilon}$, that explicitly breaks
both the particle-hole and two-fold rotation symmetries of idealized
carotenoids. The electronic degrees of freedom are treated quantum mechanically
by solving the time-dependent Schr\"odinger equation using the adaptive
time-dependent DMRG (tDMRG) method, while nuclear dynamics are treated via the
Ehrenfest equations of motion. By defining adiabatic excited states as the
eigenstates of the full Hamiltonian,
$\hat{H}=\hat{H}_{\textrm{UVP}}+\hat{H}_{\epsilon}$, and diabatic excited
states as eigenstates of $\hat{H}_{\textrm{UVP}}$, we present a computational
framework to monitor the internal conversion process from the initial
photoexcited state to the singlet triplet-pair states of carotenoids. We
further incorporate Lanczos-DMRG to the tDMRG-Ehrenfest method to calculate
transient absorption spectra from the evolving photoexcited state. We describe
the accuracy and convergence criteria for DMRG, and show that this method
accurately describes the dynamics of carotenoid excited states. We also discuss
the effect of $\hat{H}_{\epsilon}$ on the internal conversion process, and show
that its effect on the extent of internal conversion can be described by a
Landau-Zener-type transition. This methodological paper is a companion to our
more explanatory discussion of carotenoid excited state dynamics in,
$\textit{Photoexcited state dynamics and singlet fission in carotenoids}$, D.
Manawadu, T. N. Georges and W. Barford, $\textit{J. Phys. Chem. A}$ (2023).
Authors: Dilhan Manawadu, Darren J. Valentine, William Barford.
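The adiabatic/diabatic distinction used above can be illustrated with a two-level toy model (not the actual Hubbard-Peierls Hamiltonian): diagonalize $\hat{H}_{\textrm{UVP}}$ for the diabatic states and the full $\hat{H}=\hat{H}_{\textrm{UVP}}+\hat{H}_{\epsilon}$ for the adiabatic ones, and compare the two bases.

```python
import numpy as np

# Toy two-level stand-in: H_uvp plays the role of the symmetric Hamiltonian
# whose eigenstates are the diabatic states; a small symmetry-breaking
# H_eps mixes them, giving the adiabatic states of the full H.
H_uvp = np.diag([0.0, 1.0])
H_eps = 0.1 * np.array([[0.0, 1.0], [1.0, 0.0]])
H = H_uvp + H_eps

e_dia, v_dia = np.linalg.eigh(H_uvp)   # diabatic energies / states
e_adi, v_adi = np.linalg.eigh(H)       # adiabatic energies / states

# Squared overlaps between the two bases quantify the mixing that drives
# internal conversion between diabatic states.
S = np.abs(v_dia.T @ v_adi) ** 2
```

For the weak perturbation chosen here the bases nearly coincide; increasing the off-diagonal coupling increases the mixing, the regime where a Landau-Zener-type description becomes relevant.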
If your socks come out of the laundry all mixed up, how should you sort them?
We introduce and study a novel foot-sorting algorithm that uses feet to attempt
to sort a sock ordering; one can view this algorithm as an analogue of Knuth's
stack-sorting algorithm for set partitions. The sock orderings that can be
sorted using a fixed number of feet are characterized by Klazar's notion of set
partition pattern containment. We give an enumeration involving Fibonacci
numbers for the $1$-foot-sortable sock orderings within a naturally-arising
class. We also prove that if you have socks of $n$ different colors, then you
can always sort them using at most $\left\lceil\log_2(n)\right\rceil$ feet, and
we use a Ramsey-theoretic argument to show that this bound is tight.
Authors: Colin Defant, Noah Kravitz.
We demonstrate how Hahn et al.'s Bayesian Causal Forests model (BCF) can be
used to estimate conditional average treatment effects for the longitudinal
dataset in the 2022 American Causal Inference Conference Data Challenge.
Unfortunately, existing implementations of BCF do not scale to the size of the
challenge data. Therefore, we developed flexBCF -- a more scalable and flexible
implementation of BCF -- and used it in our challenge submission. We
investigate the sensitivity of our results to two ad hoc modeling choices we
made during our initial submission: (i) the choice of propensity score
estimation method and (ii) the use of sparsity-inducing regression tree priors.
While we found that our overall point predictions were not especially sensitive
to these modeling choices, we did observe that running BCF with flexibly
estimated propensity scores often yielded better-calibrated uncertainty
intervals.
Authors: Ajinkya H. Kokandakar, Hyunseung Kang, Sameer K. Deshpande.
The physics of how molecules, organelles, and foreign objects move within
living cells has been extensively studied in organisms ranging from bacteria to
human cells. In mammalian cells, in particular, cellular vesicles move across
the cell using motor proteins that carry the vesicle down the cytoskeleton to
their destination. We have recently noted several similarities between the
motion of such vesicles and that in disordered, "glassy" systems, but it
remains unclear whether that is a general observation or something specific to
certain vesicles in one particular cell type. Here we follow the motion of
mitochondria, the organelles responsible for cell energy production, in several
mammalian cell types over timescales ranging from 50 ms up to 70 s. Qualitative
observations show that individual mitochondria remain stalled within a
spatially limited region for extended periods of time, before moving longer
distances relatively quickly. Analysing this motion quantitatively, we observe
a displacement distribution that is roughly Gaussian for shorter distances
($\lesssim$ 0.05 $\mu$m) but which exhibits exponentially decaying tails at
longer distances (up to 0.40 $\mu$m). We show that this behaviour is
well-described by a model originally developed to describe the motion in glassy
systems. We extend these observations to three different objects
(mitochondria, lysosomes, and nano-sized beads enclosed in vesicles) and three
different mammalian cell types from two different organisms (human and mouse).
We provide further evidence that supports glass-like characteristics of the
motion by showing a difference between the time it takes to move a longer
distance for the first time and subsequent times, as well as a weak ergodicity
breaking of the motion. Overall, we demonstrate the ubiquity of glass-like
motion in mammalian cells, providing a different perspective on intracellular
motion.
Authors: B. Corci, O. Hooiveld, A. M. Dolga, C. Åberg.
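A Gaussian core with exponential tails is the signature of dynamic heterogeneity: a Gaussian whose variance itself fluctuates between trajectories produces heavy tails. A numpy sketch using an exponentially distributed variance (a standard toy model yielding a Laplace distribution, not the authors' actual fit):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Homogeneous motion: fixed diffusivity -> Gaussian displacements.
gaussian = rng.normal(0.0, 0.05, n)

# Heterogeneous, glass-like motion (toy model): the variance varies
# between trajectories; an exponentially distributed variance turns the
# Gaussian into a Laplace distribution with exponential tails.
var = rng.exponential(0.05 ** 2, n)
heavy = rng.normal(0.0, np.sqrt(var))

# Exponential tails make large displacements far more likely.
thresh = 0.25   # five standard deviations of the Gaussian case
p_gauss = np.mean(np.abs(gaussian) > thresh)
p_heavy = np.mean(np.abs(heavy) > thresh)
```

Both samples share the same overall variance, yet only the heterogeneous one shows appreciable probability of long jumps, mirroring the stalled-then-fast motion described above.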
An adaptive implicit-explicit (IMEX) BDF2 scheme based on the generalized
SAV approach is investigated for the Cahn-Hilliard equation, combined with a
Fourier spectral method in space. It is proved that the modified energy
dissipation law is unconditionally preserved at discrete levels. Under a mild
ratio restriction, namely $0<r_k:=\tau_k/\tau_{k-1}< r_{\max}\approx
4.8645$, we establish a rigorous error estimate in $H^1$-norm and achieve
optimal second-order accuracy in time. The proof involves the tools of discrete
orthogonal convolution (DOC) kernels and inequality zoom. It is worth noting
that the presented adaptive time-step scheme only requires solving one linear
system with constant coefficients at each time step. In our analysis, the
first-order consistent BDF1 scheme used for the first step does not cause order reduction in
$H^1$-norm. The $H^1$ bound of the numerical solution under periodic boundary
conditions can be derived without any restriction (such as zero mean of the
initial data). Finally, numerical examples are provided to verify our
theoretical analysis and the algorithm efficiency.
Authors: Yifan Wei, Jiwei Zhang, Chengchao Zhao, Yanmin Zhao.
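In practice the step-ratio restriction is easy to enforce inside an adaptive time stepper; a small sketch (the 0.99 safety factor is an arbitrary choice, not from the paper):

```python
R_MAX = 4.8645  # step-ratio threshold from the paper's analysis

def clip_steps(proposed, r_max=R_MAX):
    """Shrink any proposed adaptive time step whose ratio
    tau_k / tau_{k-1} would violate the restriction r_k < r_max."""
    steps = [proposed[0]]
    for tau in proposed[1:]:
        steps.append(min(tau, 0.99 * r_max * steps[-1]))
    return steps

# A proposed schedule that ramps up too aggressively gets smoothed out.
proposed = [1e-4, 1e-3, 1e-2, 5e-2]
steps = clip_steps(proposed)
ratios = [b / a for a, b in zip(steps, steps[1:])]
```

After clipping, every consecutive ratio satisfies the restriction, so the error estimate's hypothesis holds along the whole run.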
Realization of qubit gate sequences requires coherent microwave control pulses
with programmable amplitude, duration, spacing and phase. We propose an SRAM
based arbitrary waveform generator for cryogenic control of spin qubits. We
demonstrate in this work the cryogenic operation of a fully programmable radio
frequency arbitrary waveform generator in 14 nm FinFET technology. The waveform
sequence from a control processor can be stored in an SRAM memory array, which
can be programmed in real time. The waveform pattern is converted to microwave
pulses by a source-series-terminated digital to analog converter. The chip is
operational at 4 K, capable of generating an arbitrary envelope shape at the
desired carrier frequency. Total power consumption of the AWG is 40-140 mW at 4
K, depending upon the baud rate. A wide signal band of 1-17 GHz is measured at
4 K, while multiple qubit control can be achieved using frequency division
multiplexing at an average spurious free dynamic range of 40 dB. This work
paves the way to optimal qubit control and closed loop feedback control, which
is necessary to achieve low-latency error mitigation.
Authors: Mridula Prathapan, Peter Mueller, Christian Menolfi, Matthias Braendli, Marcel Kossel, Pier Andrea Francese, David Heim, Maria Vittoria Oropallo, Andrea Ruffino, Cezar Zota, Thomas Morf.
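Functionally, the signal chain is a stored envelope table played out through a DAC onto a microwave carrier. A numpy sketch of that datapath (sample rate, pulse width, and carrier below are illustrative choices, not the chip's specifications):

```python
import numpy as np

fs = 64e9    # sample rate (illustrative)
f_c = 8e9    # carrier frequency within the measured 1-17 GHz band
n = 1024

# "SRAM" lookup table holding the programmable envelope samples; a
# Gaussian pulse shape is a common choice for qubit gates.
t = np.arange(n) / fs
envelope = np.exp(-0.5 * ((t - t.mean()) / 2e-9) ** 2)

# The DAC output: the stored envelope modulated onto the carrier.
waveform = envelope * np.cos(2 * np.pi * f_c * t)
```

Reprogramming the table changes the envelope shape without touching the carrier path, which is what makes the generator "arbitrary"; frequency-division multiplexing would add further carriers in the same band.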
In offline reinforcement learning (RL), a learner leverages prior logged data
to learn a good policy without interacting with the environment. A major
challenge in applying such methods in practice is the lack of both
theoretically principled and practical tools for model selection and
evaluation. To address this, we study the problem of model selection in offline
RL with value function approximation. The learner is given a nested sequence of
model classes to minimize squared Bellman error and must select among these to
achieve a balance between approximation and estimation error of the classes. We
propose the first model selection algorithm for offline RL that achieves
minimax rate-optimal oracle inequalities up to logarithmic factors. The
algorithm, ModBE, takes as input a collection of candidate model classes and a
generic base offline RL algorithm. By successively eliminating model classes
using a novel one-sided generalization test, ModBE returns a policy with regret
scaling with the complexity of the minimally complete model class. In addition
to its theoretical guarantees, it is conceptually simple and computationally
efficient, amounting to solving a series of square loss regression problems and
then comparing relative square loss between classes. We conclude with several
numerical simulations showing it is capable of reliably selecting a good model
class.
Authors: Jonathan N. Lee, George Tucker, Ofir Nachum, Bo Dai, Emma Brunskill.
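The elimination idea, keep a larger class only when it beats the current one by more than a slack term, can be mimicked in a toy regression setting (nested polynomial classes stand in for value-function classes; this is a caricature, not the ModBE algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)

# Data generated by a degree-1 model plus noise; the nested classes are
# polynomials of degree 0, 1, 2, 3, and "Bellman error" is square loss.
x = rng.uniform(-1, 1, 400)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, 400)

def square_loss(deg):
    coeffs = np.polyfit(x, y, deg)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# One-sided comparison: move to a richer class only if it improves the
# empirical loss by more than a slack (standing in for the test's
# statistical threshold).
slack = 0.01
selected = 0
for deg in range(1, 4):
    if square_loss(selected) - square_loss(deg) > slack:
        selected = deg
```

The procedure stops at the smallest class that fits, the analogue of ModBE's "minimally complete" class, rather than the richest class available.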
In 1964, Erd\H{o}s proposed the problem of estimating the Tur\'an number of
the $d$-dimensional hypercube $Q_d$. Since $Q_d$ is a bipartite graph with
maximum degree $d$, it follows from results of F\"uredi and Alon, Krivelevich,
Sudakov that $\mathrm{ex}(n,Q_d)=O_d(n^{2-1/d})$. A recent general result of
Sudakov and Tomon implies the slightly stronger bound
$\mathrm{ex}(n,Q_d)=o(n^{2-1/d})$. We obtain the first power-improvement for
this old problem by showing that
$\mathrm{ex}(n,Q_d)=O_d(n^{2-\frac{1}{d-1}+\frac{1}{(d-1)2^{d-1}}})$. This
answers a question of Liu. Moreover, our techniques give a power improvement
for a larger class of graphs than cubes.
We use a similar method to prove that any $n$-vertex, properly edge-coloured
graph without a rainbow cycle has at most $O(n(\log n)^2)$ edges, improving the
previous best bound of $n(\log n)^{2+o(1)}$ by Tomon. Furthermore, we show that
any properly edge-coloured $n$-vertex graph with $\omega(n\log n)$ edges
contains a cycle which is almost rainbow: that is, almost all edges in it have
a unique colour. This latter result is tight.
Authors: Oliver Janzer, Benny Sudakov.
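That the new bound is a genuine power improvement can be checked numerically: with exact fractions, the new exponent is strictly smaller than the classical $2-1/d$ for every $d \ge 3$.

```python
from fractions import Fraction

def old_exponent(d):
    """Exponent from the Furedi / Alon-Krivelevich-Sudakov bound."""
    return Fraction(2) - Fraction(1, d)

def new_exponent(d):
    """Exponent from the Janzer-Sudakov bound."""
    return Fraction(2) - Fraction(1, d - 1) + Fraction(1, (d - 1) * 2 ** (d - 1))

# Positive gap = power improvement in the exponent of n.
gaps = {d: old_exponent(d) - new_exponent(d) for d in range(3, 8)}
```

For $d=3$ the exponent drops from $5/3$ to $13/8$, a saving of exactly $1/24$ in the exponent.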
We analyze the multitime statistics associated with pure dephasing systems
repeatedly probed with sharp measurements, and search for measurement protocols
whose statistics satisfies the Kolmogorov consistency conditions possibly up to
a finite order. We find a rich phenomenology of quantum dephasing processes
which can be interpreted in classical terms. In particular, if the underlying
dephasing process is Markovian, we find sufficient conditions under which
classicality holds at every order: this can be achieved by choosing the
dephasing and measurement bases to be fully compatible or fully incompatible,
that is, mutually unbiased bases (MUBs). For non-Markovian processes,
classicality can only be proven in the fully compatible case, thus revealing a
key difference between Markovian and non-Markovian pure dephasing processes.
Authors: Davide Lonigro, Dariusz Chruściński.
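The "fully incompatible" case refers to mutually unbiased bases (MUBs): two orthonormal bases $\{e_i\}$, $\{f_j\}$ in dimension $d$ with $|\langle e_i|f_j\rangle|^2 = 1/d$ for all $i,j$. For a qubit, the computational and Hadamard bases give a quick numerical check:

```python
import numpy as np

# Columns of each matrix are the basis vectors.
comp = np.eye(2)                                  # computational basis
had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard basis

# Squared overlaps between the two bases; for MUBs in dimension d = 2
# every entry should equal 1/2.
overlaps = np.abs(comp.conj().T @ had) ** 2
```

Measuring in a basis mutually unbiased with the dephasing basis is one of the two sufficient conditions the abstract identifies for classicality in the Markovian case.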