Papers made digestible
Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications.
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
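The paper defines its curvilinear planning space and edge-cost metric in full; as a rough, hedged sketch of the general idea (edges cost more the longer they are and the further they stray laterally from the taught path), here is a minimal Python illustration. The (s, l) parameterization and the weighting are assumptions for illustration, not the authors' implementation.

    import math

    def edge_cost(p, q, w_lateral=5.0):
        """Illustrative edge cost in a curvilinear frame (assumed form).

        p, q: (s, l) states, where s is arc length along the taught path and
        l is the signed lateral offset from it; w_lateral (assumed value)
        penalizes deviation from the taught path.
        """
        ds, dl = q[0] - p[0], q[1] - p[1]
        length = math.hypot(ds, dl)          # edge length in (s, l) space
        deviation = 0.5 * abs(p[1] + q[1])   # mean lateral offset along the edge
        return length + w_lateral * deviation * length

    # Staying on the taught path is cheaper than a laterally offset detour
    print(edge_cost((0.0, 0.0), (1.0, 0.0)), edge_cost((0.0, 0.5), (1.0, 0.5)))

A sampling-based planner that sums such edge costs prefers smooth paths hugging the taught route and deviates only as far as obstacles require.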
The statistical and design considerations that pertain to
dose optimization are discussed. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%.
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development and call for more
attention to benefit-risk consideration.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulation shows that dose optimization phase
2-3 designs are able to control the familywise type I error rates and yield
appropriate statistical power with a substantially smaller sample size than the
conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
Authors: Liyun Jiang, Ying Yuan.
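The operating characteristics above come from the authors' simulation study; purely as a shape-of-the-idea sketch, the Python snippet below simulates a heavily simplified seamless design with a binary endpoint in both stages, interim selection of the best-performing dose, and a stage-2-only comparison against a concurrent control. Sample sizes, response rates, and the test are illustrative assumptions, not the four designs evaluated in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def one_trial(p_doses, p_control, n1=30, n2=100):
        """One toy seamless phase 2-3 trial (illustrative assumptions throughout).

        Stage 1: randomize n1 patients per dose and select the dose with the
        best observed response rate.  Stage 2: randomize n2 patients to the
        selected dose and n2 to a concurrent control, then apply a one-sided
        two-sample z-test to stage-2 data only, so the interim selection does
        not inflate the type I error.
        """
        stage1_rates = rng.binomial(n1, p_doses) / n1
        best = int(np.argmax(stage1_rates))
        x_t = rng.binomial(n2, p_doses[best])
        x_c = rng.binomial(n2, p_control)
        pooled = (x_t + x_c) / (2 * n2)
        se = np.sqrt(2 * pooled * (1 - pooled) / n2) + 1e-12
        z = ((x_t - x_c) / n2) / se
        return z > 1.96                      # one-sided alpha of about 0.025

    # Familywise type I error under the global null (all doses equal to control)
    typeI = np.mean([one_trial(np.array([0.2, 0.2, 0.2]), 0.2) for _ in range(5000)])
    # Power under an alternative where the highest dose is truly better
    power = np.mean([one_trial(np.array([0.2, 0.3, 0.4]), 0.2) for _ in range(5000)])
    print(f"type I error ~ {typeI:.3f}, power ~ {power:.3f}")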
We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples.
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry does not
need any pre-training or special hardware such as GPUs, and is intended as a
drop-in replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
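GPry itself is installed with pip install gpry and documented at the linked repository; the snippet below is not GPry's API but a generic scikit-learn caricature of the core idea, a Gaussian Process surrogate of the log-posterior refined by an active-learning rule so that only a handful of expensive posterior evaluations are needed.

    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Toy stand-in for an expensive, non-Gaussian 2D log-posterior
    def log_post(x):
        return multivariate_normal.logpdf(
            [x[0], x[1] - 0.5 * x[0] ** 2], mean=[0, 0], cov=[[1.0, 0.0], [0.0, 0.5]])

    rng = np.random.default_rng(1)
    bounds = np.array([[-4.0, 4.0], [-3.0, 9.0]])

    # Small initial design of true posterior evaluations
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(8, 2))
    y = np.array([log_post(x) for x in X])

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                                  normalize_y=True)

    for _ in range(40):                      # active-learning loop
        gp.fit(X, y)
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + 2.0 * sigma)]   # high value or high uncertainty
        X = np.vstack([X, x_next])
        y = np.append(y, log_post(x_next))   # the only expensive calls are these

    print(f"surrogate built from {len(y)} posterior evaluations")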
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting. However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version.
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
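The precise b-scaling strategy and its tight analysis are given in the paper; the toy simulation below implements one natural reading of a kill-and-restart rule for unweighted jobs without release dates, where round i grants every unfinished job a non-preemptive budget of b^i and kills it if it does not finish in time. The within-round order and other details are assumptions.

    def b_scaling_total_completion(proc_times, b=3.0):
        """Toy kill-and-restart schedule (unweighted jobs, no release dates).

        Round i gives every unfinished job a non-preemptive budget of b**i; a
        job that does not finish within its budget is killed (the time is
        still spent) and retried from scratch in a later round.
        """
        remaining = set(range(len(proc_times)))
        completion = {}
        t, i = 0.0, 0
        while remaining:
            budget = b ** i
            for j in sorted(remaining):      # within-round order is an assumption
                if proc_times[j] <= budget:  # job fits: run it to completion
                    t += proc_times[j]
                    completion[j] = t
                    remaining.discard(j)
                else:                        # kill after the full budget elapses
                    t += budget
            i += 1
        return sum(completion.values())

    jobs = [1.0, 2.0, 4.0, 8.0, 16.0]
    alg = b_scaling_total_completion(jobs)
    opt = sum((len(jobs) - k) * p for k, p in enumerate(sorted(jobs)))  # clairvoyant SPT
    print(f"kill-and-restart: {alg:.1f}, clairvoyant optimum (SPT): {opt:.1f}")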
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and the effect of larger model sizes. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
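The study itself uses Swin Transformer backbones (up to the 3-billion-parameter SwinV2-G) with various task heads; the PyTorch snippet below only shows the generic frozen-backbone pattern under discussion, with a torchvision ResNet-50 and a linear head standing in as assumed placeholders.

    import torch
    import torch.nn as nn
    import torchvision

    # Pretrained backbone used purely as a frozen feature extractor
    backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
    backbone.fc = nn.Identity()              # drop the original classifier
    for p in backbone.parameters():
        p.requires_grad = False              # freeze: no gradients, no updates
    backbone.eval()                          # keep BatchNorm statistics fixed

    head = nn.Linear(2048, 10)               # only the task head is trained
    optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        with torch.no_grad():                # features come from the frozen model
            feats = backbone(images)
        loss = criterion(head(feats), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Smoke test with random data
    print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,))))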
We prove this conjecture.
Let $\mathrm{pm}(G)$ denote the number of perfect matchings of a graph $G$,
and let $K_{r\times 2n/r}$ denote the complete $r$-partite graph where each
part has size $2n/r$. Johnson, Kayll, and Palmer conjectured that, for $2n$
divisible by $r$ and any perfect matching $M$ of $K_{r\times 2n/r}$, we have
\[\frac{\mathrm{pm}(K_{r\times 2n/r}-M)}{\mathrm{pm}(K_{r\times 2n/r})}\sim
e^{-r/(2r-2)}.\] This conjecture can be viewed as a common generalization of
counting the number of derangements on $n$ letters, and of counting the number
of deranged matchings of $K_{2n}$. We prove this conjecture. In fact, we prove
the stronger result that if $R$ is a uniformly random perfect matching of
$K_{r\times 2n/r}$, then the number of edges that $R$ has in common with $M$
converges to a Poisson distribution with parameter $\frac{r}{2r-2}$.
Authors: Sam Spiro, Erlang Surya.
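Two special cases make the "common generalization" explicit: $r = 2$ gives $K_{n,n}$ and recovers derangements, while parts of size one ($r = 2n$, i.e. $K_{2n}$) recover deranged matchings,
\[\frac{\mathrm{pm}(K_{n,n}-M)}{\mathrm{pm}(K_{n,n})}=\frac{D_n}{n!}\sim
e^{-1}=e^{-2/(2\cdot 2-2)}, \qquad
\frac{\mathrm{pm}(K_{2n}-M)}{\mathrm{pm}(K_{2n})}\sim
e^{-1/2}=\lim_{r\to\infty}e^{-r/(2r-2)},\]
where $D_n$ denotes the number of derangements of $n$ letters.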
Data from both sensors are typically fused in a common reference frame prior to use in downstream perception tasks. During the life cycle of an AV, these calibration parameters may change. The ability to perform in-situ spatiotemporal calibration is essential to ensure reliable long-term operation. Our approach leverages the ability of the radar unit to measure its own ego-velocity relative to a fixed external reference frame. We analyze the identifiability of the spatiotemporal calibration problem and determine the motions necessary for calibration. Through a series of simulation studies, we characterize the sensitivity of our algorithm to measurement noise.
Autonomous vehicles (AVs) often depend on multiple sensors and sensing
modalities to mitigate data degradation and provide a measure of robustness
when operating in adverse conditions. Radars and cameras are a popular sensor
combination - although radar measurements are sparse in comparison to camera
images, radar scans are able to penetrate fog, rain, and snow. Data from both
sensors are typically fused in a common reference frame prior to use in
downstream perception tasks. However, accurate sensor fusion depends upon
knowledge of the spatial transform between the sensors and any temporal
misalignment that exists in their measurement times. During the life cycle of
an AV, these calibration parameters may change. The ability to perform in-situ
spatiotemporal calibration is essential to ensure reliable long-term operation.
State-of-the-art 3D radar-camera spatiotemporal calibration algorithms require
bespoke calibration targets, which are not readily available in the field. In
this paper, we describe an algorithm for targetless spatiotemporal calibration
that is able to operate without specialized infrastructure. Our approach
leverages the ability of the radar unit to measure its own ego-velocity
relative to a fixed external reference frame. We analyze the identifiability of
the spatiotemporal calibration problem and determine the motions necessary for
calibration. Through a series of simulation studies, we characterize the
sensitivity of our algorithm to measurement noise. Finally, we demonstrate
accurate calibration for three real-world systems, including a handheld sensor
rig and a vehicle-mounted sensor array. Our results show that we are able to
match the performance of an existing, target-based method, while calibrating in
arbitrary (infrastructure-free) environments.
Authors: Emmett Wise, Qilong Cheng, Jonathan Kelly.
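As a hedged sketch of the kind of velocity-level constraint such targetless methods exploit (generic notation, not necessarily the paper's formulation): for a radar and a camera rigidly mounted on the same platform, with extrinsic rotation $\mathbf{C}_{rc}$, lever arm $\mathbf{r}_{rc}$ (radar origin expressed in the camera frame), and clock offset $\tau$, the radar-measured ego-velocity relates to the camera's linear and angular velocities $\mathbf{v}_c$, $\boldsymbol{\omega}_c$ (e.g., from visual odometry) via
\[\mathbf{v}_r(t+\tau)=\mathbf{C}_{rc}\bigl(\mathbf{v}_c(t)+\boldsymbol{\omega}_c(t)\times\mathbf{r}_{rc}\bigr),\]
so stacking such constraints over a sufficiently exciting trajectory (one that includes rotational motion) renders the spatial and temporal parameters identifiable.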
We study numerically the impact of many-body interactions on the quantum boomerang effect. In the case of weakly interacting bosons, we find a partial destruction of the quantum boomerang effect, in agreement with the earlier mean-field study [Phys. Rev. A \textbf{102}, 013303 (2020)]. For the Tonks-Girardeau gas, we show the presence of the full quantum boomerang effect.
We study numerically the impact of many-body interactions on the quantum
boomerang effect. We consider various cases: weakly interacting bosons, the
Tonks-Girardeau gas, and strongly interacting bosons (which may be mapped onto
weakly interacting fermions). Numerical simulations are performed using the
time-evolving block decimation algorithm, a quasi-exact method based on matrix
product states. In the case of weakly interacting bosons, we find a partial
destruction of the quantum boomerang effect, in agreement with the earlier
mean-field study [Phys. Rev. A \textbf{102}, 013303 (2020)]. For the
Tonks-Girardeau gas, we show the presence of the full quantum boomerang effect.
For strongly interacting bosons, we observe a partial boomerang effect. We show
that the destruction of the quantum boomerang effect is universal and does not
depend on the details of the interaction between particles.
Authors: Jakub Janarek, Jakub Zakrzewski, Dominique Delande.
The state of many physical, biological and socio-technical systems evolves by combining smooth local transitions and abrupt resetting events to a set of reference values. The inclusion of the resetting mechanism not only provides the possibility of modeling a wide variety of realistic systems but also leads to interesting novel phenomenology not present in reset-free cases. Coupled multiparticle systems subjected to resetting are a necessary generalization in the theory of stochastic resetting, and the model presented herein serves as an illustrative, natural and solvable example of such a generalization.
The state of many physical, biological and socio-technical systems evolves by
combining smooth local transitions and abrupt resetting events to a set of
reference values. The inclusion of the resetting mechanism not only provides
the possibility of modeling a wide variety of realistic systems but also leads
to interesting novel phenomenology not present in reset-free cases. However,
most models where stochastic resetting is studied address the case of a finite
number of uncorrelated variables, commonly a single one, such as the position
of non-interacting random walkers. Here we overcome this limitation by framing
the process of network growth with node deletion as a stochastic resetting
problem where an arbitrarily large number of degrees of freedom are coupled and
influence each other, both in the resetting and non-resetting (growth) events.
We find the exact, full-time solution of the model, and several
out-of-equilibrium properties are characterized as a function of the growth and
resetting rates, such as the emergence of a time-dependent percolation-like
phase transition, and first-passage statistics. Coupled multiparticle systems
subjected to resetting are a necessary generalization in the theory of
stochastic resetting, and the model presented herein serves as an illustrative,
natural and solvable example of such a generalization.
Authors: Oriol Artime.
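The full model and its exact solution are given in the paper; the short simulation below is only a runnable caricature of growth punctuated by abrupt events, using uniform random attachment and uniform random node removal with assumed rates rather than the paper's specific growth and resetting rules.

    import random
    import networkx as nx

    def grow_with_deletion(steps=5000, m=2, q=0.2, seed=0):
        """Toy network growth with node deletion (illustrative, not the paper's model).

        Each step: with probability 1-q attach a new node to m uniformly random
        existing nodes (degrees grow smoothly); with probability q delete a
        uniformly random node (its neighbours' degrees jump down abruptly).
        """
        rng = random.Random(seed)
        G = nx.complete_graph(m + 1)         # small seed network
        next_id = m + 1
        for _ in range(steps):
            if rng.random() < q and G.number_of_nodes() > m + 1:
                G.remove_node(rng.choice(list(G.nodes)))
            else:
                targets = rng.sample(list(G.nodes), m)
                G.add_node(next_id)
                for t in targets:
                    G.add_edge(next_id, t)
                next_id += 1
        return G

    G = grow_with_deletion()
    degrees = [d for _, d in G.degree()]
    print(G.number_of_nodes(), "nodes, mean degree", sum(degrees) / len(degrees))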
The lines have
equivalent widths ranging from 2.3 to 26.9 mÅ. We performed a spectral
synthesis analysis to determine the cesium content in the atmosphere. Non-LTE
atmosphere models were computed by considering cesium explicitly in the
calculations.
We report the first detection of cesium (Z = 55) in the atmosphere of a white
dwarf. Around a dozen absorption lines of Cs IV, Cs V, and Cs VI have been
identified in the Far Ultraviolet Spectroscopic Explorer spectrum of the
He-rich white dwarf HD 149499B (Teff = 49,500 K, log g = 7.97). The lines have
equivalent widths ranging from 2.3 to 26.9 mÅ. We performed a spectral
synthesis analysis to determine the cesium content in the atmosphere. Non-LTE
atmosphere models were computed by considering cesium explicitly in the
calculations. For this purpose we calculated oscillator strengths for the
bound-bound transitions of Cs IV-Cs VI with both AUTOSTRUCTURE
(multiconfiguration Breit-Pauli) and GRASP2K (multiconfiguration Dirac-Fock)
atomic structure codes as neither measured nor theoretical values are reported
in the literature. We determined a cesium abundance of log N(Cs)/N(He) =
-5.45(0.35), which can also be expressed in terms of the mass fraction log
X(Cs) = -3.95(0.35).
Authors: P. Chayer, C. Mendoza, M. Meléndez, J. Deprince, J. Dupuis.
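A quick consistency check of the two quoted abundances: assuming the atmospheric mass is carried almost entirely by helium, the number ratio converts to a mass fraction through the ratio of atomic weights ($A_{\mathrm{Cs}}\approx 132.9$, $A_{\mathrm{He}}\approx 4.003$),
\[\log X(\mathrm{Cs})\approx\log\frac{N(\mathrm{Cs})}{N(\mathrm{He})}+\log\frac{A_{\mathrm{Cs}}}{A_{\mathrm{He}}}\approx -5.45+1.52=-3.93,\]
in agreement with the quoted $\log X(\mathrm{Cs}) = -3.95(0.35)$ to well within the stated uncertainty.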
Atherosclerosis is an inflammatory disease characterised by the formation of plaques, which are deposits of lipids and cholesterol-laden macrophages that form in the artery wall. The inflammation is often non-resolving, due in large part to changes in normal macrophage anti-inflammatory behaviour that are induced by the toxic plaque microenvironment. These changes include higher death rates, defective efferocytic uptake of dead cells, and reduced rates of emigration. We find that high rates of cell death relative to efferocytic uptake result in a plaque populated mostly by dead cells.
Atherosclerosis is an inflammatory disease characterised by the formation of
plaques, which are deposits of lipids and cholesterol-laden macrophages that
form in the artery wall. The inflammation is often non-resolving, due in large
part to changes in normal macrophage anti-inflammatory behaviour that are
induced by the toxic plaque microenvironment. These changes include higher
death rates, defective efferocytic uptake of dead cells, and reduced rates of
emigration. We develop a free boundary multiphase model for early
atherosclerotic plaques, and we use it to investigate the effects of impaired
macrophage anti-inflammatory behaviour on plaque structure and growth. We find
that high rates of cell death relative to efferocytic uptake result in a
plaque populated mostly by dead cells. We also find that emigration can
potentially slow or halt plaque growth by allowing material to exit the plaque,
but this is contingent on the availability of live macrophage foam cells in the
deep plaque. Finally, we introduce an additional bead species to model
macrophage tagging via microspheres, and we use the extended model to explore
how high rates of cell death and low rates of efferocytosis and emigration
prevent the clearance of macrophages from the plaque.
Authors: Ishraq U. Ahmed, Helen M. Byrne, Mary R. Myerscough.
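The paper develops a spatial free boundary multiphase model; the snippet below is a drastically simplified, non-spatial ODE caricature (all rates are illustrative assumptions) meant only to echo the qualitative finding that high death rates relative to efferocytic uptake leave the plaque dominated by dead material.

    def plaque_composition(death, uptake, recruit=1.0, emigrate=0.1,
                           dt=0.01, t_end=500.0):
        """Toy ODE caricature, not the paper's free boundary multiphase model.

        L: live macrophage foam cells, D: dead cellular material.
          dL/dt = recruit - death*L - emigrate*L
          dD/dt = death*L - uptake*L*D      (efferocytosis requires live cells)
        Returns the long-time fraction of plaque material that is dead.
        """
        L, D = 0.1, 0.0
        for _ in range(int(t_end / dt)):     # forward Euler integration
            dL = recruit - (death + emigrate) * L
            dD = death * L - uptake * L * D
            L += dt * dL
            D += dt * dD
        return D / (L + D)

    # Low death relative to uptake vs. high death relative to uptake
    print("low death :", round(plaque_composition(death=0.2, uptake=1.0), 2))
    print("high death:", round(plaque_composition(death=2.0, uptake=0.1), 2))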
Deep learning vision systems are widely deployed across applications where
reliability is critical. While
existing benchmarks surface examples challenging for models, they do not
explain why such mistakes arise. Regardless of architecture, learning paradigm,
or training procedure, we find models have consistent failure modes across
ImageNet-X categories. We also find that while data augmentation can improve
robustness to certain factors, it induces spill-over effects on other factors.
For example, strong random cropping hurts robustness
on smaller objects.
Deep learning vision systems are widely deployed across applications where
reliability is critical. However, even today's best models can fail to
recognize an object when its pose, lighting, or background varies. While
existing benchmarks surface examples challenging for models, they do not
explain why such mistakes arise. To address this need, we introduce ImageNet-X,
a set of sixteen human annotations of factors such as pose, background, or
lighting for the entire ImageNet-1k validation set as well as a random subset of
12k training images. Equipped with ImageNet-X, we investigate 2,200 current
recognition models and study the types of mistakes as a function of a model's (1)
architecture, e.g. transformer vs. convolutional, (2) learning paradigm, e.g.
supervised vs. self-supervised, and (3) training procedures, e.g., data
augmentation. Regardless of these choices, we find models have consistent
failure modes across ImageNet-X categories. We also find that while data
augmentation can improve robustness to certain factors, it induces spill-over
effects on other factors. For example, strong random cropping hurts robustness
on smaller objects. Together, these insights suggest that, to advance the robustness
of modern vision models, future research should focus on collecting additional
data and understanding data augmentation schemes. Along with these insights, we
release a toolkit based on ImageNet-X to spur further study into the mistakes
image recognition systems make.
Authors: Badr Youbi Idrissi, Diane Bouchacourt, Randall Balestriero, Ivan Evtimov, Caner Hazirbas, Nicolas Ballas, Pascal Vincent, Michal Drozdzal, David Lopez-Paz, Mark Ibrahim.
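ImageNet-X ships with its own analysis toolkit; the pandas snippet below merely illustrates the basic kind of question the annotations support, computing per-factor error rates and their gap to the overall error rate from hypothetical (factor, correct) records.

    import pandas as pd

    # Hypothetical per-image records: the annotated factor of variation and
    # whether a given model classified the image correctly.
    df = pd.DataFrame({
        "factor":  ["pose", "pose", "background", "lighting", "background", "lighting"],
        "correct": [True,   False,  True,         False,      False,        True],
    })

    overall = 1.0 - df["correct"].mean()                      # overall error rate
    per_factor = 1.0 - df.groupby("factor")["correct"].mean()
    print((per_factor - overall).sort_values(ascending=False))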
This generalizes a result of Guillemin and Kazhdan to the setting of magnetic flows.
Let $M$ be a closed surface and let $\{g_s \ | \ s \in (-\epsilon,
\epsilon)\}$ be a smooth one-parameter family of Riemannian metrics on $M$.
Also let $\{\kappa_s : M \rightarrow \mathbb{R} \ | \ s \in (-\epsilon,
\epsilon)\}$ be a smooth one-parameter family of functions on $M$. Then the
family $\{(g_s, \kappa_s) \ | \ s \in (-\epsilon, \epsilon)\}$ gives rise to a
family of magnetic flows on $TM$. We show that if the magnetic curvatures are
negative for $s \in (-\epsilon, \epsilon)$ and the lengths of each periodic
orbit remains constant as the parameter $s$ varies, then there exists a smooth
family of diffeomorphisms $\{f_s : M \rightarrow M \ | \ s \in (-\epsilon,
\epsilon)\}$ such that $f_s^*(g_s) = g_0$ and $f_s^*(\kappa_s) = \kappa_0$.
This generalizes a result of Guillemin and Kazhdan to the setting of magnetic
flows.
Authors: James Marshall Reber.
The main cause of the asymmetries
appears to be binding energy differences between the mirror systems.
Beta decays of mirror nuclei differ in Q-value, but will otherwise proceed
with transitions of similar strength. The current status is reviewed: Fermi
transitions are all very similar, whereas Gamow-Teller transitions can differ
in strength by more than a factor of two. The main cause of the asymmetries
appears to be binding energy differences between the mirror systems.
Authors: K. Riisager.
Edge and fog computing are considered key enablers for applications where centralized cloud-based solutions are not suitable. We further discuss their interactions and collaborations in many applications such as cloud offloading, smart cities, health care, and smart agriculture. Though there are still challenges in the development of such distributed systems, early research to tackle those limitations has also surfaced.
With the rapid growth of the Internet of Things (IoT) and a wide range of
mobile devices, the conventional cloud computing paradigm faces significant
challenges (high latency, bandwidth cost, etc.). Motivated by those constraints
and concerns for the future of the IoT, modern architectures are gearing toward
distributing the cloud computational resources to remote locations where most
end-devices are located. Edge and fog computing are considered key
enablers for applications where centralized cloud-based solutions are not
suitable. In this paper, we review the high-level definition of edge, fog,
cloud computing, and their configurations in various IoT scenarios. We further
discuss their interactions and collaborations in many applications such as
cloud offloading, smart cities, health care, and smart agriculture. Though
there are still challenges in the development of such distributed systems,
early research to tackle those limitations has also surfaced.
Authors: Thong Vo, Pranjal Dave, Gaurav Bajpai, Rasha Kashef.