Papers made digestible
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
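A minimal sketch of the curvilinear-planning idea, assuming a hypothetical (station, lateral-offset) edge cost of our own devising; the weight w_lat and the exact cost form are illustrative, not the paper's metric:

```python
import math

def edge_cost(p, q, w_lat=2.0):
    """Illustrative edge cost in a curvilinear frame.

    p, q: (s, l) tuples -- longitudinal station along the taught path and
    signed lateral offset from it, in metres.
    w_lat: assumed weight penalizing travel away from the taught path.
    """
    ds, dl = q[0] - p[0], q[1] - p[1]
    length = math.hypot(ds, dl)          # edge length in the planning frame
    deviation = 0.5 * abs(p[1] + q[1])   # mean lateral offset along the edge
    return length * (1.0 + w_lat * deviation)

# A lateral detour around an obstacle costs more than staying on the path:
print(edge_cost((0.0, 0.0), (1.0, 0.0)))  # on-path edge
print(edge_cost((0.0, 0.0), (1.0, 0.5)))  # edge deviating 0.5 m laterally
```

Weighting lateral offset by edge length biases a sample-based planner back toward the taught path, where prior terrain knowledge is most trustworthy.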
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development and to call for more
attention to benefit-risk considerations.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulation shows that dose optimization phase
2-3 designs are able to control the familywise type I error rates and yield
appropriate statistical power with a substantially smaller sample size than the
conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
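As a rough illustration of why interim dose selection demands care, here is a deliberately simplified simulation (binary response, naive pooling of stages, no toxicity or multiplicity adjustments; sample sizes are arbitrary and not those of the paper):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_trial(p_doses, p_ctrl, n1=30, n2=100, alpha=0.025):
    """One toy seamless phase 2-3 trial.

    Stage 1: n1 patients per dose; select the dose with the highest
    observed response rate. Stage 2: n2 further patients on the selected
    dose; the pooled selected-dose data are z-tested against a
    concurrent control arm of n1 + n2 patients.
    """
    stage1 = [rng.binomial(n1, p) for p in p_doses]
    sel = int(np.argmax(stage1))
    x_trt = stage1[sel] + rng.binomial(n2, p_doses[sel])  # pooled across stages
    x_ctl = rng.binomial(n1 + n2, p_ctrl)
    n = n1 + n2
    pbar = (x_trt + x_ctl) / (2 * n)
    se = np.sqrt(2 * pbar * (1 - pbar) / n)
    return (x_trt - x_ctl) / n / se > norm.ppf(1 - alpha)

# Under the global null, naive pooling after selecting the best-looking dose
# tends to inflate the familywise type I error above alpha -- the kind of
# issue the designs above are constructed to control.
reps = 20000
fwer = sum(simulate_trial([0.2, 0.2, 0.2], 0.2) for _ in range(reps)) / reps
print("empirical familywise type I error:", fwer)
```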
Authors: Liyun Jiang, Ying Yuan.
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry requires
no pre-training or special hardware such as GPUs, and is intended as a
drop-in replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular, we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
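GPry's own API is not reproduced here; the sketch below only illustrates the underlying idea of a GP surrogate for the log-posterior refined by an active learning loop, using scikit-learn and an acquisition heuristic of our choosing:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def log_post(x):                         # toy 2D Gaussian log-posterior
    return -0.5 * np.sum(x**2, axis=-1)

X = rng.uniform(-4, 4, size=(5, 2))      # a few initial evaluations
y = log_post(X)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]))
for _ in range(25):                      # active-learning loop
    gp.fit(X, y)
    cand = rng.uniform(-4, 4, size=(512, 2))
    mu, sd = gp.predict(cand, return_std=True)
    # Acquisition: evaluate where the surrogate is uncertain *and* the
    # posterior mass is expected to be high (one common heuristic).
    acq = np.exp(mu - mu.max()) * sd
    x_new = cand[np.argmax(acq)]
    X = np.vstack([X, x_new])
    y = np.append(y, log_post(x_new))

print("surrogate log-posterior at the mode:", gp.predict([[0.0, 0.0]])[0])
```

Thirty evaluations of the true posterior suffice here, whereas an MCMC run on the same toy target would typically need thousands.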
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
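A toy rendering of the deterministic $b$-scaling kill-and-restart strategy for the basic setting (no release dates, unit weights, single machine); the round structure and job order here are simplifying assumptions, not the paper's exact algorithm:

```python
def b_scaling_total_completion(proc_times, b=2.0):
    """In round k, each unfinished job is run with budget b**k; unless it
    finishes within the budget it is killed, losing all progress, and is
    retried (from scratch) in a later round. The check p <= budget only
    stands for 'the job happened to finish', which the scheduler observes
    without knowing p in advance (non-clairvoyance)."""
    remaining = dict(enumerate(proc_times))
    t = total = 0.0
    k = 0
    while remaining:
        budget = b ** k
        for j, p in sorted(remaining.items()):   # fixed job order per round
            if p <= budget:          # completes within this round's budget
                t += p
                total += t
                del remaining[j]
            else:                    # killed; restarted from scratch later
                t += budget
        k += 1
    return total

def spt_total_completion(proc_times):
    """Clairvoyant optimum: shortest processing time first."""
    t = total = 0.0
    for p in sorted(proc_times):
        t += p
        total += t
    return total

jobs = [1.0, 3.0, 7.0, 15.0]
print(b_scaling_total_completion(jobs), "vs clairvoyant optimum",
      spt_total_completion(jobs))
```

On this instance the non-clairvoyant schedule pays roughly a factor 2.2 over the optimum, comfortably within the $(1+3\sqrt{3})$ guarantee.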
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and what effect larger model sizes have. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
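The recipe itself is simple to express; a minimal PyTorch sketch with a stand-in backbone (ResNet-50 here purely for brevity; the paper studies Swin-family models up to SwinV2-G):

```python
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = nn.Identity()          # expose 2048-d features
for p in backbone.parameters():      # freeze: no gradients, no updates
    p.requires_grad = False
backbone.eval()

head = nn.Linear(2048, 10)           # the only trainable parameters
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(4, 3, 224, 224)      # dummy batch
labels = torch.randint(0, 10, (4,))
with torch.no_grad():                # features come from the frozen model
    feats = backbone(x)
loss = nn.functional.cross_entropy(head(feats), labels)
loss.backward()
opt.step()
print("loss:", loss.item())
```

One frozen base network can then serve many tasks, with only the lightweight task heads differing.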
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
We derive simple but nearly tight upper and lower bounds for the binomial
lower tail probability (with straightforward generalization to the upper tail
probability) that apply to the whole parameter regime. These bounds are easy to
compute and are tight within a constant factor of $89/44$. Moreover, they are
asymptotically tight in the regimes of large deviation and moderate deviation.
By virtue of a surprising connection with Ramanujan's equation, we also provide
strong evidence suggesting that the lower bound is tight within a factor of
$1.26434$. It may even be regarded as the natural lower bound, given its
simplicity and appealing properties. Our bounds significantly outperform the
familiar Chernoff bound and reverse Chernoff bounds known in the literature and
may find applications in various research areas.
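To see the gap the new bounds aim to close, a quick numerical comparison of the exact binomial lower tail against the familiar Chernoff bound (the paper's own bounds are not reproduced here):

```python
import numpy as np
from scipy.stats import binom

def kl(a, p):
    """KL divergence D(a || p) between Bernoulli(a) and Bernoulli(p)."""
    return a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))

n, p, k = 1000, 0.5, 420                 # lower-tail event {X <= k}, k/n < p
exact = binom.cdf(k, n, p)
chernoff = np.exp(-n * kl(k / n, p))     # classic upper bound on the tail
print(f"exact tail: {exact:.3e}")
print(f"Chernoff:   {chernoff:.3e}  ({chernoff / exact:.1f}x too large)")
```

Already in this mild-deviation regime the Chernoff bound overshoots by an order of magnitude, which is the slack that constant-factor-tight bounds remove.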
Authors: Huangjun Zhu, Zihao Li, Masahito Hayashi.
We investigate what quantum advantages can be obtained in multipartite
non-cooperative games by studying how different types of quantum resources can
improve social welfare, a measure of the quality of a Nash equilibrium. We
study how these advantages in quantum social welfare depend on the bias of the
game, and improve upon the separation that was previously obtained using
pseudo-telepathic strategies.
Two different quantum settings are analysed: a first, in which players are
given direct access to an entangled quantum state, and a second, which we
introduce here, in which they are only given classical advice obtained from
quantum devices. For a given game $G$, these two settings give rise to
different equilibria characterised by the sets of equilibrium correlations
$Q_\textrm{corr}(G)$ and $Q(G)$, respectively. We show that $Q(G)\subseteq
Q_\textrm{corr}(G)$ and, by considering explicit example games and exploiting
SDP optimisation methods, provide indications of a strict separation between
the social welfare attainable in the two settings. This provides a new angle
towards understanding the limits and advantages of delegating quantum
measurements.
Authors: Alastair A. Abbott, Mehdi Mhalla, Pierre Pocreau.
High-dimensional compositional data are commonplace in the modern omics
sciences amongst others. Analysis of compositional data requires a proper
choice of orthonormal coordinate representation as their relative nature is not
compatible with the direct use of standard statistical methods. Principal
balances, a specific class of log-ratio coordinates, are well suited to this
context since they are constructed in such a way that the first few coordinates
capture most of the variability in the original data. Focusing on regression
and classification problems in high dimensions, we propose a novel Partial
Least Squares (PLS) based procedure to construct principal balances that
maximize the explained variability of the response variable and notably
facilitate interpretability compared to the ordinary PLS formulation. The
proposed PLS principal balance approach can be understood as a generalized
version of common log-contrast models, since multiple orthonormal (instead of
one) log-contrasts are estimated simultaneously. We demonstrate the performance of
the method using both simulated and real data sets.
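The starting point is ordinary PLS on log-ratio coordinates; a brief scikit-learn sketch with simulated data (the paper's principal balances are sparse orthonormal log-contrasts, which this plain-PLS illustration does not construct):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)

# Toy compositional data: 50 samples, 10 parts, rows summing to 1.
X = rng.dirichlet(np.ones(10), size=50)
y = np.log(X[:, 0] / X[:, 1]) + 0.1 * rng.standard_normal(50)  # log-ratio signal

# Centred log-ratio (clr) coordinates make the relative data usable by PLS.
clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)

pls = PLSRegression(n_components=2).fit(clr, y)
print("R^2 on training data:", pls.score(clr, y))
# pls.x_weights_ gives the dense log-contrast directions most predictive of
# y; principal balances refine these into interpretable orthonormal balances.
```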
Authors: V. Nesrstová, I. Wilms, J. Palarea-Albaladejo, P. Filzmoser, J. A. Martín-Fernández, D. Friedecký, K. Hron.
The bulk-edge correspondence (BEC) is the hallmark of topological systems. In
continuous (non-lattice) Hermitian systems with unbounded wavevector it was
recently shown that the BEC is modified. How would it be further affected in
non-Hermitian systems, experiencing loss and/or gain? In this work we take the
first step in this direction, by studying a Hermitian continuous system with
non-Hermitian boundary conditions. We find in this case that edge modes emerge
at roots of the scattering matrix, as opposed to the Hermitian case, where they
emerge at poles (or, more accurately, coalescence of roots and poles). This
entails a nontrivial modification to the relative Levinson's theorem. We then
show that the topological structure remains the same as in the Hermitian case,
and the generalized BEC holds, provided one employs appropriate modified
contours in the wavevector plane, so that the scattering matrix phase winding
counts the edge modes correctly. We exemplify all this using a paradigmatic
model of waves in a shallow ocean or active systems in the presence of odd
viscosity, as well as a 2D electron gas with Hall viscosity. We use this
opportunity to examine the case of large odd viscosity, where the scattering
matrix becomes $2\times2$, which has not been discussed in previous works on
the Hermitian generalized BEC.
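Schematically (our loose paraphrase; the paper's precise contours and conventions may differ), the generalized correspondence counts edge modes through the winding of the scattering-matrix phase,
$$N_\textrm{edge} = \frac{1}{2\pi i}\oint_{\mathcal{C}} \mathrm{d}\ln S(k),$$
with $\mathcal{C}$ a suitably modified contour in the complex wavevector plane, chosen so that roots of $S$ (where the non-Hermitian edge modes now sit) rather than poles are enclosed and counted.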
Authors: Orr Rapoport, Moshe Goldstein.
A switched quantum channel with indefinite causal order is studied for the
fundamental metrological task of phase estimation on a qubit unitary operator
affected by quantum thermal noise. Specific capabilities are reported in the
switched channel with indefinite order, not accessible with conventional
estimation approaches with definite order. Phase estimation can be performed by
measuring the control qubit alone, although it does not actively interact with
the unitary process; only the probe qubit does so. Also, phase estimation
becomes possible with a fully depolarized input probe or with an input probe
aligned with the rotation axis of the unitary, neither of which is possible
with conventional approaches. The present study extends to thermal noise the
investigations previously carried out with the more symmetric and isotropic
qubit depolarizing noise, and it contributes to the timely exploration of
properties of quantum channels with indefinite causal order relevant to quantum
signal and information processing.
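For background, the switch construction underlying such channels has a standard Kraus form (textbook material, not this paper's specific thermal-noise analysis): if the two processes have Kraus operators $\{A_i\}$ and $\{B_j\}$, the switched channel on probe and control is generated by
$$S_{ij} = A_i B_j \otimes |0\rangle\langle 0|_c + B_j A_i \otimes |1\rangle\langle 1|_c,$$
so a control prepared in $(|0\rangle_c + |1\rangle_c)/\sqrt{2}$ coherently superposes the two causal orders, and measuring it in the $|\pm\rangle_c$ basis can reveal phase information even though only the probe traverses the unitary.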
Authors: Francois Chapeau-Blondeau.
We study the exact physical quantities of a competing spin chain which
contains many interesting and meaningful couplings, including nearest-neighbor,
next-nearest-neighbor, chiral three-spin and Dzyaloshinskii-Moriya
interactions as well as non-parallel boundary magnetic fields, in the
thermodynamic limit. We obtain the density of zero roots, surface energies and
elementary excitations in different regimes of model parameters. Due to the
competition of various interactions, the surface energy and excitation spectrum
differ in many respects from those of the Heisenberg spin chain.
Authors: Pengcheng Lu, Yi Qiao, Junpeng Cao, Wen-Li Yang.
TIRCAM2 is the facility near-infrared Imager at the Devasthal 3.6-m telescope
in northern India, equipped with an Aladdin III InSb array detector. We have
pioneered the use of TIRCAM2 for very fast photometry, with the aim of
recording Lunar Occultations (LO). This mode is now operational and publicly
offered. In this paper we describe the relevant instrumental details, we
provide references to the LO method and the underlying data analysis
procedures, and we list the LO events recorded so far. Among the results, we
highlight a few which have led to the measurement of one small-separation
binary star and of two stellar angular diameters. We conclude with a brief
outlook on further possible instrumental developments and an estimate of the
scientific return. In particular, we find that the LO technique can detect
sources down to K ~ 9 mag with SNR = 1 on the DOT telescope. Angular diameters
larger than ~1 milliarcsecond (mas) could be measured with SNR above 10, i.e.,
down to K ~ 6 mag. These numbers are only indicative and will depend strongly
on observing conditions such as lunar phase and rate of lunar limb motion.
Based on statistics alone, there are several thousand LO events observable in
principle with the given telescope and instrument every year.
Authors: Saurabh Sharma, Andrea Richichi, Devendra K. Ojha, Brajesh Kumar, Milind Naik, Jeewan Rawat, Darshan S. Bora, Kuldeep Belwal, Prakash Dhami, Mohit Bisht.
Recent advances in quantum computing and in particular, the introduction of
quantum GANs, have led to increased interest in quantum zero-sum game theory,
extending the scope of learning algorithms for classical games into the quantum
realm. In this paper, we focus on learning in quantum zero-sum games under
Matrix Multiplicative Weights Update (a generalization of the multiplicative
weights update method) and its continuous analogue, Quantum Replicator
Dynamics. When each player selects their state according to quantum replicator
dynamics, we show that the system exhibits conservation laws in a
quantum-information theoretic sense. Moreover, we show that the system exhibits
Poincaré recurrence, meaning that almost all orbits return arbitrarily close to
their initial conditions infinitely often. Our analysis generalizes previous
results in the case of classical games.
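A minimal numerical sketch of Matrix Multiplicative Weights Update on a random zero-sum quantum game (the payoff operator, dimensions, and learning rate are all illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
d = 2                                    # qubit strategies for both players

# Random Hermitian payoff operator R on the joint space; player 1 receives
# Tr[R (rho (x) sigma)] and player 2 the negative of it (zero-sum).
M = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
R = (M + M.conj().T) / 2

def grad_rho(sigma):
    """G with Tr[rho G] = Tr[R (rho (x) sigma)]: partial trace over player 2."""
    G = (R @ np.kron(np.eye(d), sigma)).reshape(d, d, d, d)
    return np.trace(G, axis1=1, axis2=3)

def grad_sigma(rho):
    G = (R @ np.kron(rho, np.eye(d))).reshape(d, d, d, d)
    return np.trace(G, axis1=0, axis2=2)

eta = 0.1
S1 = np.zeros((d, d), dtype=complex)     # accumulated payoff gradients
S2 = np.zeros((d, d), dtype=complex)
for _ in range(500):
    rho = expm(eta * S1); rho /= np.trace(rho)         # maximizer's MMWU step
    sigma = expm(-eta * S2); sigma /= np.trace(sigma)  # minimizer's MMWU step
    S1 += grad_rho(sigma)
    S2 += grad_sigma(rho)

print("payoff at final iterates:", np.real(np.trace(R @ np.kron(rho, sigma))))
```

Tracking the quantum relative entropy between the iterates and an equilibrium along this trajectory is one way to observe the conservation laws and the recurrent (non-convergent) behaviour the paper establishes.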
Authors: Rahul Jain, Georgios Piliouras, Ryann Sim.
Manipulating bosonic condensates with electric fields is very challenging as
the electric fields do not directly interact with the neutral particles of the
condensate. Here we demonstrate a simple electric method to tune the vorticity
of exciton polariton condensates in a strong coupling liquid crystal (LC)
microcavity with CsPbBr$_3$ microplates as active material at room temperature.
In such a microcavity, the LC molecular director can be electrically modulated
giving control over the polariton condensation in different modes. For
isotropic non-resonant optical pumping we demonstrate the spontaneous formation
of vortices with topological charges of -2, -1, +1, and +2. The topological
vortex charge is controlled by a voltage in the range of 1 to 10 V applied to
the microcavity sample. This control is achieved by the interplay of a built-in
potential gradient, the anisotropy of the optically active perovskite
microplates, and the electrically controllable LC molecular director in our
system with intentionally broken rotational symmetry. Besides the fundamental
interest in the achieved electric polariton vortex control at room temperature,
our work paves the way to micron-sized emitters with electric control over the
emitted light's phase profile and quantized orbital angular momentum for
information processing and integration into photonic circuits.
Authors: Xiaokun Zhai, Xuekai Ma, Ying Gao, Chunzi Xing, Meini Gao, Haitao Dai, Xiao Wang, Anlian Pan, Stefan Schumacher, Tingge Gao.
Context: The Internet of Things (IoT) has become an important class of
distributed systems thanks to the widespread availability of cheap embedded
devices equipped with
different networking technologies. Although ubiquitous, developing IoT systems
remains challenging.
Inquiry: A recent field study with 194 IoT developers identifies debugging as
one of the main challenges faced when developing IoT systems. This stems from
the lack of debugging tools that take into account the unique properties of
IoT systems, such as non-deterministic data and hardware-restricted devices. On the
one hand, offline debuggers allow developers to analyse post-failure recorded
program information, but impose too much overhead on the devices while
generating such information.
Furthermore, the analysis process is also time-consuming and might miss
contextual information relevant to finding the root cause of bugs. On the other
hand, online debuggers do allow debugging a program upon a failure while
providing contextual information (e.g., stack trace). In particular, remote
online debuggers enable debugging of devices without physical access to them.
However, they experience debugging interference due to network delays, which
complicates bug reproducibility, and they have limited support for dynamic software
updates on remote devices.
Approach: This paper proposes out-of-things debugging, an online debugging
approach especially designed for IoT systems. The debugger is always-on as it
ensures constant availability to, for instance, debug post-deployment situations.
Upon a failure or breakpoint, out-of-things debugging moves the state of a
deployed application to the developer's machine. Developers can then debug the
application locally by applying operations (e.g., step commands) to the
retrieved state. Once debugging is finished, developers can commit bug fixes to
the device through live update capabilities. Finally, by means of a
fine-grained flexible interface for accessing remote resources, developers have
full control over the debugging overhead imposed on the device, and the access
to device hardware resources (e.g., sensors) needed during local debugging.
Knowledge: Out-of-things debugging maintains good properties of remote
debugging as it does not require physical access to the device to debug it,
while reducing debugging interference: operations (e.g., stepping) issued on
the debugger happen locally and therefore incur no network delays.
Furthermore, device resources are only accessed when requested by the user
which further mitigates overhead and opens avenues for mocking or simulation of
non-accessed resources.
Grounding: We implemented an out-of-things debugger as an extension to a
WebAssembly Virtual Machine and benchmarked its suitability for IoT. In
particular, we compared our solution to remote debugging alternatives based on
metrics such as network overhead, memory usage, scalability, and usability in
production settings. From the benchmarks, we conclude that our debugger
exhibits competitive performance in addition to confining overhead without
sacrificing debugging convenience and flexibility.
Importance: Out-of-things debugging enables debugging of IoT systems by means
of classical online operations (e.g., stepwise execution) while addressing
IoT-specific concerns (e.g., hardware limitations). We show that having the
debugger always-on does not have to come at the cost of performance loss or
increased overhead, but can instead provide a smooth and flexible debugging
experience for IoT systems.
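A toy model of the snapshot-and-pull flow (hypothetical names and a trivial "VM"; the actual system extends a WebAssembly virtual machine):

```python
import copy

class DeviceVM:
    """Stand-in for a device-side VM that can snapshot its state."""
    def __init__(self, program):
        self.program, self.pc, self.vars = program, 0, {}

    def run_until_breakpoint(self, bp):
        while self.pc < len(self.program) and self.pc != bp:
            self.program[self.pc](self.vars)
            self.pc += 1

    def snapshot(self):
        # A real debugger would serialize this and ship it over the network.
        return {"pc": self.pc, "vars": copy.deepcopy(self.vars)}

class LocalDebugger:
    """Developer-side session stepping through a *pulled* copy of the state,
    so step commands incur no network round-trips."""
    def __init__(self, program, snap):
        self.program, self.pc = program, snap["pc"]
        self.vars = dict(snap["vars"])

    def step(self):
        self.program[self.pc](self.vars)
        self.pc += 1
        return self.pc, self.vars

program = [
    lambda v: v.__setitem__("x", 1),
    lambda v: v.__setitem__("y", v["x"] + 41),
    lambda v: v.__setitem__("z", v["y"] * 2),
]
device = DeviceVM(program)
device.run_until_breakpoint(bp=1)                # device hits the breakpoint
dbg = LocalDebugger(program, device.snapshot())  # state moves to the dev machine
print(dbg.step())                                # local stepping on the copy
```

Device sensors and other hardware resources would be fetched (or mocked) only when the local session actually touches them, which is where the overhead control comes from.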
Authors: Carlos Rojas Castillo, Matteo Marra, Jim Bauwens, Elisa Gonzalez Boix.