Papers made digestible
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
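The kind of edge-cost metric described above can be sketched in a curvilinear (s, d) planning frame, where s is arc length along the taught path and d is lateral offset. The weighting scheme and state layout below are illustrative assumptions, not the paper's actual implementation:

```python
import math

def edge_cost(p1, p2, lateral_weight=2.0):
    """Cost of an edge between two states in a curvilinear (s, d) frame,
    where s is arc length along the taught path and d is lateral offset.
    Penalizing |d| keeps plans close to previously traversed terrain."""
    s1, d1 = p1
    s2, d2 = p2
    length = math.hypot(s2 - s1, d2 - d1)           # Euclidean edge length
    lateral = 0.5 * (abs(d1) + abs(d2)) * length    # integrated |d| along edge
    return length + lateral_weight * lateral

# Staying on the taught path (d = 0) is cheaper than swerving around it,
# so a sampling planner biased by this metric hugs the path when it can.
on_path = edge_cost((0.0, 0.0), (1.0, 0.0))
swerving = edge_cost((0.0, 0.0), (1.0, 0.5))
```

A sampling planner using such a metric naturally trades a small detour around an obstacle against the cost of leaving known terrain.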
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development and calls for more
attention to benefit-risk considerations.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulation shows that dose optimization phase
2-3 designs are able to control the familywise type I error rates and yield
appropriate statistical power with substantially smaller sample size than the
conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
Authors: Liyun Jiang, Ying Yuan.
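The selection-then-confirmation logic can be illustrated with a toy Monte Carlo check of familywise type I error control. The normal endpoints, sample sizes, and one-sided z-test below are illustrative assumptions, not the actual designs studied in the paper:

```python
import random
from statistics import fmean

def simulate_type1(n_sims=2000, n1=30, n2=60, n_doses=3, z_crit=1.96, seed=7):
    """Toy Monte Carlo for a seamless phase 2-3 design under the global
    null: stage 1 randomizes n1 patients per dose and selects the dose
    with the best observed mean; stage 2 enrolls fresh patients on the
    selected dose and a concurrent control and runs a one-sided z-test.
    Any rejection is a familywise type I error."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        # Stage 1 (dose selection): all true effects are 0 under the null.
        means = [fmean(rng.gauss(0.0, 1.0) for _ in range(n1))
                 for _ in range(n_doses)]
        _selected = max(range(n_doses), key=lambda d: means[d])
        # Stage 2 (confirmation) uses only data from fresh patients,
        # so the selection step does not bias the final test.
        trt = [rng.gauss(0.0, 1.0) for _ in range(n2)]
        ctl = [rng.gauss(0.0, 1.0) for _ in range(n2)]
        z = (fmean(trt) - fmean(ctl)) / (2.0 / n2) ** 0.5
        rejections += z > z_crit
    return rejections / n_sims

rate = simulate_type1()
```

Because the confirmatory test uses only post-selection data, the error rate stays near the nominal one-sided 2.5% despite the interim dose selection; designs that reuse stage 1 data need more careful adjustment.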
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry needs no
pre-training or special hardware such as GPUs, and is intended as a drop-in
replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and in the definition of the GP prior. In
particular, we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
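The core idea, a GP surrogate of the log-posterior refined by active learning, can be sketched with plain NumPy. This is a toy 1-D illustration that deliberately does not use GPry's actual API (see the linked repository for that); the kernel, acquisition rule, and target are all illustrative assumptions:

```python
import numpy as np

def log_post(x):
    """Toy target: log of an unnormalized 1-D Gaussian posterior."""
    return -0.5 * (x - 2.0) ** 2

def gp_fit_predict(X, y, Xs, scale=1.0, jitter=1e-8):
    """Zero-mean GP regression with an RBF kernel: posterior mean and
    standard deviation at the query points Xs."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / scale) ** 2)
    Ks = np.exp(-0.5 * ((Xs[:, None] - X[None, :]) / scale) ** 2)
    L = np.linalg.cholesky(K + jitter * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 0.0, None)
    return mean, np.sqrt(var)

rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=3)             # tiny initial design
y = log_post(X)
grid = np.linspace(-5.0, 5.0, 201)
for _ in range(10):                            # active-learning loop
    mean, std = gp_fit_predict(X, y, grid)
    x_new = grid[np.argmax(mean + 2.0 * std)]  # upper-confidence-bound rule
    X = np.append(X, x_new)
    y = np.append(y, log_post(x_new))
best = float(X[np.argmax(y)])                  # best point found so far
```

Each new evaluation is placed where the surrogate is most promising or most uncertain, which is what cuts the number of true posterior calls so drastically.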
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
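The b-scaling kill-and-restart strategy is easy to simulate. The sketch below is a toy, unit-weight version under stated assumptions (all jobs available at time 0, budget b**k per phase, all progress lost on a kill), compared against the clairvoyant shortest-processing-time optimum:

```python
def b_scaling_total_completion(jobs, b=2.0):
    """Toy deterministic b-scaling kill-and-restart schedule: in phase k,
    each unfinished job is run from scratch with time budget b**k; if its
    (a priori unknown) processing time fits the budget it completes,
    otherwise it is killed and all progress is lost."""
    remaining = list(jobs)
    t = total = 0.0
    k = 0
    while remaining:
        budget = b ** k
        survivors = []
        for p in remaining:
            if p <= budget:
                t += p
                total += t           # job finishes at the current time
            else:
                t += budget          # job is killed after its full budget
                survivors.append(p)
        remaining = survivors
        k += 1
    return total

def spt_total_completion(jobs):
    """Clairvoyant optimum for total completion time: shortest job first."""
    t = total = 0.0
    for p in sorted(jobs):
        t += p
        total += t
    return total

jobs = [0.7, 1.5, 3.0, 5.0, 9.0]
ratio = b_scaling_total_completion(jobs) / spt_total_completion(jobs)
```

On instances like this the empirical ratio sits well below the worst-case guarantee of $(1+3\sqrt{3})\approx 6.197$; the guarantee is tight only for adversarial job sets.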
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and the effect of larger model sizes. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
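The frozen-backbone idea can be miniaturized with NumPy: a fixed, never-updated random feature extractor stands in for a pretrained model, and only a lightweight head is fit to the downstream task. The extractor and toy task below are illustrative assumptions, not SwinV2 or any benchmark from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "pretrained backbone": a fixed random feature extractor whose
# weights are frozen -- only the task head below is ever trained.
W_frozen = 3.0 * rng.normal(size=(2, 128))
b_frozen = rng.normal(size=128)

def backbone(x):
    return np.tanh(x @ W_frozen + b_frozen)   # frozen features

# Toy downstream task: classify points inside/outside a disc.
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)

# "Adapting to the downstream task" = fitting a lightweight head only,
# here a ridge-regularized linear readout in closed form.
F = backbone(X)
head = np.linalg.solve(F.T @ F + 1e-2 * np.eye(128), F.T @ (2.0 * y - 1.0))

accuracy = ((F @ head > 0).astype(float) == y).mean()
```

The point of the sketch is the division of labour: all task-specific learning happens in the small head, which is why the frozen setting has so few adaptable parameters and why the choice of pretraining task matters so much.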
We introduce a variant of the Erd\H{o}s--R\'enyi random graph where the number
of vertices is random and follows a Poisson law. A very simple Markov property
of the model entails that the \L{}ukasiewicz exploration is made of
\textit{independent} Poisson increments. Using a vanilla Poisson counting
process, this enables us to give very short proofs of classical results such as
the phase transition for the giant component or the connectedness for the
standard Erd\H{o}s--R\'enyi model.
Authors: Nicolas Curien.
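A minimal simulation of the model, assuming it is exactly "sample N ~ Poisson(lam), then keep each edge independently with probability p"; the supercritical choice p = 2/lam illustrates the giant-component phase transition mentioned above:

```python
import math
import random
from collections import deque

def sample_poisson(lam, rng):
    """Knuth's inversion sampler for Poisson(lam), stdlib only."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def poisson_er(lam, p, rng):
    """Erdős–Rényi graph on a Poisson(lam) number of vertices."""
    n = sample_poisson(lam, rng)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def largest_component(adj):
    """Size of the largest connected component via BFS."""
    seen, best = set(), 0
    for s in range(len(adj)):
        if s in seen:
            continue
        queue, comp = deque([s]), 0
        seen.add(s)
        while queue:
            v = queue.popleft()
            comp += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, comp)
    return best

rng = random.Random(1)
lam = 300
adj = poisson_er(lam, 2.0 / lam, rng)     # supercritical: mean degree ~ 2
giant_fraction = largest_component(adj) / len(adj)
```

With mean degree c = 2 > 1 a single giant component emerges, containing the fraction y of vertices solving y = 1 - e^{-cy} (about 0.80 for c = 2).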
This paper presents LE3D, a novel data-drift detection framework for
preserving data integrity and confidentiality. LE3D is a generalisable platform
for evaluating novel drift detection mechanisms within Internet of Things
(IoT) sensor deployments. Our framework operates in a distributed manner,
preserving data privacy while still being adaptable to new sensors with minimal
online reconfiguration. Our framework currently supports multiple drift
estimators for time-series IoT data and can easily be extended to accommodate
new data types and drift detection mechanisms. This demo will illustrate the
functionality of LE3D under a real-world-like scenario.
Authors: Ioannis Mavromatis, Aftab Khan.
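LE3D's concrete drift estimators are not specified in the abstract. As an illustration, the sketch below implements one generic estimator of the kind such a framework might host: a sliding window of a univariate sensor stream compared against a frozen reference window. All class names, thresholds, and stream parameters are hypothetical:

```python
import random
from collections import deque
from statistics import fmean, pstdev

class WindowDriftDetector:
    """Toy distributional drift detector for a univariate sensor stream:
    the mean of a sliding window is compared against a frozen reference
    window, flagging drift beyond k standard errors of the reference."""

    def __init__(self, window=50, k=4.0):
        self.window, self.k = window, k
        self.reference = []              # first `window` samples, then frozen
        self.recent = deque(maxlen=window)

    def update(self, x):
        if len(self.reference) < self.window:
            self.reference.append(x)     # still collecting the reference
            return False
        self.recent.append(x)
        if len(self.recent) < self.window:
            return False
        mu = fmean(self.reference)
        se = (pstdev(self.reference) or 1e-9) / self.window ** 0.5
        return abs(fmean(self.recent) - mu) > self.k * se

rng = random.Random(0)
stream = [rng.gauss(20.0, 0.5) for _ in range(200)]    # stable sensor
stream += [rng.gauss(23.0, 0.5) for _ in range(100)]   # mean drift sets in

detector = WindowDriftDetector()
first_alarm = next((i for i, x in enumerate(stream) if detector.update(x)),
                   None)
```

A detector of this shape needs no labels and only a bounded amount of state per sensor, which is what makes it suitable for distributed, privacy-preserving deployments.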
Lenses encode protocols for synchronising systems. We continue the work begun
by Chollet et al. at the Applied Category Theory Adjoint School in 2020 to
study the properties of the category of small categories and asymmetric delta
lenses. The forgetful functor from the category of lenses to the category of
functors is already known to reflect monos and epis and preserve epis; we show
that it preserves monos, and give a simpler proof that it preserves epis.
Together this gives a complete characterisation of the monic and epic lenses in
terms of elementary properties of their get functors.
Next, we initiate the study of coequalisers of lenses. We observe that not
all parallel pairs of lenses have coequalisers, and that the forgetful functor
from the category of lenses to the category of functors neither preserves nor
reflects all coequalisers. However, some coequalisers are reflected; we study
when this occurs, and then use what we learned to show that every epic lens is
regular, and that discrete opfibrations have pushouts along monic lenses.
Corollaries include that every monic lens is effective, every monic epic lens
is an isomorphism, and the class of all epic lenses and the class of all monic
lenses form an orthogonal factorisation system.
Authors: Matthew Di Meglio.
Mining structural priors in data is a widely recognized technique for
hyperspectral image (HSI) denoising tasks, with typical approaches including
model-based methods and data-based methods. Model-based methods have good
generalization ability, but their runtime cannot meet the fast-processing
requirements of practical situations due to the large size of HSI data $
\mathbf{X} \in \mathbb{R}^{MN\times B}$. Data-based methods, by contrast,
perform very fast on new test data once trained, but their generalization
ability is often insufficient. In this paper, we propose a fast
model-based HSI denoising approach. Specifically, we propose a novel
regularizer named Representative Coefficient Total Variation (RCTV) to
simultaneously characterize the low rank and local smooth properties. The RCTV
regularizer is proposed based on the observation that the representative
coefficient matrix $\mathbf{U}\in\mathbb{R}^{MN\times R} (R\ll B)$ obtained by
orthogonally transforming the original HSI $\mathbf{X}$ can inherit the strong
local-smooth prior of $\mathbf{X}$. Since $R/B$ is very small, the HSI
denoising model based on the RCTV regularizer has lower time complexity.
Additionally, we find that the representative coefficient matrix $\mathbf{U}$
is robust to noise, and thus the RCTV regularizer can somewhat promote the
robustness of the HSI denoising model. Extensive experiments on mixed noise
removal demonstrate the superiority of the proposed method both in denoising
performance and denoising speed compared with other state-of-the-art methods.
Remarkably, the denoising speed of our proposed method outperforms all the
model-based techniques and is comparable with the deep learning-based
approaches.
Authors: Jiangjun Peng, Hailin Wang, Xiangyong Cao, Xinlin Liu, Xiangyu Rui, Deyu Meng.
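The role of the representative coefficient matrix U can be illustrated on synthetic data: projecting X onto R << B orthogonal directions preserves the local-smooth spatial structure while shrinking the number of maps a total-variation regularizer must touch. This is a sketch of the underlying observation, not the authors' full RCTV denoising model; sizes and the synthetic signal are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M = N = 32                    # spatial dimensions
B, R = 64, 4                  # number of bands vs. representative coefficients

# Synthetic low-rank HSI: a few smooth spatial maps mixed across B bands.
xx, yy = np.meshgrid(np.linspace(0, 1, M), np.linspace(0, 1, N))
maps = np.stack([xx, yy, np.sin(3 * xx), np.cos(3 * yy)], axis=-1)
A = rng.normal(size=(4, B))
X = maps.reshape(M * N, 4) @ A + 0.05 * rng.normal(size=(M * N, B))

# Orthogonal transform from the SVD; keep only R columns.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U = X @ Vt[:R].T              # representative coefficient matrix, (MN, R)

def tv(img):
    """Anisotropic total variation of a 2-D map (local-smoothness proxy)."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

# Regularizing R maps instead of B is the source of the speedup, while the
# coefficient maps remain spatially smooth, like the original bands.
tv_U = sum(tv(U[:, r].reshape(M, N)) for r in range(R))
tv_X = sum(tv(X[:, b].reshape(M, N)) for b in range(B))
```

Because the TV penalty now acts on R maps rather than B, the per-iteration cost of the regularized model drops by roughly the factor R/B.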
Development of task guidance systems for aiding humans in a situated task
remains a challenging problem. The role of search (information retrieval) and
conversational systems for task guidance has immense potential to help the task
performers achieve various goals. However, there are several technical
challenges that need to be addressed to deliver such conversational systems,
where common supervised approaches fail to deliver the expected results in
terms of overall performance, user experience and adaptation to realistic
conditions. In this preliminary work we first highlight some of the challenges
involved during the development of such systems. We then provide an overview of
existing datasets available and highlight their limitations. We finally develop
a model-in-the-loop wizard-of-oz based data collection tool and perform a pilot
experiment.
Authors: Ramesh Manuvinakurike, Sovan Biswas, Giuseppe Raffa, Richard Beckwith, Anthony Rhodes, Meng Shi, Gesem Gudino Mejia, Saurav Sahay, Lama Nachman.
In this work, a doubly resonant photonic crystal (PhC) cavity using the
merged bound states in the continuum (BICs) is proposed to obtain a higher
second harmonic generation (SHG) efficiency. First, by scanning geometry
parameters, the accidental BICs and a band-edge mode outside the light cone can
be obtained. Then, as the lattice constant or the thickness of the slab is
adjusted, the accidental BICs merge. A supercell with large and small holes
is constructed, and the band-edge mode outside the light cone can be
mode-matched with the merged-BICs mode. Finally, the heterostructure PhC cavity
is designed. The merged BICs show a high quality factor for the photonic
crystal with finite size. Consequently, the SHG efficiency at the lattice
constant near the merged BICs, ~6000% W^-1, is higher than that of an isolated
BIC.
Authors: Rui Ge, Xiangmin Liu, Xiongshuo Yan, Xianfeng Chen, Yuping Chen.
This manuscript introduces a passivity-based control methodology for
fully-actuated mechanical systems with symmetric or asymmetric dead-zones. To
this end, we find a smooth approximation of the inverse of the function that
describes such a nonlinearity. Then, we propose an energy and damping injection
approach - based on the PI-PBC technique - that compensates for the dead-zone.
Moreover, we provide an analysis of the performance of the proposed controller
near the equilibrium. We conclude this paper by experimentally validating the
results on a two degrees-of-freedom planar manipulator.
Authors: Carmen Chan-Zheng, Pablo Borja, Jacquelien M. A. Scherpen.
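The smooth dead-zone-inverse idea can be sketched as follows; the tanh-based approximation and the width/sharpness parameters are illustrative assumptions, not necessarily the function used in the manuscript:

```python
import math

def dead_zone(u, delta=0.5):
    """Symmetric dead-zone nonlinearity of half-width delta."""
    if u > delta:
        return u - delta
    if u < -delta:
        return u + delta
    return 0.0

def smooth_inverse(v, delta=0.5, eps=0.01):
    """Smooth approximation of the dead-zone inverse: the exact inverse
    v + delta*sign(v) is discontinuous at v = 0, so sign is replaced by
    tanh(v/eps), which is smooth and recovers the inverse as eps -> 0."""
    return v + delta * math.tanh(v / eps)

# Away from the origin the composition is near-identity, so the control
# signal v is (approximately) what actually reaches the plant; near v = 0
# the approximation degrades, which motivates a local performance analysis.
errors = [abs(dead_zone(smooth_inverse(v)) - v)
          for v in (-2.0, -1.0, -0.3, 0.3, 1.0, 2.0)]
```

Pre-compensating the actuator this way lets the subsequent energy- and damping-injection design proceed as if the dead-zone were absent, up to a small residual near the equilibrium.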
Using a compilation of Halpha fluxes for 384 star forming galaxies detected
during the VESTIGE survey, we study several important scaling relations for a
complete sample of galaxies in a rich environment. The extraordinary
sensitivity of the data allows us to sample the whole dynamic range of the
Halpha luminosity function, from massive (M*~10^11 Mo) to dwarf systems
(M*~10^6 Mo). This extends previous works to a dynamic range in stellar mass
and star formation rate (10^-4<SFR<10 Mo yr^-1) never explored before. The main
sequence (MS) relation derived for all star forming galaxies within one virial
radius of the Virgo cluster has a slope comparable to that observed in other
nearby samples of isolated objects, but has a dispersion ~3 times larger. The
dispersion is tightly connected to the available amount of HI gas, with
gas-poor systems located far below objects of similar stellar mass but with a
normal HI content. When measured on unperturbed galaxies with a normal HI gas
content, the relation has a slope a=0.92, an intercept b=-1.57, and a scatter
~0.40. We compare these observational results to the predictions of models. The
observed scatter in the MS relation can be reproduced only after a violent and
active stripping process such as ram-pressure that removes gas from the disc
and quenches star formation on short (<1 Gyr) timescales. This rules out milder
processes such as starvation. This interpretation is also consistent with the
position of galaxies of different star formation activity and gas content
within the phase-space diagram. We also show that the star forming regions
formed in the stripped material outside perturbed galaxies are located well
above the MS relation drawn by unperturbed systems. These HII regions, which
might be at the origin of compact sources typical in rich environments, are
undergoing a starburst phase lasting only <50 Myr, later becoming quiescent
systems.
Authors: A. Boselli, M. Fossati, J. Roediger, M. Boquien, M. Fumagalli, M. Balogh, S. Boissier, J. Braine, L. Ciesla, P. Côté, J. C. Cuillandre, L. Ferrarese, G. Gavazzi, S. Gwyn, Junais, G. Hensler, A. Longobardi, M. Sun.
Strongly correlated quantum particles in lattice potentials are the building
blocks for a large variety of quantum insulators, for instance Mott phases and
density waves breaking the lattice symmetry. Such collective states are
accessible to bosonic and fermionic systems. To expand further the spectrum of
accessible quantum matter phases, mixing both species is theoretically
appealing, since density order then competes with phase separation. Here we
manipulate such a Bose-Fermi mixture by confining neutral (boson-like) and
charged (fermion-like) dipolar excitons in an artificial square lattice of a
GaAs bilayer. At unitary lattice filling, strong inter- and intra-species
interactions stabilise insulating phases when the fraction of charged excitons
is around (1/3, 1/2, 2/3). We show that dual Bose-Fermi density waves are
then realised, with species ordered in alternating stripes. Our observations
highlight that dipolar excitons allow for controlled implementations of
Bose-Fermi Hubbard models extended by off-site interactions.
Authors: Camille Lagoin, Stephan Suffit, Kirk Baldwin, Loren Pfeiffer, Francois Dubin.
A notable feature of non-Hermitian systems with skin effects is the
sensitivity of their spectra and eigenstates to the boundary conditions. In the
literature, three types of boundary conditions have been explored: periodic
boundary conditions, open boundary conditions, and a defect in the system
acting as a boundary. In this work, we introduce a fourth type of boundary
condition, provided by a giant atom. The giant atom couples to a nonreciprocal
Su-Schrieffer-Heeger (SSH) chain at two points and plays the role of a defect.
We study the spectrum and
localization of eigenstates of the system and find that the giant atom can
induce asymmetric zero modes. A remarkable feature is that bulk states might
localize at the left or the right chain-atom coupling sites in weak
localization regimes. This bipolar localization leads to Bloch-like states,
even though translational invariance is broken. Moreover, we find that the
localization is markedly weaker than in the case with two small atoms or with
open boundary conditions, even in strong coupling regimes. These intriguing
results indicate that nonlocal coupling of a giant atom to a nonreciprocal SSH
chain
weakens localization of the eigenstates. We also show that the Lyapunov
exponent in the long-time dynamics in real space can act as a witness of the
localized bulk states.
Authors: Junjie Wang, Fude Li, X. X. Yi.
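For context, the skin effect on a bare nonreciprocal SSH chain (no giant atom) can be checked numerically: under open boundaries the eigenstates pile up at one edge. The hopping parameterization below is an illustrative assumption, not the paper's model parameters, and the giant-atom coupling is deliberately omitted:

```python
import numpy as np

def nonreciprocal_ssh(n_cells, t1=1.0, t2=1.0, gamma=0.4):
    """Open-boundary nonreciprocal SSH chain: the intracell hopping is
    asymmetric (t1 + gamma rightward, t1 - gamma leftward), which is the
    ingredient responsible for the non-Hermitian skin effect."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for c in range(n_cells):
        a, b = 2 * c, 2 * c + 1
        H[b, a] = t1 + gamma          # intracell, rightward (amplified)
        H[a, b] = t1 - gamma          # intracell, leftward (attenuated)
        if c + 1 < n_cells:
            H[2 * c + 2, b] = t2      # intercell hopping (reciprocal)
            H[b, 2 * c + 2] = t2
    return H

H = nonreciprocal_ssh(30)
_, vecs = np.linalg.eig(H)
density = (np.abs(vecs) ** 2).mean(axis=1)   # average weight per site
density /= density.sum()

# Skin effect: with dominant rightward hopping, the right eigenvectors
# localize at the right edge, so the left half carries little weight.
left_weight = float(density[: len(density) // 2].sum())
```

Replacing the open boundary with a two-point giant-atom coupling, as the paper does, modifies exactly this edge accumulation, which is why the spectrum and localization become sensitive probes of the new boundary condition.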