Papers made digestible
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
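To make the planning idea above concrete, the sketch below shows an illustrative edge cost in a curvilinear frame (arc length $s$ along the taught path, signed lateral offset $d$). The weighting and the function itself are assumptions for illustration, not the paper's exact metric, but they capture the idea of penalizing lateral deviation so that repeats hug the taught path unless an obstacle forces a detour.

```python
import math

def edge_cost(p, q, w_lat=5.0, n_samples=10):
    """Illustrative edge cost between two states p = (s, d) and q in a
    curvilinear frame: s is arc length along the taught path, d is the
    signed lateral offset from it.

    Cost = Euclidean edge length in the curvilinear frame
           + w_lat * average lateral deviation along the edge,
    so edges that stray from the taught path (|d| large) are penalized
    and the planner prefers minimal lateral deviation."""
    s0, d0 = p
    s1, d1 = q
    length = math.hypot(s1 - s0, d1 - d0)
    # Approximate the integral of |d| along the edge by sampling.
    avg_abs_d = sum(
        abs(d0 + (d1 - d0) * i / (n_samples - 1)) for i in range(n_samples)
    ) / n_samples
    return length + w_lat * avg_abs_d

# Example: a detour around an obstacle costs more than staying on the path.
print(edge_cost((0.0, 0.0), (2.0, 0.0)))   # on-path edge
print(edge_cost((0.0, 0.0), (2.0, 1.0)))   # edge that deviates laterally
```

A sampling-based planner that uses such a cost will naturally prefer edges that return to the taught path once an obstacle has been cleared.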
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development, calling for more
attention to benefit-risk considerations.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulations show that phase 2-3
dose-optimization designs are able to control the familywise type I error rates and yield
appropriate statistical power with substantially smaller sample size than the
conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
Authors: Liyun Jiang, Ying Yuan.
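As a rough illustration of how the operating characteristics of a seamless design can be checked by simulation, the toy below simulates a binary-endpoint trial in which stage 1 randomizes patients across doses and a control, the dose with the best observed response is selected at the interim, and stage 2 compares that dose (pooling its stage 1 and stage 2 patients) against the control. The response rates, arm sizes, and the naive selection/analysis rule are assumptions, not the authors' designs; in fact, the naive pooled analysis shown here typically inflates the type I error above the nominal level, which is exactly why these designs require careful, selection-aware planning.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_trial(p_doses, p_control, n1=40, n2=80, alpha=0.025):
    """One seamless phase 2-3 trial with a binary (response) endpoint.
    Stage 1: n1 patients per arm (doses + control); pick the dose with the
    highest observed response rate (a simple stand-in for dose optimization).
    Stage 2: n2 more patients on the selected dose and on the control;
    final one-sided two-proportion z-test pools stage 1 and stage 2 data."""
    x1 = rng.binomial(n1, p_doses)            # stage-1 responses per dose
    c1 = rng.binomial(n1, p_control)          # stage-1 control responses
    best = int(np.argmax(x1))                 # interim dose selection
    x2 = rng.binomial(n2, p_doses[best])      # stage-2 responses, selected dose
    c2 = rng.binomial(n2, p_control)          # stage-2 control responses
    x, nx = x1[best] + x2, n1 + n2
    c, nc = c1 + c2, n1 + n2
    p_pool = (x + c) / (nx + nc)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / nx + 1 / nc))
    z = (x / nx - c / nc) / se
    return z > stats.norm.ppf(1 - alpha)

# Empirical familywise type I error under the global null (all doses = control).
# A naive pooled analysis after selecting the best dose usually exceeds alpha,
# motivating the selection-adjusted analyses discussed in the paper.
null_rate = np.mean([simulate_trial(np.array([0.2, 0.2, 0.2]), 0.2)
                     for _ in range(5000)])
print(f"empirical type I error: {null_rate:.3f}")
```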
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry does not
need any pre-training or special hardware such as GPUs, and is intended as a
drop-in replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
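The core loop that GPry automates can be sketched generically; the toy below uses scikit-learn rather than GPry's own API, and the target posterior, kernel, and acquisition rule are all assumptions made for illustration: fit a Gaussian Process to the log-posterior values seen so far, pick the next evaluation point by trading predicted posterior mass against surrogate uncertainty, evaluate the true log-posterior there, and repeat.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def log_posterior(x):
    """Toy banana-shaped 2D log-posterior (stand-in for an expensive likelihood)."""
    return -0.5 * (x[0] ** 2 + 5.0 * (x[1] - x[0] ** 2) ** 2)

bounds = np.array([[-3.0, 3.0], [-2.0, 6.0]])
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))   # initial design
y = np.array([log_posterior(x) for x in X])

gp = GaussianProcessRegressor(kernel=ConstantKernel(10.0) * RBF([1.0, 1.0]),
                              normalize_y=True)

for it in range(40):                       # active-learning loop
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    # Acquisition: favour points with high predicted posterior mass AND
    # high surrogate uncertainty (one of many possible choices).
    acq = mu + 2.0 * sigma
    x_next = cand[np.argmax(acq)]
    X = np.vstack([X, x_next])
    y = np.append(y, log_posterior(x_next))

print(f"log-posterior evaluations used: {len(y)}")
# The fitted `gp` can now stand in for the true log-posterior in an MCMC run.
```

The SVM classifier mentioned in the abstract would additionally screen out candidate points predicted to have extreme or non-finite log-posterior values before the acquisition step.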
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
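One illustrative reading of the deterministic $b$-scaling strategy (not the authors' exact pseudocode) is simulated below: in round $i$ every unfinished job is started from scratch and run for a budget $b^i$; if it does not complete within the budget it is killed, its progress is lost, and it is retried in the next round.

```python
from itertools import count

def b_scaling_total_completion_time(proc_times, weights, b=2.0):
    """Simulate a b-scaling kill-and-restart strategy on one machine.

    In round i = 0, 1, 2, ... each unfinished job is started from scratch and
    allotted a budget b**i; if its (unknown to the scheduler) processing time
    exceeds the budget, it is killed after b**i time units and retried in the
    next round. Returns the total weighted completion time."""
    jobs = list(range(len(proc_times)))
    t, objective = 0.0, 0.0
    unfinished = set(jobs)
    for i in count():
        budget = b ** i
        for j in jobs:
            if j not in unfinished:
                continue
            if proc_times[j] <= budget:      # job finishes within the budget
                t += proc_times[j]
                objective += weights[j] * t
                unfinished.discard(j)
            else:                            # kill after the budget is spent
                t += budget
        if not unfinished:
            return objective

# Example: compare against an (offline) optimal WSPT schedule.
p = [1.0, 3.0, 7.0, 2.5]
w = [1.0, 1.0, 1.0, 1.0]
print("b-scaling:", b_scaling_total_completion_time(p, w, b=2.0))
order = sorted(range(len(p)), key=lambda j: p[j] / w[j])  # WSPT order, optimal offline
t = opt = 0.0
for j in order:
    t += p[j]
    opt += w[j] * t
print("offline optimum:", opt)
```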
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and the effect of larger model sizes. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
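Setting up the frozen-backbone regime itself is straightforward; the minimal PyTorch sketch below uses a torchvision ResNet-50 as a stand-in backbone rather than SwinV2-G or the paper's adaptation modules: every pretrained weight is frozen and only a lightweight task head is trained.

```python
import torch
import torch.nn as nn
import torchvision

# Pretrained backbone, used as a frozen feature extractor.
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = nn.Identity()                 # expose 2048-d pooled features
for p in backbone.parameters():
    p.requires_grad = False                 # freeze every backbone parameter
backbone.eval()                             # keep BN statistics fixed

# Only this task-specific head receives gradient updates.
num_classes = 10
head = nn.Linear(2048, num_classes)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

def training_step(images, labels):
    with torch.no_grad():                   # no gradients through the frozen model
        feats = backbone(images)
    loss = nn.functional.cross_entropy(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch to show the interface.
print(training_step(torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,))))
```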
In this paper, we present a novel reconfigurable intelligent surface
(RIS)-assisted integrated sensing and communication (ISAC) system with 1-bit
quantization at the ISAC base station. An RIS is introduced in the ISAC system
to mitigate the effects of coarse quantization and to enable the co-existence
between sensing and communication functionalities. Specifically, we design
a transmit precoder to obtain 1-bit sensing waveforms having a desired radiation
pattern. The RIS phase shifts are then designed to modulate the 1-bit sensing
waveform to transmit M-ary phase shift keying symbols to users. Through
numerical simulations, we show that the proposed method offers significantly
improved symbol error probabilities when compared to MIMO communication systems
having quantized linear precoders, while still offering comparable sensing
performance as that of unquantized sensing systems.
Authors: R. S. Prasobh Sankar, Sundeep Prabhakar Chepuri.
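To give a flavour of the 1-bit constraint, the generic numpy example below quantizes a conventional beamformer to 1 bit per real and imaginary part and compares the resulting radiation pattern with the unquantized one; the array size, target angle, and normalization are assumptions, and this is not the authors' precoder or RIS design.

```python
import numpy as np

N = 32                                       # transmit antennas (half-wavelength ULA)
theta0 = np.deg2rad(20.0)                    # desired sensing direction

def steering(theta):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

# Unquantized beamformer pointing at theta0, and its 1-bit counterpart:
# each entry's real and imaginary parts are quantized to +-1, as would be
# enforced by 1-bit DACs.
w = steering(theta0)
w_1bit = (np.sign(w.real) + 1j * np.sign(w.imag)) / np.sqrt(2 * N)

angles = np.deg2rad(np.linspace(-90, 90, 721))
A = np.stack([steering(t) for t in angles])          # array responses
pattern = 20 * np.log10(np.abs(A.conj() @ w) + 1e-12)
pattern_1bit = 20 * np.log10(np.abs(A.conj() @ w_1bit) + 1e-12)

k = np.argmin(np.abs(angles - theta0))
print(f"gain toward target: ideal {pattern[k]:.2f} dB, 1-bit {pattern_1bit[k]:.2f} dB")
```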
In this work, we study the effect of a moving detector on a discrete time one
dimensional Quantum Random Walk where the movement is realized in the form of
hopping/shifts. The occupation probability $f(x,t;n,s)$ is estimated as the
number of detections $n$ and the number of shifts $s$ vary. It is seen that the
occupation probability at the initial position $x_D$ of the detector is
enhanced when $n$ is small, which is a quantum mechanical effect, but decreases
when $n$ is large. The ratio of the occupation probabilities of our walk to
that of an Infinite Walk shows a scaling behavior of $\frac{1}{n^2}$, and it
also shows a definite scaling behavior with the number of shifts $s$. The
limiting behaviours of the walk are observed when $x_D$ is large, $n$ is large,
or $s$ is large; in these cases the walker approaches the Infinite Walk, the
Semi-Infinite Walk, and the Quenched Quantum Walk, respectively.
Authors: Md Aquib Molla, Sanchari Goswami.
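A minimal simulation conveys the mechanics of the setting: the sketch below runs a generic Hadamard-coin discrete-time quantum walk on a line with a projective detector at a fixed site $x_D$ applied every few steps; the detector's hopping and the paper's specific parameters are not modelled.

```python
import numpy as np

L = 101                      # lattice sites, walker starts at the centre
steps, x_D, tau = 50, 10, 5  # total steps, detector offset, detect every tau steps
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# State psi[x, c]: amplitude at site x with coin state c in {0 (left), 1 (right)}.
psi = np.zeros((L, 2), dtype=complex)
psi[L // 2] = np.array([1, 1j]) / np.sqrt(2)   # symmetric initial coin state

p_detect = 0.0               # accumulated probability that the detector has fired
for t in range(1, steps + 1):
    psi = psi @ H.T                            # coin toss at every site
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]               # coin 0 moves left
    shifted[1:, 1] = psi[:-1, 1]               # coin 1 moves right
    psi = shifted
    if t % tau == 0:                           # projective detection at x_D
        site = L // 2 + x_D
        p_here = np.sum(np.abs(psi[site]) ** 2)
        p_detect += (1 - p_detect) * p_here
        psi[site] = 0                          # walker not found: project out
        psi /= np.linalg.norm(psi)             # renormalize conditional state

print(f"probability the detector has fired after {steps} steps: {p_detect:.4f}")
```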
Simultaneous Localization & Mapping (SLAM) is the process of building a
mutual relationship between localization and mapping of the subject in its
surrounding environment. With the help of different sensors, various types of
SLAM systems have been developed to deal with the problem of building the
relationship between localization and mapping. A limitation in the SLAM process
is the lack of consideration of dynamic objects in the mapping of the
environment. We propose the Dynamic Object Tracking SLAM (DyOb-SLAM), which is
a Visual SLAM system that can localize and map the surrounding dynamic objects
in the environment as well as track the dynamic objects in each frame. With the
help of a neural network and a dense optical flow algorithm, dynamic objects
and static objects in an environment can be differentiated. DyOb-SLAM creates
two separate maps: a sparse map for the static features and a global trajectory
map for the dynamic contents. As a result, a frame-to-frame, real-time dynamic
object tracking system is obtained. With the pose calculation of the
dynamic objects and camera, DyOb-SLAM can estimate the speed of the dynamic
objects over time. The performance of DyOb-SLAM is evaluated by comparing it
with a similar Visual SLAM system, VDO-SLAM, and is measured by calculating the
camera and object pose errors as well as the object speed error.
Authors: Rushmian Annoy Wadud, Wei Sun.
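The static/dynamic separation step can be approximated with off-the-shelf tools; the rough OpenCV sketch below is not DyOb-SLAM's actual pipeline (which combines a neural network with dense optical flow), but it illustrates the idea: pixels whose flow deviates from the dominant camera-induced motion by more than a threshold are flagged as dynamic.

```python
import cv2
import numpy as np

def dynamic_mask(prev_bgr, curr_bgr, thresh=2.0):
    """Return a boolean mask of likely dynamic pixels between two frames.
    Camera-induced motion is crudely approximated by the median flow;
    pixels whose flow deviates from it by more than `thresh` px are dynamic."""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    background = np.median(flow.reshape(-1, 2), axis=0)   # dominant (camera) motion
    residual = np.linalg.norm(flow - background, axis=2)  # per-pixel residual flow
    return residual > thresh

# Example with synthetic frames (real use: consecutive video frames).
prev = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
curr = np.roll(prev, 3, axis=1)                 # crude stand-in for a camera pan
print(dynamic_mask(prev, curr).mean())          # fraction of pixels flagged dynamic
# Static pixels would feed the sparse map; dynamic ones the object trajectories.
```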
Gibbsian structure in random point fields has been a classical tool for
studying their spatial properties. However, an exact Gibbs property is available
only in a relatively limited class of models, and it does not adequately
address many random fields with a strongly dependent spatial structure. In this
work, we provide a general framework for approximate Gibbsian structure for
strongly correlated random point fields. These include processes that exhibit
strong spatial rigidity, in particular, a certain one-parameter family of
analytic Gaussian zero point fields, namely the $\alpha$-GAFs. Our framework
entails conditions that may be verified via finite particle approximations to
the process, a phenomenon that we call an approximate Gibbs property. We show
that these enable one to compare the spatial conditional measures in the
infinite volume limit with Gibbs-type densities supported on appropriate
singular manifolds, a phenomenon we refer to as a generalized Gibbs property.
We demonstrate the scope of our approach by showing that a generalized Gibbs
property holds with a logarithmic pair potential for the $\alpha$-GAFs for any
value of $\alpha$. This establishes the level of rigidity of the $\alpha$-GAF
zero process to be exactly $\lfloor \frac{1}{\alpha} \rfloor$, settling in the
affirmative an open question regarding the existence of point processes with
any specified level of rigidity. For processes such as the zeros of
$\alpha$-GAFs, which involve complex, many-body interactions, our results imply
that the local behaviour of the random points still exhibits 2D Coulomb-type
repulsion in the short range. Our techniques can be leveraged to estimate the
relative energies of configurations under local perturbations, with possible
implications for dynamics and stochastic geometry on strongly correlated random
point fields.
Authors: Ujan Gangopadhyay, Subhroshekhar Ghosh.
We study the problem of model selection in causal inference, specifically for
the case of conditional average treatment effect (CATE) estimation under binary
treatments. Unlike model selection in machine learning, we cannot use the
technique of cross-validation here as we do not observe the counterfactual
potential outcome for any data point. Hence, we need to design model selection
techniques that do not explicitly rely on counterfactual data. As an
alternative to cross-validation, there have been a variety of proxy metrics
proposed in the literature that depend on auxiliary nuisance models also
estimated from the data (propensity score model, outcome regression model).
However, the effectiveness of these metrics has only been studied on synthetic
datasets as we can observe the counterfactual data for them. We conduct an
extensive empirical analysis to judge the performance of these metrics, where
we utilize the latest advances in generative modeling to incorporate multiple
realistic datasets. We evaluate 9 metrics on 144 datasets for selecting between
415 estimators per dataset, including datasets that closely mimic real-world
datasets. Further, we use the latest techniques from AutoML to ensure
consistent hyperparameter selection for nuisance models for a fair comparison
across metrics.
Authors: Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, Vasilis Syrgkanis.
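One widely used proxy metric of this kind, a doubly robust validation score, can be sketched as follows; this is a generic illustration with plain scikit-learn nuisance models, not necessarily one of the 9 metrics exactly as the authors implement them: a pseudo-outcome built from the fitted propensity and outcome models stands in for the unobservable individual treatment effect, and candidate CATE estimators are ranked by their mean squared error against it.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def dr_score(cate_pred, X, T, Y):
    """Doubly robust proxy score for CATE model selection: lower is better.
    cate_pred: a candidate estimator's CATE predictions on validation data X.
    Nuisance models (propensity e(x), outcome models mu0/mu1) are fit here
    on the same data for brevity; in practice use cross-fitting."""
    e = GradientBoostingClassifier().fit(X, T).predict_proba(X)[:, 1]
    e = np.clip(e, 0.01, 0.99)                        # avoid extreme weights
    mu0 = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0]).predict(X)
    mu1 = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1]).predict(X)
    mu_t = np.where(T == 1, mu1, mu0)
    # DR pseudo-outcome: unbiased (given correct nuisances) for the true CATE.
    pseudo = mu1 - mu0 + (T - e) / (e * (1 - e)) * (Y - mu_t)
    return np.mean((pseudo - cate_pred) ** 2)

# Usage: compute dr_score for each candidate CATE estimator on a held-out
# split and select the estimator with the smallest score.
```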
The DNA methylation process has been extensively studied for its role in
cancer. Promoter cytosine-guanine dinucleotide (CpG) island hypermethylation
has been shown to silence tumour suppressor genes. Identifying the
differentially methylated CpG (DMC) sites between benign and tumour samples can
help understand the disease. The EPIC microarray quantifies the methylation
level at a CpG site as a beta value which lies within [0,1). There is a lack of
suitable methods for modelling the beta values in their innate form. DMCs are
typically identified via multiple t-tests, but this can be computationally expensive.
Also, arbitrary thresholds are often selected and used to identify the
methylation state of a CpG site. We propose a family of novel beta mixture
models (BMMs) which use a model-based clustering approach to cluster the CpG
sites in their innate beta form to (i) objectively identify methylation state
thresholds and (ii) identify the DMCs between different samples. The family of
BMMs employs different parameter constraints that are applicable to different
study settings. Parameter estimation proceeds via an EM algorithm, with a novel
approximation during the M-step providing tractability and computational
feasibility. Performance of the BMMs is assessed through a thorough simulation
study, and the BMMs are used to analyse a prostate cancer dataset and an
esophageal squamous cell carcinoma dataset. The BMM approach objectively
identifies methylation state thresholds and identifies more DMCs between the
benign and tumour samples in both cancer datasets than conventional methods, in
a computationally efficient manner. The empirical cumulative distribution
function of the DMCs related to genes implicated in carcinogenesis indicates
hypermethylation of CpG sites in the tumour samples in both cancer settings. An
R package betaclust is provided to facilitate the use of the developed BMMs.
Authors: Koyel Majumdar, Romina Silva, Antoinette Sabrina Perry, Ronald William Watson, Thomas Brendan Murphy, Isobel Claire Gormley.
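The clustering idea can be sketched with a small EM routine for a $K$-component beta mixture; this is a generic Python illustration (the actual BMMs, their parameter constraints, and the M-step approximation live in the authors' R package betaclust), and the weighted method-of-moments M-step below is just one convenient stand-in for exact maximum likelihood.

```python
import numpy as np
from scipy.stats import beta

def fit_beta_mixture(x, K=3, n_iter=200, seed=0):
    """EM for a K-component beta mixture on beta values x in (0, 1).
    E-step: posterior component memberships (responsibilities).
    M-step: mixing weights by averaging responsibilities; shape parameters
    by weighted method of moments (an approximation in place of exact MLE)."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(1, 5, K)
    b = rng.uniform(1, 5, K)
    pi = np.full(K, 1 / K)
    x = np.clip(x, 1e-6, 1 - 1e-6)
    for _ in range(n_iter):
        # E-step: responsibilities proportional to pi_k * Beta(x; a_k, b_k).
        dens = np.stack([pi[k] * beta.pdf(x, a[k], b[k]) for k in range(K)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step
        pi = resp.mean(axis=1)
        for k in range(K):
            w = resp[k] / resp[k].sum()
            m = np.sum(w * x)                       # weighted mean
            v = np.sum(w * (x - m) ** 2) + 1e-10    # weighted variance
            common = max(m * (1 - m) / v - 1, 1e-3)
            a[k], b[k] = m * common, (1 - m) * common
    return pi, a, b, resp

# Toy data mimicking hypo-, hemi- and hyper-methylated CpG sites.
rng = np.random.default_rng(1)
x = np.concatenate([rng.beta(2, 20, 500), rng.beta(8, 8, 300), rng.beta(20, 2, 400)])
pi, a, b, resp = fit_beta_mixture(x, K=3)
print("mixing weights:", np.round(pi, 2))
print("component means:", np.round(a / (a + b), 2))
```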
The author defined for each (commutative) Frobenius algebra a skein module of
surfaces in a $3$-manifold $M$ bounding a closed $1$-manifold $\alpha \subset
\partial M$. The surface components are colored by elements of the Frobenius
algebra. The modules are called the Bar-Natan modules of $(M,\alpha )$. In this
article we show that Bar-Natan modules are colimit modules of functors
associated to Frobenius algebras, decoupling topology from algebra. The
functors are defined on a category of $3$-dimensional compression bordisms
embedded in cylinders over $M$ and take values in a linear category defined
from the Frobenius algebra. The relation with the $1+1$-dimensional topological
quantum field theory functor associated to the Frobenius algebra is studied. We
show that the geometric content of the skein modules is contained in a
tunneling graph of $(M,\alpha )$, providing a natural presentation of the
Bar-Natan module by application of the functor defined from the algebra. Such
presentations have essentially been stated previously in work by the author and
Asaeda-Frohman using ad hoc arguments, but they appear naturally against the
background of the Bar-Natan functor and associated categorical considerations.
In the categorical setting, we discuss general presentations of colimit modules
for functors into module categories in terms of minimal terminal sets of
objects of the category. We also introduce a $2$-category version of the
Bar-Natan functor, and thereby, in some sense, of the Bar-Natan modules of
$(M,\alpha )$.
Authors: Uwe Kaiser.
Scientists collaborate through intricate networks, which impact the quality
and scope of their research. At the same time, funding and institutional
arrangements, as well as scientific and political cultures, affect the
structure of collaboration networks. Since such arrangements and cultures
differ across regions in the world in systematic ways, we surmise that
collaboration networks and impact should also differ systematically across
regions. To test this, we compare the structure of collaboration networks among
prominent researchers in North America and Europe. We find that prominent
researchers in Europe establish denser collaboration networks, whereas those in
North America establish more decentralized networks. We also find that the
impact of the publications of prominent researchers in North America is
significantly higher than for those in Europe, both when they collaborate with
other prominent researchers and when they do not. Although Europeans
collaborate with other prominent researchers more often, which increases their
impact, we also find that repeated collaboration among prominent researchers
decreases the synergistic effect of collaborating.
Authors: Lluis Danus, Carles Muntaner, Alexander Krauss, Marta Sales-Pardo, Roger Guimera.
A large number of studies, all using Bayesian parameter inference from Markov
Chain Monte Carlo methods, have constrained the presence of a decaying dark
matter component. All such studies find a strong preference for either very
long-lived or very short-lived dark matter. However, in this letter, we
demonstrate that this preference is due to parameter volume effects that drive
the model towards the standard $\Lambda$CDM model, which is known to provide a
good fit to most observational data.
Using profile likelihoods, which are free from volume effects, we instead
find that the best-fitting parameters are associated with an intermediate
regime where around $3 \%$ of cold dark matter decays just prior to
recombination. With two additional parameters, the model yields an overall
preference over the $\Lambda$CDM model of $\Delta \chi^2 \approx -2.8$ with
\textit{Planck} and BAO and $\Delta \chi^2 \approx -7.8$ with the SH0ES $H_0$
measurement, while only slightly alleviating the $H_0$ tension. Ultimately, our
results reveal that decaying dark matter is more viable than previously
assumed, and illustrate the dangers of relying exclusively on Bayesian
parameter inference when analysing extensions to the $\Lambda$CDM model.
Authors: Emil Brinch Holm, Laura Herold, Steen Hannestad, Andreas Nygaard, Thomas Tram.
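The difference between marginalizing and profiling over a nuisance direction can be seen in a deliberately constructed toy model; it is purely illustrative and not the decaying-dark-matter likelihood: the good fit lives in a narrow sliver of the nuisance parameter, so marginalization penalizes it by the small prior volume it occupies, while the profile likelihood does not.

```python
import numpy as np

# Toy likelihood with signal fraction f and nuisance tau in [0, 10]:
# observable 1 ~ f * g(tau) with a narrow feature g, observable 2 ~ f * (1 - g(tau)).
# The data mildly prefer f = 0.3 with tau tuned near 5; away from tau = 5 a
# nonzero f is penalized, so the well-fitting region has tiny tau-volume.
def loglike(f, tau):
    g = np.exp(-((tau - 5.0) ** 2) / (2 * 0.02 ** 2))
    chi2 = ((f * g - 0.3) / 0.15) ** 2 + ((f * (1 - g)) / 0.05) ** 2
    return -0.5 * chi2

f_grid = np.linspace(0.0, 0.6, 241)
tau_grid = np.linspace(0.0, 10.0, 4001)
F, TAU = np.meshgrid(f_grid, tau_grid, indexing="ij")
L = np.exp(loglike(F, TAU))

marginal = L.sum(axis=1)          # flat prior on tau: integrate it out
profile = L.max(axis=1)           # profile likelihood: maximize over tau

print(f"marginal posterior peaks at f = {f_grid[np.argmax(marginal)]:.3f}")
print(f"profile likelihood peaks at f = {f_grid[np.argmax(profile)]:.3f}")
# The marginal is pulled toward f = 0 by the large tau-volume there (a volume
# effect), whereas the profile recovers the better-fitting nonzero f.
```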
The heat capacity $\mathcal{C}$ of a given probe is a fundamental quantity
that determines, among other properties, the maximum precision in temperature
estimation. In turn, $\mathcal{C}$ is limited by a quadratic scaling with the
number of constituents of the probe, which provides a fundamental limit in
quantum thermometry. Achieving this fundamental bound with realistic probes,
i.e. experimentally amenable ones, remains an open problem. In this work, we exploit
machine-learning techniques to discover optimal spin-network thermal probes,
restricting ourselves to two-body interactions. This leads to simple
architectures, which we show analytically to approximate the theoretical
maximal value of $\mathcal{C}$ and maintain the optimal scaling for short- and
long-range interactions. Our models can be encoded in currently available
quantum annealers, and find application in other tasks requiring Hamiltonian
engineering, ranging from quantum heat engines to adiabatic Grover's search.
Authors: Paolo Abiuso, Paolo Andrea Erdman, Michael Ronen, Frank Noé, Géraldine Haack, Martí Perarnau-Llobet.
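The role of the heat capacity can be made concrete for a small probe; the numpy example below uses an arbitrary two-qubit Hamiltonian with a two-body coupling, not the paper's machine-learned spin networks. From the spectrum one obtains $\mathcal{C}(T) = \mathrm{Var}(H)/T^2$ (with $k_B = 1$), and for a probe at thermal equilibrium the temperature Fisher information is $F_T = \mathcal{C}/T^2$, so the Cramér-Rao bound gives $\Delta T/T \geq 1/\sqrt{\mathcal{C}}$ per measurement.

```python
import numpy as np

# Pauli matrices and a two-qubit probe with a two-body ZZ coupling and local fields.
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
kron = np.kron
H = (1.0 * kron(sz, sz)                       # two-body interaction
     + 0.5 * (kron(sx, I2) + kron(I2, sx)))   # local transverse fields

E = np.linalg.eigvalsh(H)                     # probe spectrum

def heat_capacity(T):
    """C(T) = Var_thermal(H) / T^2 with k_B = 1, from the Gibbs distribution."""
    w = np.exp(-(E - E.min()) / T)            # shifted for numerical stability
    p = w / w.sum()
    varE = np.sum(p * E ** 2) - np.sum(p * E) ** 2
    return varE / T ** 2

for T in (0.2, 0.5, 1.0, 2.0):
    C = heat_capacity(T)
    # Cramer-Rao: a single measurement cannot beat (dT)^2 >= T^2 / C.
    print(f"T = {T:3.1f}:  C = {C:.3f},  best relative error dT/T >= {1/np.sqrt(C):.3f}")
```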