Papers made digestible
Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications.
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
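To make the planner idea concrete, here is a minimal sketch of an edge-cost metric in a curvilinear (station, lateral-offset) frame that penalizes lateral deviation from the taught path. The functional form and the weight `w_lat` are illustrative assumptions, not the paper's exact metric.

```python
import math

def edge_cost(s0, l0, s1, l1, w_lat=2.0):
    """Illustrative edge cost in a curvilinear frame (station s along the
    taught path, lateral offset l from it): approximate arc length plus a
    penalty on lateral deviation, so the planner prefers edges that stay
    close to the prior path. The form and weight are assumptions, not the
    paper's exact metric."""
    ds, dl = s1 - s0, l1 - l0
    length = math.hypot(ds, dl)              # edge length in (s, l)
    lateral = 0.5 * abs(l0 + l1)             # mean |offset| along the edge
    return length * (1.0 + w_lat * lateral)  # deviation-weighted length

# An edge hugging the taught path beats one that swings wide around it.
print(edge_cost(0.0, 0.0, 1.0, 0.0))   # 1.0
print(edge_cost(0.0, 0.0, 1.0, 0.8))   # ~2.31, penalized for deviating
```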
The statistical and design considerations that pertain to
dose optimization are discussed. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%.
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development and to call for more
attention to benefit-risk considerations.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulations show that phase 2-3
dose-optimization designs are able to control the familywise type I error
rates and yield appropriate statistical power with a substantially smaller
sample size than the conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
Authors: Liyun Jiang, Ying Yuan.
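A rough Monte Carlo sketch of how such a seamless design can be evaluated: stage 1 selects a dose, stage 2 tests it against a concurrent control. Testing on stage-2 data only keeps the type I error controlled despite the interim selection; the sample sizes, response rates, and z-test below are illustrative assumptions, not the paper's four designs.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(p_doses, p_ctrl, n1=30, n2=80):
    """One illustrative seamless phase 2-3 trial (not one of the paper's four
    designs). Stage 1: randomize n1 patients per dose and select the dose with
    the best observed response rate. Stage 2: randomize n2 per arm to the
    selected dose and a concurrent control; test on stage-2 data only, which
    keeps the familywise type I error controlled despite the interim selection."""
    stage1 = [rng.binomial(n1, p) / n1 for p in p_doses]
    best = int(np.argmax(stage1))                 # interim dose selection
    x_t = rng.binomial(n2, p_doses[best]) / n2    # stage-2 selected-dose arm
    x_c = rng.binomial(n2, p_ctrl) / n2           # stage-2 control arm
    pbar = (x_t + x_c) / 2.0
    se = np.sqrt(2.0 * pbar * (1.0 - pbar) / n2) + 1e-12
    return (x_t - x_c) / se > 1.96                # one-sided alpha ~ 0.025

null_rej = np.mean([simulate_trial([0.2, 0.2, 0.2], 0.2) for _ in range(5000)])
power = np.mean([simulate_trial([0.2, 0.3, 0.4], 0.2) for _ in range(5000)])
print(f"familywise type I error ~ {null_rej:.3f}, power ~ {power:.3f}")
```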
We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples.
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry needs no
pre-training or special hardware such as GPUs, and is intended as a
drop-in replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
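The core loop behind such a method can be sketched in a few lines with scikit-learn: fit a GP to log-posterior evaluations, pick the next point by an acquisition rule, repeat. This illustrates the general idea only, not GPry's actual API (see the linked repository for that); the toy target, kernel, and acquisition rule are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def log_post(x):  # expensive target: a 2D Gaussian posterior stand-in
    return multivariate_normal(mean=[0.3, -0.2],
                               cov=[[1.0, 0.4], [0.4, 0.8]]).logpdf(x)

rng = np.random.default_rng(0)
bounds = np.array([[-4.0, 4.0], [-4.0, 4.0]])
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))   # initial design
y = np.array([log_post(x) for x in X])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True)
for _ in range(25):                          # active-learning loop
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    acq = mu + 2.0 * sigma                   # "high value or high uncertainty"
    x_new = cand[np.argmax(acq)]
    X = np.vstack([X, x_new])
    y = np.append(y, log_post(x_new))        # only ~30 true evaluations total

print("surrogate log-post at the true mode:", gp.predict([[0.3, -0.2]])[0])
print("true log-post at the mode:          ", log_post([0.3, -0.2]))
```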
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting. However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version.
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
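A small simulation of the deterministic $b$-scaling kill-and-restart strategy, as described in the abstract: in round $q$, every unfinished job is run from scratch for a budget $b^q$ and killed if it does not finish. The tie-breaking order and the exact budget schedule are assumptions.

```python
def b_scaling_completion_times(p, b=2.0):
    """Simulate b-scaling kill-and-restart (unweighted, no release dates):
    in round q every unfinished job runs from scratch for a budget b**q and
    is killed if it does not finish. Illustrative reconstruction; details
    such as tie-breaking may differ from the paper."""
    jobs = list(range(len(p)))
    t, q, completion = 0.0, 0, {}
    while len(completion) < len(p):
        budget = b ** q
        for j in jobs:
            if j in completion:
                continue
            if p[j] <= budget:      # job finishes within this round's budget
                t += p[j]
                completion[j] = t
            else:                   # killed after the full budget; retry later
                t += budget
        q += 1
    return [completion[j] for j in jobs]

p = [1.0, 3.0, 7.0, 2.0]
C = b_scaling_completion_times(p)
opt = [sum(sorted(p)[: i + 1]) for i in range(len(p))]  # SPT is optimal offline
print("b-scaling sum of completion times:", sum(C))     # 48.0
print("optimal (SPT) sum:                ", sum(opt))   # 23.0
```

On this instance the strategy is within a factor of about 2.1 of the offline optimum, comfortably inside the proven $(1+3\sqrt{3})$ guarantee.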
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and the effect of larger model sizes. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
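The frozen setting itself is easy to reproduce. A minimal PyTorch sketch, with a ResNet-50 standing in for the paper's Swin backbones: freeze every backbone parameter and train only a small task head on its features.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# Freeze a pretrained image model; only a light head is trained.
backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()               # expose 2048-d pooled features
for p in backbone.parameters():
    p.requires_grad = False               # backbone stays frozen
backbone.eval()                           # freeze batch-norm statistics too

head = nn.Linear(2048, 10)                # the only trainable parameters
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(4, 3, 224, 224)           # dummy batch
y = torch.randint(0, 10, (4,))
with torch.no_grad():                     # no gradients through the backbone
    feats = backbone(x)
loss = nn.functional.cross_entropy(head(feats), y)
loss.backward()
opt.step()
print("trainable parameters:", sum(p.numel() for p in head.parameters()))
```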
The radius of the circular hole is given by
$W_{\rm IF} \sim 3.8\,{\rm pc}\; [Z_{\rm lay}/0.1\,{\rm pc}]^{-1/6}\,
[n_{\rm lay}/10^{4}\,{\rm cm}^{-3}]^{-1/3}\,
[\dot{N}_{\rm LyC}/10^{49}\,{\rm s}^{-1}]^{1/6}\, [t/{\rm Myr}]^{2/3}$. We suggest that our model might be a useful zeroth-order representation
of many observed HIIRs. From intermediate
viewing angles more complicated morphologies can be expected.
We develop a simple analytic model for what happens when an O star (or
compact cluster of OB stars) forms in a shock compressed layer and carves out
an approximately circular hole in the layer, at the waist of a bipolar HII
Region (HIIR). The model is characterised by three parameters: the
half-thickness of the undisturbed layer, $Z_{\rm lay}$; the mean
number-density of hydrogen molecules in the undisturbed layer, $n_{\rm lay}$;
and the (collective) ionising output of the star(s), $\dot{N}_{\rm LyC}$. The
radius of the circular hole is given by
$W_{\rm IF} \sim 3.8\,{\rm pc}\; [Z_{\rm lay}/0.1\,{\rm pc}]^{-1/6}\,
[n_{\rm lay}/10^{4}\,{\rm cm}^{-3}]^{-1/3}\,
[\dot{N}_{\rm LyC}/10^{49}\,{\rm s}^{-1}]^{1/6}\, [t/{\rm Myr}]^{2/3}$.
Similar power-law expressions are
obtained for the rate at which ionised gas is fed into the bipolar lobes; the
rate at which molecular gas is swept up into a dense ring by the shock front
(SF) that precedes the ionisation front (IF); and the density in this dense
ring. We suggest that our model might be a useful zeroth-order representation
of many observed HIIRs. From viewing directions close to the midplane of the
layer, the HIIR will appear bipolar. From viewing directions approximately
normal to the layer, it will appear as a limb-brightened shell, but one too
faint through the centre to be a spherically symmetric bubble. From
intermediate viewing angles, more complicated morphologies can be expected.
Authors: Anthony Whitworth, Felix Priestley, Samuel Geen.
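The scaling relation is straightforward to evaluate; a small helper (argument names are mine, defaults are the fiducial values from the abstract):

```python
def w_if_pc(z_lay_pc=0.1, n_lay=1e4, ndot_lyc=1e49, t_myr=1.0):
    """Radius of the circular hole in pc, per the scaling relation above."""
    return (3.8 * (z_lay_pc / 0.1) ** (-1 / 6)
                * (n_lay / 1e4) ** (-1 / 3)
                * (ndot_lyc / 1e49) ** (1 / 6)
                * t_myr ** (2 / 3))

print(w_if_pc())            # 3.8 pc at the fiducial parameters after 1 Myr
print(w_if_pc(n_lay=1e5))   # ~1.76 pc: a 10x denser layer shrinks the hole
```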
The realization of chiral Majorana modes is a challenging task. We show that both the superconducting proximity effect and the tunability of the chemical potential for the topological surface states are significantly influenced by the gate-induced electrostatic potential.
The realization of chiral Majorana modes is a challenging task. We aim to
comprehend the phase diagrams and parameter control capabilities of the actual
devices used in the chiral Majorana search. Beyond the well-known minimal
models, we develop a numerical simulation scheme using a self-consistent
Schrödinger-Poisson approach to study, as an example, the MnBi2Te4 thin film
coupled to an s-wave superconductor. We show that both the superconducting
proximity effect and the tunability of the chemical potential for the
topological surface states are significantly influenced by the gate-induced
electrostatic potential. This complicates the implementation in experiments,
and the actual topological region is significantly narrower than those
predicted by the previous minimal models. Nevertheless, we demonstrate that the
chiral Majorana mode still exists in a wide range of experimental parameters
with practical tunability.
Authors: Li Chen, Zhan Cao, Ke He, Xin Liu, Dong E. Liu.
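For readers unfamiliar with the machinery, here is a toy 1D self-consistent Schrödinger-Poisson loop in the spirit of the scheme described above. The units, gate potential, and coupling are arbitrary assumptions; the paper's actual model of a MnBi2Te4 film proximitized by a superconductor is far richer.

```python
import numpy as np

# Toy 1D self-consistent Schrödinger-Poisson iteration (all values assumed).
N, L = 200, 50.0
dx = L / (N - 1)
x = np.linspace(0.0, L, N)
kin = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
       - np.diag(np.ones(N - 1), -1)) / (2.0 * dx**2)   # -(1/2) d^2/dx^2

v_gate = -0.05 * x / L            # electrostatic tilt from a gate (assumed)
coupling, n_states = 0.01, 4      # Poisson coupling, number of filled states
phi = np.zeros(N)                 # self-consistent Hartree potential

for it in range(100):
    E, psi = np.linalg.eigh(kin + np.diag(v_gate + phi))
    dens = (np.abs(psi[:, :n_states]) ** 2).sum(axis=1) / dx  # electron density
    # Poisson step: phi'' = -coupling * dens, with phi = 0 at both boundaries.
    lap = (np.diag(np.full(N - 2, -2.0)) + np.diag(np.ones(N - 3), 1)
           + np.diag(np.ones(N - 3), -1))
    phi_new = np.zeros(N)
    phi_new[1:-1] = np.linalg.solve(lap, -coupling * dens[1:-1] * dx**2)
    res = np.max(np.abs(phi_new - phi))
    phi = 0.5 * phi + 0.5 * phi_new           # mixing stabilizes the iteration
    if res < 1e-9:
        break

print(f"stopped after {it + 1} iterations (residual {res:.1e})")
print("lowest levels:", np.round(E[:3], 4))
```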
Using the tools of nonlinear optimal
control theory, we show that a surprisingly simple closed-form feedback control
law solves this minimum-time escape problem, and that the minimum-time paths
have an elegant geometric interpretation.
We investigate the problem of finding paths that enable a robot modeled as a
Dubins car (i.e., a constant-speed finite-turn-rate unicycle) to escape from a
circular region of space in minimum time. This minimum-time escape problem
arises in marine, aerial, and ground robotics in situations where a safety
region has been violated and must be exited before a potential negative
consequence occurs (e.g., a collision). Using the tools of nonlinear optimal
control theory, we show that a surprisingly simple closed-form feedback control
law solves this minimum-time escape problem, and that the minimum-time paths
have an elegant geometric interpretation.
Authors: Timothy L. Molloy, Iman Shames.
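A quick simulation of the setup. The steering rule below (saturated heading control toward the outward radial direction) is a plausible stand-in, not the closed-form law derived in the paper.

```python
import math

def escape_time(x, y, theta, v=1.0, omega_max=1.0, R=5.0, dt=1e-3):
    """Integrate Dubins-car dynamics until the car leaves the disc of radius R.
    The steering rule here is an assumed placeholder, NOT the paper's law."""
    t = 0.0
    while math.hypot(x, y) < R:
        radial = math.atan2(y, x)                        # outward direction
        err = math.atan2(math.sin(radial - theta), math.cos(radial - theta))
        u = max(-omega_max, min(omega_max, 10.0 * err))  # bounded turn rate
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += u * dt
        t += dt
    return t

# From the centre, every heading is radially outward: escape takes ~R/v.
print(round(escape_time(0.0, 0.0, 0.0), 2))        # ~5.0
# Near the boundary but pointing inward: extra time is spent turning around.
print(round(escape_time(4.0, 0.0, math.pi), 2))
```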
This paper introduces the steps involved in creating this corpus, including the choice of speech-related parameters and speech lists as well as the recording technique.
In this study, we present a speech corpus of patients with chronic kidney
disease (CKD) that will be used for research on pathological voice analysis,
automatic illness identification, and severity prediction. This paper
introduces the steps involved in creating this corpus, including the choice of
speech-related parameters and speech lists as well as the recording technique.
The speakers in this corpus, 289 CKD patients with varying degrees of severity
who were categorized based on estimated glomerular filtration rate (eGFR),
produced sustained vowel, sentence, and paragraph stimuli. This study
compared and analyzed the voice characteristics of CKD patients with those of
the control group; the results revealed differences in voice quality,
phoneme-level pronunciation, prosody, glottal source, and aerodynamic
parameters.
Authors: Jihyun Mun, Sunhee Kim, Myeong Ju Kim, Jiwon Ryu, Sejoong Kim, Minhwa Chung.
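For context, severity grouping by eGFR typically follows the standard KDIGO stages; a tiny helper (the paper's exact grouping may differ):

```python
def ckd_stage(egfr):
    """Map eGFR (mL/min/1.73 m^2) to the standard KDIGO CKD stage."""
    for stage, cutoff in [("G1", 90), ("G2", 60), ("G3a", 45),
                          ("G3b", 30), ("G4", 15)]:
        if egfr >= cutoff:
            return stage
    return "G5"

print(ckd_stage(75))   # G2
print(ckd_stage(12))   # G5
```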
Acoustic-based fault detection has a high potential to monitor the health
condition of mechanical parts. Limited attention has been paid to improving the robustness of fault detection
against industrial environmental noise. An acoustic array is used to acquire
data from motors that are healthy or have a minor or major fault. A benchmark
is provided to
compare the psychoacoustic features with different types of envelope features
based on expert knowledge of the gearbox. The
best-performing approaches achieve an area under the curve of 0.87 (logarithm
envelope), 0.86 (time-varying psychoacoustics), and 0.91 (combination of both).
Acoustic-based fault detection has a high potential to monitor the health
condition of mechanical parts. However, the background noise of an industrial
environment may negatively influence the performance of fault detection.
Limited attention has been paid to improving the robustness of fault detection
against industrial environmental noise. Therefore, we present the Lenze
production background-noise (LPBN) real-world dataset and an automated and
noise-robust auditory inspection (ARAI) system for the end-of-line inspection
of geared motors. An acoustic array is used to acquire data from motors that
are healthy or have a minor or major fault. A benchmark is provided to
compare the psychoacoustic features with different types of envelope features
based on expert knowledge of the gearbox. To the best of our knowledge, we are
the first to apply time-varying psychoacoustic features for fault detection. We
train a state-of-the-art one-class classifier on samples from healthy motors
and separate the faulty ones using a threshold. The best-performing
approaches achieve an area under the curve of 0.87 (logarithm
envelope), 0.86 (time-varying psychoacoustics), and 0.91 (combination of both).
Authors: Peter Wißbrock, Yvonne Richter, David Pelkmann, Zhao Ren, Gregory Palmer.
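The detection pipeline can be sketched end to end with scikit-learn: train a one-class classifier on healthy samples only, then threshold its scores. The synthetic Gaussian features below merely stand in for the real envelope and psychoacoustic features.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
healthy_train = rng.normal(0.0, 1.0, size=(200, 8))
healthy_test = rng.normal(0.0, 1.0, size=(100, 8))
faulty_test = rng.normal(1.5, 1.2, size=(100, 8))  # shifted feature statistics

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(healthy_train)
scores = clf.decision_function(np.vstack([healthy_test, faulty_test]))
labels = np.r_[np.zeros(100), np.ones(100)]         # 1 = faulty

# Higher decision scores mean "more normal", so negate them for the AUC.
print("AUC:", round(roc_auc_score(labels, -scores), 3))
threshold = np.quantile(clf.decision_function(healthy_train), 0.05)
n_flagged = int((clf.decision_function(faulty_test) < threshold).sum())
print(f"faulty motors flagged: {n_flagged}/100")
```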
Under these conditions, the equilibrium is shown to always exist and to often differ from the Nash and Stackelberg equilibria. When the commitment is observed subject to a distortion, the equilibrium does not necessarily exist. Nonetheless, the leader might still obtain some benefit in specific cases, subject to equilibrium refinements.
In this paper, $2 \times 2$ zero-sum games (ZSGs) are studied under the
following assumptions: (1) one of the players (the leader) publicly and
irrevocably commits to choosing its actions by sampling a given probability
measure (strategy); (2) the leader announces its action, which is observed by
its opponent (the follower) through a binary channel; and (3) the follower
chooses its strategy based on the knowledge of the leader's strategy and the
noisy observation of the leader's action. Under these conditions, the
equilibrium is shown to always exist and to often differ from the Nash and
Stackelberg equilibria. Even subject to noise, observing the actions of the
leader is either beneficial or immaterial to the follower for all possible
commitments. When the commitment is observed subject to a distortion, the
equilibrium does not necessarily exist. Nonetheless, the leader might still
obtain some benefit in specific cases, subject to equilibrium refinements.
For instance, $\epsilon$-equilibria might exist in which the leader commits to
suboptimal strategies that allow unequivocally predicting the best response of
its opponent.
Authors: Ke Sun, Samir M. Perlaza, Alain Jean-Marie.
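A small numeric illustration of the model: the leader commits to a mixed strategy, plays an action, and the follower sees it through a binary symmetric channel with crossover probability eps before best-responding to the induced posterior. The payoff matrix is an arbitrary example, not taken from the paper.

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 1.0]])  # leader's payoffs A[leader, follower]

def leader_value(p, eps):
    """Leader's expected payoff when committing to P(action 0) = p and the
    follower best-responds to the posterior induced by a noisy observation."""
    prior = np.array([p, 1.0 - p])
    value = 0.0
    for obs in (0, 1):
        lik = np.where(np.arange(2) == obs, 1.0 - eps, eps)  # channel P(obs|a)
        joint = prior * lik
        if joint.sum() == 0.0:                # observation never occurs
            continue
        post = joint / joint.sum()
        b = int(np.argmin(post @ A))          # follower minimizes leader payoff
        value += joint.sum() * float((post @ A)[b])
    return value

for eps in (0.0, 0.1, 0.5):                   # eps = 0.5: observation is useless
    best_p = max(np.linspace(0, 1, 1001), key=lambda p: leader_value(p, eps))
    print(f"eps={eps}: best commitment p={best_p:.3f}, "
          f"leader value={leader_value(best_p, eps):.3f}")
```

With eps = 0 the follower fully exploits the observed action, while at eps = 0.5 the observation is uninformative and the value reduces to the Nash value of this game, consistent with the abstract's point that observing the leader's actions is beneficial or immaterial to the follower.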
This class comprises, among others, the
Schwarzschild, Kasner, Einstein-Rosen and gravitational pulse wave solutions.
Explicit solutions to the non-linear field equations of gravitational
theories can be obtained, using a Riemann-Hilbert approach, from an appropriate
factorization of certain matrix functions. Here we study a class of solutions,
which we call diagonal canonical solutions, obtained via Wiener-Hopf
factorization of diagonal matrices. This class comprises, among others, the
Schwarzschild, Kasner, Einstein-Rosen and gravitational pulse wave solutions.
We show that new solutions can be obtained by a method of meromorphic
deformation of the diagonal canonical solutions and that we can define Abelian
groups of such solutions indexed by a curve in the complex plane.
Authors: M. Cristina Câmara, Gabriel Lopes Cardoso.
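As a textbook-level reminder of the kind of factorization involved (stated here on the unit circle; the paper's contour and the matrices tied to the field equations are specific to its setting), a diagonal matrix function factorizes entrywise:

```latex
% Entrywise Wiener-Hopf factorization of a diagonal matrix function on the
% unit circle -- a standard illustration, not the paper's specific setting.
\[
  G(\lambda) = \operatorname{diag}\bigl(g_1(\lambda),\dots,g_n(\lambda)\bigr)
  = G_-(\lambda)\,
    \operatorname{diag}\bigl(\lambda^{\kappa_1},\dots,\lambda^{\kappa_n}\bigr)\,
    G_+(\lambda),
\]
where each scalar factorization $g_j = g_j^-\,\lambda^{\kappa_j}\,g_j^+$ has
$g_j^+$ analytic and non-vanishing inside the circle, $g_j^-$ analytic and
non-vanishing outside (including $\infty$), and $\kappa_j \in \mathbb{Z}$
equal to the winding number of $g_j$. For example, with $|a| < 1$,
\[
  g(\lambda) = \frac{\lambda - a}{\lambda} = g_-(\lambda), \qquad
  g_+(\lambda) = 1, \quad \kappa = 0,
\]
is a canonical ($\kappa = 0$) factorization, since $(\lambda - a)/\lambda$ is
analytic and zero-free for $|\lambda| > 1$.
```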
So far, the only algorithm for solving Vertex Triangle 2-Club relies on an ILP formulation [Almeida and Brás, Comput. Oper. Res. 2019].
In the Vertex Triangle 2-Club problem, we are given an undirected graph $G$
and aim to find a maximum-vertex subgraph of $G$ that has diameter at most 2
and in which every vertex is contained in at least $\ell$ triangles in the
subgraph. So far, the only algorithm for solving Vertex Triangle 2-Club relies
on an ILP formulation [Almeida and Brás, Comput. Oper. Res. 2019]. In this
work, we develop a combinatorial branch-and-bound algorithm that, coupled with
a set of data reduction rules, outperforms the existing implementation and is
able to find optimal solutions on sparse real-world graphs with more than
100,000 vertices in a few minutes. We also extend our algorithm to the Edge
Triangle 2-Club problem where the triangle constraint is imposed on all edges
of the subgraph.
Authors: Niels Grüttemeier, Christian Komusiewicz, Philipp Heinrich Keßler, Frank Sommer.
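A feasibility check for the problem is easy to state with networkx; below is a verifier for a candidate vertex set (with $\ell$ written as ell). The paper's contribution is the branch-and-bound search for a maximum such set, not this check.

```python
import networkx as nx

def is_vertex_triangle_2club(G, S, ell):
    """Verify the Vertex Triangle 2-Club constraints for a vertex set S:
    induced diameter <= 2 and every vertex in at least ell induced triangles."""
    H = G.subgraph(S)
    if not nx.is_connected(H) or nx.diameter(H) > 2:
        return False
    tri = nx.triangles(H)                      # triangle count per vertex
    return all(tri[v] >= ell for v in H)

G = nx.complete_graph(4)                       # every vertex in 3 triangles
print(is_vertex_triangle_2club(G, [0, 1, 2, 3], ell=2))   # True
G.remove_edge(0, 3)                            # vertices 0, 3 drop to 1 triangle
print(is_vertex_triangle_2club(G, [0, 1, 2, 3], ell=2))   # False
```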
Maps play a key role in the rapidly developing area of autonomous driving. We believe that high levels of situation
awareness require a 3D representation as well as the inclusion of semantic
information.
Maps play a key role in the rapidly developing area of autonomous driving. We
survey the literature for different map representations and find that while the
world is three-dimensional, it is common to rely on 2D map representations in
order to meet real-time constraints. We believe that high levels of situation
awareness require a 3D representation as well as the inclusion of semantic
information. We demonstrate that our recently presented hierarchical 3D grid
mapping framework UFOMap meets the real-time constraints. Furthermore, we show
how it can be used to efficiently support more complex functions such as
calculating the occluded parts of space and accumulating the output from a
semantic segmentation network.
Authors: Ajinkya Khoche, Maciej K Wozniak, Daniel Duberg, Patric Jensfelt.
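A flat voxel-hash toy illustrating the interface of a 3D semantic map (UFOMap itself uses an efficient hierarchical octree with occupied/free/unknown states; this sketch only shows inserting labelled points and querying a majority label):

```python
from collections import defaultdict

class SemanticVoxelMap:
    """Toy 3D semantic occupancy map: a voxel hash with per-voxel label counts
    (e.g. fed by a semantic segmentation network). Illustration only."""
    def __init__(self, resolution=0.2):
        self.res = resolution
        self.voxels = defaultdict(lambda: defaultdict(int))

    def key(self, x, y, z):
        return (int(x // self.res), int(y // self.res), int(z // self.res))

    def insert(self, x, y, z, label):
        self.voxels[self.key(x, y, z)][label] += 1

    def query(self, x, y, z):
        counts = self.voxels.get(self.key(x, y, z))
        if not counts:
            return "free/unknown"
        return max(counts, key=counts.get)     # majority semantic label

m = SemanticVoxelMap()
for pt, lab in [((1.00, 2.00, 0.10), "road"), ((1.05, 2.02, 0.12), "road"),
                ((1.03, 2.01, 0.11), "car")]:
    m.insert(*pt, lab)
print(m.query(1.02, 2.01, 0.10))   # 'road' (majority of hits in this voxel)
```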
The vertex cover problem is a fundamental and widely studied combinatorial optimization problem. It is known that its standard linear programming relaxation is integral for bipartite graphs and half-integral for general graphs. Equivalently, we suppose that we have access to a subset of vertices $S$ whose removal bipartizes the graph.
The vertex cover problem is a fundamental and widely studied combinatorial
optimization problem. It is known that its standard linear programming
relaxation is integral for bipartite graphs and half-integral for general
graphs. As a consequence, the natural rounding algorithm based on this
relaxation computes an optimal solution for bipartite graphs and a
$2$-approximation for general graphs. This raises the question of whether one
can obtain improved bounds on the approximation ratio, depending on how close
the graph is to being bipartite.
In this paper, we consider a round-and-bipartize algorithm that exploits the
knowledge of an induced bipartite subgraph to attain improved approximation
ratios. Equivalently, we suppose that we have access to a subset of vertices
$S$ whose removal bipartizes the graph.
If $S$ is an independent set, we prove an approximation ratio of $1 +
1/\rho$, where $2\rho -1$ denotes the odd girth of the contracted graph
$\tilde{\mathcal{G}} := \mathcal{G} /S$ and thus satisfies $\rho \geq 2$. We
show that this is tight for any graph and independent set by providing a family
of weight functions for which this bound is attained. In addition, we give
tight upper bounds for the fractional chromatic number and the integrality gap
of such graphs, both of which also depend on the odd girth.
If $S$ is an arbitrary set, we prove a tight approximation ratio of
$\left(1+1/\rho \right) (1 - \alpha) + 2 \alpha$, where $\alpha \in [0,1]$
denotes the total normalized dual sum of the edges lying inside of the set $S$.
As an algorithmic application, we show that for any efficiently $k$-colorable
graph with $k \geq 4$ we can find a bipartizing set satisfying $\alpha \leq 1 -
4/k$. This provides an approximation algorithm recovering the bound of $2 -
2/k$ in the worst case (i.e., when $\rho = 2$), which is best possible for this
setting when using the standard relaxation.
Authors: Danish Kashaev, Guido Schäfer.
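For reference, here is the standard relaxation-plus-rounding baseline that the round-and-bipartize algorithm builds on, with the odd cycle $C_5$ exhibiting the half-integrality of the relaxation:

```python
import numpy as np
from scipy.optimize import linprog

def round_vertex_cover(n, edges, w):
    """Solve the standard vertex cover LP relaxation and round up every vertex
    with x_v >= 1/2 (the classical 2-approximation; the paper sharpens this
    using a bipartizing set). Returns the cover and the LP optimum."""
    A = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A[i, u] = A[i, v] = -1.0        # x_u + x_v >= 1  <=>  -x_u - x_v <= -1
    res = linprog(c=w, A_ub=A, b_ub=-np.ones(len(edges)), bounds=[(0, 1)] * n)
    return [v for v in range(n) if res.x[v] >= 0.5], res.fun

# Odd cycle C5 with unit weights: the LP optimum is the all-1/2 point (cost
# 2.5), so rounding takes all 5 vertices while an optimal cover has 3 -- the
# half-integral worst case for plain rounding.
cover, lp_opt = round_vertex_cover(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)],
                                   np.ones(5))
print("cover:", cover, "LP value:", lp_opt)
```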