Papers made digestible
Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications.
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
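To make the edge-cost idea concrete, here is a minimal sketch of what a curvilinear edge cost combining path length with a lateral-deviation penalty could look like; the cost form and the weight `w_lateral` are illustrative assumptions, not the metric defined in the paper.

```python
# Hypothetical sketch of a curvilinear edge-cost metric for a teach-and-repeat
# planner: states are (s, l) = (distance along the taught path, lateral offset).
# The weighting and the exact cost form are assumptions for illustration, not
# the metric introduced in the paper.
import math

def edge_cost(s0, l0, s1, l1, w_lateral=5.0):
    """Cost of an edge between two curvilinear states.

    Combines approximate arc length with a penalty on lateral deviation from
    the taught path, so the planner prefers staying near known terrain.
    """
    arc = math.hypot(s1 - s0, l1 - l0)            # approximate edge length
    lateral = 0.5 * (abs(l0) + abs(l1)) * arc     # integrated lateral offset
    return arc + w_lateral * lateral

# Example: a detour around an obstacle costs more the farther it strays laterally.
print(edge_cost(0.0, 0.0, 1.0, 0.0))   # on-path edge
print(edge_cost(0.0, 0.5, 1.0, 0.8))   # off-path edge
```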
The statistical and design considerations that pertain to
dose optimization are discussed. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%.
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development, calling for more
attention to benefit-risk considerations.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulation shows that dose optimization phase
2-3 designs are able to control the familywise type I error rates and yield
appropriate statistical power with substantially smaller sample size than the
conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
Authors: Liyun Jiang, Ying Yuan.
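As a rough illustration of why the operating characteristics of such designs are checked by simulation, the following sketch simulates a toy seamless phase 2-3 trial with a binary endpoint, interim dose selection, and a concurrent control; all sample sizes, response rates, and the selection rule are placeholders rather than any of the four designs studied in the paper.

```python
# Minimal Monte Carlo sketch of a seamless phase 2-3 design with a binary
# endpoint: stage 1 randomizes patients across doses, the "optimal" dose is
# carried into stage 2 and compared with a concurrent control. All sample
# sizes, true response rates, and the selection rule are illustrative
# placeholders, not the designs evaluated in the paper.
import numpy as np

rng = np.random.default_rng(0)

def one_trial(p_doses, p_control, n1=30, n2=100):
    # Stage 1: randomize n1 patients per dose, pick the dose with the best response.
    stage1 = [rng.binomial(n1, p) for p in p_doses]
    best = int(np.argmax(stage1))
    # Stage 2: selected dose vs concurrent control (stage-1 data pooled in).
    x_trt = stage1[best] + rng.binomial(n2, p_doses[best])
    n_trt = n1 + n2
    x_ctl = rng.binomial(n_trt, p_control)
    # Normal-approximation test of the difference in response rates.
    p1, p0 = x_trt / n_trt, x_ctl / n_trt
    pooled = (x_trt + x_ctl) / (2 * n_trt)
    se = np.sqrt(2 * pooled * (1 - pooled) / n_trt)
    z = (p1 - p0) / se if se > 0 else 0.0
    return z > 1.96   # roughly one-sided alpha = 0.025

# Empirical familywise type I error under the null (all doses equal to control);
# interim selection can inflate this, which is why simulation is needed.
null_rejections = np.mean([one_trial([0.2, 0.2, 0.2], 0.2) for _ in range(2000)])
print("empirical type I error:", null_rejections)
```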
We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples.
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry does not
need any pre-training, special hardware such as GPUs, and is intended as a
drop-in replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
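GPry itself is installable via pip as noted above; the generic scikit-learn sketch below only illustrates the underlying idea of fitting a GP surrogate to log-posterior evaluations and choosing new evaluation points with an active-learning rule. The toy target, kernel, and acquisition rule are assumptions and do not reproduce GPry's actual API or acquisition function.

```python
# Schematic of GP-surrogate inference: fit a Gaussian process to log-posterior
# evaluations and use an acquisition rule to choose where to evaluate next.
# Generic scikit-learn sketch, not GPry's actual API or acquisition function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def log_posterior(x):                      # toy 2-D banana-shaped target
    return -0.5 * (x[0] ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(8, 2))        # initial design
y = np.array([log_posterior(x) for x in X])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True)

for _ in range(40):                        # active-learning loop
    gp.fit(X, y)
    cand = rng.uniform(-3, 3, size=(512, 2))
    mu, sd = gp.predict(cand, return_std=True)
    acq = mu + 2.0 * sd                    # optimistic rule: high mean, high uncertainty
    x_next = cand[np.argmax(acq)]
    X = np.vstack([X, x_next])
    y = np.append(y, log_posterior(x_next))

print("posterior evaluations used:", len(y))
```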
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting. However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version.
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
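A plain reading of the $b$-scaling kill-and-restart strategy can be simulated in a few lines: in round $k$ each unfinished job receives a budget of $b^k$ time units and is killed (wasting that time) if it does not finish within the budget. The job ordering and bookkeeping below are illustrative choices, not the exact variant or analysis from the paper.

```python
# Toy simulation of a b-scaling kill-and-restart schedule on one machine with
# unknown processing times and no release dates: in round k every unfinished
# job gets a budget of b**k time units; jobs exceeding the budget are killed
# (the spent time is wasted) and retried in the next round.
def b_scaling_total_completion(proc_times, b=2.0):
    remaining = list(enumerate(proc_times))
    t, completions, k = 0.0, {}, 0
    while remaining:
        budget = b ** k
        still_open = []
        for j, p in remaining:
            if p <= budget:        # job finishes within this round's budget
                t += p
                completions[j] = t
            else:                  # job is killed and restarted later from scratch
                t += budget
                still_open.append((j, p))
        remaining, k = still_open, k + 1
    return sum(completions.values())

jobs = [3.0, 1.0, 7.0, 2.0]
print("kill-and-restart total completion time:", b_scaling_total_completion(jobs))
print("clairvoyant optimum (SPT) for comparison:",
      sum((len(jobs) - i) * p for i, p in enumerate(sorted(jobs))))
```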
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and the effect of larger model sizes. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
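The frozen-backbone pattern studied in the paper can be sketched in PyTorch as follows, with a torchvision ResNet-50 standing in for the Swin backbones used in the paper; the linear head and the 10-class task are placeholders.

```python
# Minimal PyTorch sketch of the frozen-backbone pattern: keep all pretrained
# weights fixed and train only a small task head. A torchvision ResNet-50
# (requires a recent torchvision) stands in for the Swin backbones studied in
# the paper; the linear head and the 10-class task are placeholders.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()            # expose 2048-d features
for p in backbone.parameters():
    p.requires_grad = False            # freeze every backbone parameter
backbone.eval()                        # keep BN statistics fixed as well

head = nn.Linear(2048, 10)             # the only trainable module
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

images = torch.randn(4, 3, 224, 224)   # dummy batch
labels = torch.randint(0, 10, (4,))
with torch.no_grad():                  # no gradients through the frozen backbone
    feats = backbone(images)
loss = nn.functional.cross_entropy(head(feats), labels)
loss.backward()
optimizer.step()
print("trainable parameters:", sum(p.numel() for p in head.parameters()))
```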
The Self-Optimization (SO) model is a useful computational model for
investigating self-organization in "soft" Artificial life (ALife) as it has
been shown to be general enough to model various complex adaptive systems. So
far, existing work has been done on relatively small network sizes, precluding
the investigation of novel phenomena that might emerge from the complexity
arising from large numbers of nodes interacting in interconnected networks.
The Self-Optimization (SO) model is a useful computational model for
investigating self-organization in "soft" Artificial life (ALife) as it has
been shown to be general enough to model various complex adaptive systems. So
far, existing work has been done on relatively small network sizes, precluding
the investigation of novel phenomena that might emerge from the complexity
arising from large numbers of nodes interacting in interconnected networks.
This work introduces a novel implementation of the SO model that scales as
$\mathcal{O}\left(N^{2}\right)$ with respect to the number of nodes $N$, and
demonstrates the applicability of the SO model to networks with system sizes
several orders of magnitude larger than previously investigated. Removing
the prohibitive computational cost of the naive $\mathcal{O}\left(N^{3}\right)$
algorithm, our on-the-fly computation paves the way for investigating
substantially larger system sizes, allowing for more variety and complexity in
future studies.
Authors: Natalya Weber, Werner Koch, Tom Froese.
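For context, a minimal NumPy sketch of the basic Self-Optimization model (relax a Hopfield network to an attractor, then reinforce that attractor with a small Hebbian update) is given below. This is the naive formulation that the paper accelerates, not the authors' optimized on-the-fly O(N^2) implementation, and all sizes and rates are illustrative.

```python
# Basic Self-Optimization model in its naive formulation: repeatedly relax a
# Hopfield network from random states and add a small Hebbian reinforcement of
# the attractor that was reached. Illustrative only; not the paper's optimized
# on-the-fly implementation.
import numpy as np

rng = np.random.default_rng(0)
N = 100
W0 = rng.choice([-1.0, 1.0], size=(N, N))           # fixed constraint weights
W0 = np.triu(W0, 1); W0 = W0 + W0.T                  # symmetric, zero diagonal
W_learn = np.zeros((N, N))                           # learned Hebbian weights
alpha = 0.0005                                       # learning rate

def relax(W, steps=10 * N):
    s = rng.choice([-1.0, 1.0], size=N)
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0        # asynchronous update
    return s

def energy(s):
    return -0.5 * s @ W0 @ s                         # measured on original weights

for reset in range(200):
    s = relax(W0 + W_learn)
    W_learn += alpha * (np.outer(s, s) - np.eye(N))  # reinforce visited attractor
    if reset % 50 == 0:
        print(f"reset {reset:3d}: energy of attractor = {energy(s):.1f}")
```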
We find the source to be significantly polarized in the 2--6 keV band. The polarization of the ionized reflection is unconstrained.
We report on the Imaging X-ray Polarimetry Explorer (IXPE) observation of the
closest and X-ray brightest Compton-thick active galactic nucleus (AGN), the
Circinus galaxy. We find the source to be significantly polarized in the 2--6
keV band. From previous studies, the X-ray spectrum is known to be dominated by
reflection components, both neutral (torus) and ionized (ionization cones). Our
analysis indicates that the polarization degree is $28 \pm 7$ per cent (at 68
per cent confidence level) for the neutral reflector, with a polarization angle
of $18{\deg} \pm 5{\deg}$, roughly perpendicular to the radio jet. The
polarization of the ionized reflection is unconstrained. A comparison with
Monte Carlo simulations of the polarization expected from the torus shows that
the neutral reflector is consistent with being an equatorial torus with a
half-opening angle of 45{\deg}-55{\deg}. This is the first X-ray polarization
detection in a Seyfert galaxy, demonstrating the power of X-ray polarimetry in
probing the geometry of the circumnuclear regions of AGNs, and confirming the
basic predictions of standard Unification Models.
Authors: F. Ursini, A. Marinucci, G. Matt, S. Bianchi, F. Marin, H. L. Marshall, R. Middei, J. Poutanen, A. De Rosa, L. Di Gesu, J. A. García, A. Ingram, D. E. Kim, H. Krawczynski, S. Puccetti, P. Soffitta, J. Svoboda, F. Tombesi, M. C. Weisskopf, T. Barnouin, M. Perri, J. Podgorny, A. Ratheesh, A. Zaino, I. Agudo, L. A. Antonelli, M. Bachetti, L. Baldini, W. H. Baumgartner, R. Bellazzini, S. D. Bongiorno, R. Bonino, A. Brez, N. Bucciantini, F. Capitanio, S. Castellano, E. Cavazzuti, S. Ciprini, E. Costa, E. Del Monte, N. Di Lalla, A. Di Marco, I. Donnarumma, V. Doroshenko, M. Dovčiak, S. R. Ehlert, T. Enoto, Y. Evangelista, S. Fabiani, R. Ferrazzoli, S. Gunji, J. Heyl, W. Iwakiri, S. G. Jorstad, V. Karas, T. Kitaguchi, J. J. Kolodziejczak, F. La Monaca, L. Latronico, I. Liodakis, S. Maldera, A. Manfreda, A. P. Marscher, I. Mitsuishi, T. Mizuno, F. Muleri, C. Y. Ng, S. L. O'Dell, N. Omodei, C. Oppedisano, A. Papitto, G. G. Pavlov, A. L. Peirson, M. Pesce-Rollins, P. -O. Petrucci, M. Pilia, A. Possenti, B. D. Ramsey, J. Rankin, R. W. Romani, C. Sgrò, P. Slane, G. Spandre, T. Tamagawa, F. Tavecchio, R. Taverna, Y. Tawara, A. F. Tennant, N. E. Thomas, A. Trois, S. S. Tsygankov, R. Turolla, J. Vink, K. Wu, F. Xie, S. Zane.
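For reference, polarization degree and angle follow from the normalized Stokes parameters via the standard textbook relations; the Q/I and U/I values below are placeholders chosen only to reproduce numbers of the quoted order, not values from the IXPE analysis.

```python
# Standard conversion from Stokes parameters to polarization degree and angle
# (textbook relations, not code from the IXPE analysis). The normalized Stokes
# values are illustrative placeholders.
import numpy as np

q, u = 0.227, 0.164                        # Q/I, U/I (placeholders)
pd = np.hypot(q, u)                        # polarization degree
pa = 0.5 * np.degrees(np.arctan2(u, q))    # polarization angle (convention-dependent)
print(f"PD = {pd:.1%}, PA = {pa:.1f} deg")
```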
We present an in-depth empirical analysis of the trade-off between model
complexity and representation error in modelling vehicle trajectories. This finding allows the formulation of
trajectory tracking and prediction as a Bayesian filtering problem.
We present an in-depth empirical analysis of the trade-off between model
complexity and representation error in modelling vehicle trajectories.
Analyzing several large public datasets, we show that simple linear models do
represent real-world trajectories with high fidelity over relevant time scales
at very moderate model complexity. This finding allows the formulation of
trajectory tracking and prediction as a Bayesian filtering problem. Using an
Empirical Bayes approach, we estimate prior distributions over model parameters
from the data that inform the motion models necessary in the trajectory
tracking problem and that can help regularize prediction models. We argue for
the use of linear models in trajectory prediction tasks as their representation
error is much smaller than the typical epistemic uncertainty in this task.
Authors: Yue Yao, Daniel Goehring, Joerg Reichardt.
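Casting tracking with a linear motion model as Bayesian filtering amounts to a Kalman filter. The constant-velocity sketch below uses placeholder noise levels where the paper instead estimates priors from data via Empirical Bayes.

```python
# Tracking with a linear constant-velocity model as Bayesian (Kalman) filtering.
# Process/measurement noise levels are placeholders; the paper estimates such
# priors from data via Empirical Bayes.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],       # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],        # we observe positions only
              [0, 1, 0, 0]])
Q = 0.05 * np.eye(4)               # process noise (placeholder)
R = 0.5 * np.eye(2)                # measurement noise (placeholder)

x = np.zeros(4)                    # state mean
P = np.eye(4)                      # state covariance

def kalman_step(x, P, z):
    # Predict with the linear motion model.
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    # Update with the position measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(2))
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([0.1, 0.0]), np.array([0.3, 0.1]), np.array([0.52, 0.19])]:
    x, P = kalman_step(x, P, z)
print("filtered state [x, y, vx, vy]:", np.round(x, 3))
```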
Causes and solutions are discussed. In this study, SPIDER was operated in hydrogen and deuterium, in Cs-free conditions.
The Neutral beam Injectors of the ITER experiment will be based on negative
ion sources for the generation of beams composed of 1 MeV H/D particles. The
prototype of these sources is currently under testing in the SPIDER experiment,
part of the Neutral Beam Test Facility of Consorzio RFX, Padua. Among the
targets of the experimentation in SPIDER, it is of foremost importance to
maximize the beam current density extracted from the source acceleration
system. The SPIDER operating conditions can be optimized thanks to a Cavity
Ring-down Spectroscopy diagnostic, which is able to give line-integrated
measurements of negative ion density in proximity to the acceleration system
apertures. Regarding the diagnostic technique, this work presents a phenomenon
of drift in ring-down time measurements, which develops on a time scale of a few
hours. This issue may significantly affect negative ion density measurements
for plasma pulses of 1 h duration, as required by ITER. Causes and solutions
are discussed. Regarding the source performance, this paper presents how
negative ion density is influenced by the RF power used to sustain the plasma,
and by the magnetic filter field present in SPIDER to limit the amount of
co-extracted electrons. In this study, SPIDER was operated in hydrogen and
deuterium, in Cs-free conditions.
Authors: M. Barbisan, R. Agnello, G. Casati, R. Pasqualotto, E. Sartori, G. Serianni.
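As background on the diagnostic, cavity ring-down spectroscopy converts the ring-down times measured with and without the absorber into a line-integrated density through a textbook relation; the snippet below uses that relation with purely illustrative numbers and an approximate H- photodetachment cross-section, none of which are taken from the paper.

```python
# Textbook cavity ring-down relation (not taken from the paper): the
# line-integrated absorber density follows from the ring-down times measured
# with (tau) and without (tau0) the plasma,
#     integral(n dl) = L / (c * sigma) * (1/tau - 1/tau0),
# where L is the cavity length and sigma the photodetachment cross-section.
# All numbers below are illustrative placeholders.
c = 2.998e8            # speed of light [m/s]
L = 1.0                # cavity length [m] (placeholder)
sigma = 3.5e-21        # approx. H- photodetachment cross-section near 1064 nm [m^2]
tau0 = 30e-6           # empty-cavity ring-down time [s] (placeholder)
tau = 28e-6            # ring-down time with plasma [s] (placeholder)

line_integrated_density = L / (c * sigma) * (1.0 / tau - 1.0 / tau0)
print(f"line-integrated negative ion density ~ {line_integrated_density:.2e} m^-2")
```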
By
combining position-based and CMD-based selections, we built an updated
catalogue of H\alpha-excess candidates in the northern Galactic Plane. In addition, we explore the distribution
of our spectroscopically confirmed emitters in the Gaia CMD.
State-of-the-art techniques to identify H\alpha emission line sources in
narrow-band photometric surveys consist of searching for H\alpha excess with
reference to nearby objects in the sky (position-based selection). However,
while this approach usually yields very few spurious detections, it may fail to
select intrinsically faint and/or rare H\alpha-excess sources. In order to
obtain a more complete representation of the heterogeneous emission line
populations, we recently developed a technique to find outliers relative to
nearby objects in the colour-magnitude diagram (CMD-based selection). By
combining position-based and CMD-based selections, we built an updated
catalogue of H\alpha-excess candidates in the northern Galactic Plane. Here we
present spectroscopic follow-up observations and classification of 114 objects
from this catalogue, which enable us to test our novel selection method. Out of
the 70 spectroscopically confirmed H\alpha emitters in our sample, 15 were
identified only by the CMD-based selection, and would have been thus missed by
the classic position-based technique. In addition, we explore the distribution
of our spectroscopically confirmed emitters in the Gaia CMD. This information
can support the classification of emission line sources in large surveys, such
as the upcoming WEAVE and 4MOST, especially if augmented with the introduction
of other colours.
Authors: M. Fratta, S. Scaringi, M. Monguió, A. F. Pala, J. E. Drew, C. Knigge, K. A. Iłkiewicz, P. Gandhi.
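A schematic of the CMD-based selection idea: compare each source's r - Halpha colour with that of its neighbours in the colour-magnitude diagram and flag significant positive outliers. The synthetic data, neighbour count, and 5-sigma threshold below are placeholders, not the authors' pipeline.

```python
# Schematic of a CMD-based excess selection: for each source, compare its
# r - Halpha colour with sources nearby in the (r - i, r) colour-magnitude
# diagram and flag significant positive outliers. Thresholds, neighbour counts
# and the robust statistic are placeholders, not the authors' pipeline.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n = 5000
r_i = rng.uniform(0.0, 2.5, n)                       # r - i colour
r_mag = rng.uniform(13.0, 20.0, n)                   # r magnitude
r_ha = 0.45 * r_i + rng.normal(0.0, 0.05, n)         # main stellar locus
r_ha[:25] += rng.uniform(0.3, 0.8, 25)               # inject a few emitters

tree = cKDTree(np.column_stack([r_i, r_mag / 5.0]))  # crude CMD metric
_, idx = tree.query(np.column_stack([r_i, r_mag / 5.0]), k=50)

excess_sigma = np.empty(n)
for j in range(n):
    neigh = r_ha[idx[j][1:]]                         # exclude the source itself
    med = np.median(neigh)
    mad = 1.4826 * np.median(np.abs(neigh - med)) + 1e-6
    excess_sigma[j] = (r_ha[j] - med) / mad

candidates = np.flatnonzero(excess_sigma > 5.0)
print("CMD-selected Halpha-excess candidates:", len(candidates))
```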
The calculations show that no $\Lambda nn$ bound state exists, but predict a low-lying $\Lambda nn$ resonant state near the threshold with the energy of $E_r= 0.124$ MeV and the width of about $\Gamma=1.161$ MeV.
We perform the ab initio no-core shell model (NCSM) calculation to
investigate the bound-state problem of the three-body $\Lambda nn$ system using
chiral next-to-next-to-leading-order NN and chiral leading-order YN
interactions. The calculations show that no $\Lambda nn$ bound state exists,
but predict a low-lying $\Lambda nn$ resonant state near the threshold with the
energy of $E_r= 0.124$ MeV and the width of about $\Gamma=1.161$ MeV. In
searching for $\Lambda nn$ resonances, we extend the NCSM calculation to the
continuum states by employing the J-matrix formalism of scattering theory
with the hyperspherical oscillator basis.
Authors: Thiri Yadanar Htun, Yupeng Yan.
Legal practitioners often face a vast amount of documents. Lawyers, for
instance, search for appropriate precedents favorable to their clients, while
the number of legal precedents is ever-growing. This also makes their statistical analysis challenging.
Legal practitioners often face a vast amount of documents. Lawyers, for
instance, search for appropriate precedents favorable to their clients, while
the number of legal precedents is ever-growing. Although legal search engines
can assist in finding individual target documents and narrowing down the number of
candidates, the retrieved information is often presented as unstructured text, and
users have to examine each document thoroughly, which can lead to information
overload. This also makes their statistical analysis challenging. Here, we
present an end-to-end information extraction (IE) system for legal documents.
By formulating IE as a generation task, our system can be easily applied to
various tasks without domain-specific engineering effort. The experimental
results of four IE tasks on Korean precedents show that our IE system can
achieve competent scores (-2.3 on average) compared to the rule-based baseline
with as few as 50 training examples per task, and a higher score (+5.4 on average)
with 200 examples. Finally, our statistical analysis on two case
categories--drunk driving and fraud--with 35k precedents reveals that the resulting
structured information from our IE system faithfully reflects the macroscopic
features of the Korean legal system.
Authors: Wonseok Hwang, Saehee Eom, Hanuhl Lee, Hai Jin Park, Minjoon Seo.
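The "IE as generation" formulation can be sketched with any off-the-shelf seq2seq model: the structured fields are linearized into a target string that the model learns to generate from the document text. The base checkpoint (t5-small), the field schema, and the linearization format below are placeholders for illustration and differ from the paper's Korean-language setup.

```python
# "IE as generation" pattern: structured fields are linearized into a target
# string and a seq2seq model is fine-tuned to generate it from the document
# text. Checkpoint, schema, and linearization format are placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

document = ("The defendant drove with a blood alcohol level of 0.12% "
            "and was fined 5,000,000 KRW.")
# Linearized target the model would be fine-tuned to produce:
target = "offense: drunk driving | blood_alcohol: 0.12% | fine: 5000000 KRW"

# One training step (real use requires task-specific fine-tuning data):
batch = tokenizer(document, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss
loss.backward()

# Inference: generate the linearized record, then parse it back into fields.
pred_ids = model.generate(**batch, max_new_tokens=64)
record = tokenizer.decode(pred_ids[0], skip_special_tokens=True)
fields = dict(kv.split(": ", 1) for kv in record.split(" | ") if ": " in kv)
print(fields)
```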
We discuss polarization of gravitational radiation within the standard framework of linearized general relativity. Observational possibilities regarding polarization-dependent effects in connection with future gravitational wave detectors are briefly explored.
We discuss polarization of gravitational radiation within the standard
framework of linearized general relativity. The recent experimental discovery
of gravitational waves provides the impetus to revisit the implications of the
spin-rotation-gravity coupling for polarized gravitational radiation;
therefore, we consider the coupling of helicity of gravitational waves to the
rotation of an observer or the gravitomagnetic field of a rotating astronomical
source. Observational possibilities regarding polarization-dependent effects in
connection with future gravitational wave detectors are briefly explored.
Authors: Bahram Mashhoon, Sohrab Rahvar.
We then compute the group of connected components of
the fiber at $p$ of the N\'eron model of their Jacobians.
We give regular models for modular curves associated with (normalizer of)
split and non-split Cartan subgroups of ${\mathrm{GL}}_2 ({\mathbb F}_p )$ (for
$p$ any prime, $p\ge 5$). We then compute the group of connected components of
the fiber at $p$ of the N\'eron model of their Jacobians.
Authors: Bas Edixhoven, Pierre Parent.
We endow each of these sets with a geometric structure, inducing the notions of closeness and symmetries, by turning them into a vertex set of an appropriate metagraph. We go further to consider sets of equivalence classes of unweighted graphs and define the appropriate versions of priors thereon. We prove a hardness result, showing that in this case, exact kernel computation cannot be performed efficiently. However, we propose a simple Monte Carlo approximation for handling moderately sized cases.
We propose a principled way to define Gaussian process priors on various sets
of unweighted graphs: directed or undirected, with or without loops. We endow
each of these sets with a geometric structure, inducing the notions of
closeness and symmetries, by turning them into a vertex set of an appropriate
metagraph. Building on this, we describe the class of priors that respect this
structure and are analogous to the Euclidean isotropic processes, like squared
exponential or Mat\'ern. We propose an efficient computational technique for
the ostensibly intractable problem of evaluating these priors' kernels, making
such Gaussian processes usable within the usual toolboxes and downstream
applications. We go further to consider sets of equivalence classes of
unweighted graphs and define the appropriate versions of priors thereon. We
prove a hardness result, showing that in this case, exact kernel computation
cannot be performed efficiently. However, we propose a simple Monte Carlo
approximation for handling moderately sized cases. Inspired by applications in
chemistry, we illustrate the proposed techniques on a real molecular property
prediction task in the small data regime.
Authors: Viacheslav Borovitskiy, Mohammad Reza Karimi, Vignesh Ram Somnath, Andreas Krause.
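As a minimal illustration of the ingredients (not the paper's actual construction): for directed graphs with loops, single-edge flips turn the set of adjacency matrices into a Hamming cube, on which one simple isotropic kernel is a function of Hamming distance, and a kernel on isomorphism classes can be approximated by Monte Carlo averaging over sampled node permutations.

```python
# Minimal illustration (not the paper's construction): an isotropic kernel on
# the Hamming cube of adjacency matrices of directed graphs with loops, plus a
# naive Monte Carlo approximation of the corresponding kernel on isomorphism
# classes obtained by averaging over sampled node permutations.
import numpy as np

rng = np.random.default_rng(0)

def hamming_kernel(A, B, beta=0.5):
    """Isotropic kernel on the Hamming cube of adjacency matrices."""
    return np.exp(-beta * np.sum(A != B))

def mc_isomorphism_kernel(A, B, beta=0.5, n_samples=200):
    """Monte Carlo average over permutations of B (avoids the exponential sum)."""
    n = B.shape[0]
    vals = []
    for _ in range(n_samples):
        perm = rng.permutation(n)
        vals.append(hamming_kernel(A, B[np.ix_(perm, perm)], beta))
    return float(np.mean(vals))

A = rng.integers(0, 2, size=(5, 5))
B = A[np.ix_([4, 3, 2, 1, 0], [4, 3, 2, 1, 0])]    # a relabelled copy of A
print("labelled-graph kernel:   ", hamming_kernel(A, B))
print("isomorphism-class kernel:", mc_isomorphism_kernel(A, B))
```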