Papers made digestible
Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications.
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
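To make the edge-cost idea concrete, here is a minimal Python sketch of a cost function over a curvilinear (arc length, lateral offset) planning space that trades off edge length against lateral deviation from the taught path. The functional form and the lateral_weight parameter are illustrative assumptions, not the metric defined in the paper:

    import math

    def edge_cost(p, q, lateral_weight=2.0):
        # p and q are (s, d) nodes in a curvilinear planning space: s is the arc
        # length along the taught path and d is the signed lateral offset from it.
        # The cost combines the edge length with a penalty on lateral deviation,
        # so a sample-based planner prefers detours that stay close to the taught
        # path while still avoiding obstacles.  (Illustrative sketch only.)
        ds, dd = q[0] - p[0], q[1] - p[1]
        length = math.hypot(ds, dd)
        mean_offset = 0.5 * (abs(p[1]) + abs(q[1]))   # average lateral deviation along the edge
        return length + lateral_weight * mean_offset * length

    # Example: an edge that stays on the taught path is cheaper than one that drifts sideways.
    print(edge_cost((0.0, 0.0), (1.0, 0.0)))   # 1.0
    print(edge_cost((0.0, 0.0), (1.0, 0.5)))   # larger, due to the lateral penalty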
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
The statistical and design considerations that pertain to
dose optimization are discussed. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%.
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development and to call for more
attention to benefit-risk considerations.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulations show that dose-optimization phase
2-3 designs are able to control the familywise type I error rates and yield
appropriate statistical power with a substantially smaller sample size than the
conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
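As a rough illustration of how such operating characteristics are evaluated, the following Python sketch simulates a seamless phase 2-3 trial with a binary endpoint: stage 1 selects the better of two doses, and stage 2 compares the selected dose against a concurrent control using only stage-2 data. The sample sizes, response rates, and test are assumptions chosen for illustration, not the designs studied in the paper:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def one_trial(p_doses=(0.2, 0.2), p_control=0.2, n1=30, n2=100, alpha=0.025):
        # Stage 1 (phase 2): randomize n1 patients per dose and pick the dose
        # with the higher observed response rate.
        stage1 = [rng.binomial(n1, p) for p in p_doses]
        best = int(np.argmax(stage1))
        # Stage 2 (phase 3): randomize n2 patients each to the selected dose and
        # a concurrent control; test superiority with a one-sided z-test on
        # stage-2 data only, which keeps the type I error at alpha despite the
        # interim selection.
        x_t = rng.binomial(n2, p_doses[best])
        x_c = rng.binomial(n2, p_control)
        p_pool = (x_t + x_c) / (2 * n2)
        se = np.sqrt(p_pool * (1 - p_pool) * 2 / n2)
        z = (x_t - x_c) / n2 / se if se > 0 else 0.0
        return z > stats.norm.ppf(1 - alpha)

    # Under the global null (all response rates equal), the rejection rate
    # approximates the type I error of the design.
    print("empirical type I error:", np.mean([one_trial() for _ in range(20000)]))

The designs described in the paper differ in whether a concurrent control is included and in how stage-1 and stage-2 data and endpoints are combined, which changes the multiplicity adjustments needed; the sketch only shows the simulation-based way such properties are checked.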
Authors: Liyun Jiang, Ying Yuan.
We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples.
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry does not
need any pre-training or special hardware such as GPUs, and is intended as a
drop-in replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
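The core loop behind this kind of approach can be sketched in a few lines. The example below is not the GPry API; it is a generic Gaussian-process-surrogate active-learning loop on a toy 2-D log-posterior, using scikit-learn and a simple upper-confidence-bound acquisition rule as stand-ins for GPry's components:

    import numpy as np
    from scipy import optimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def log_posterior(x):
        # Toy target: an unnormalized 2-D Gaussian log-posterior.
        return -0.5 * np.sum((x - 1.0) ** 2 / np.array([0.5, 2.0]) ** 2)

    bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
    rng = np.random.default_rng(0)

    # Initial design: a handful of random evaluations of the true log-posterior.
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(8, 2))
    y = np.array([log_posterior(x) for x in X])

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                                  normalize_y=True)

    for _ in range(20):
        gp.fit(X, y)

        # Acquisition: favour points with high predicted log-posterior and high
        # predictive uncertainty (a simple upper-confidence-bound rule).
        def neg_acquisition(x):
            mu, sigma = gp.predict(x.reshape(1, -1), return_std=True)
            return -(mu[0] + 2.0 * sigma[0])

        starts = rng.uniform(bounds[:, 0], bounds[:, 1], size=(8, 2))
        cands = [optimize.minimize(neg_acquisition, s, bounds=bounds) for s in starts]
        x_new = min(cands, key=lambda r: r.fun).x

        # Evaluate the (expensive) true posterior only at the chosen point.
        X = np.vstack([X, x_new])
        y = np.append(y, log_posterior(x_new))

    print("Surrogate built from", len(y), "posterior evaluations")

The surrogate can then be sampled with a standard MCMC sampler in place of the true likelihood; GPry automates this loop, adds the SVM classifier for non-finite regions, and parallelizes the acquisition step.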
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting. However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version.
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
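A toy simulation helps to see how a kill-and-restart strategy behaves. The sketch below runs every unfinished job non-preemptively for at most b**i time units in round i, killing it (and losing the work) if it does not finish; it illustrates the general unweighted b-scaling idea, not the exact strategy or the weighted analysis from the paper:

    def b_scaling_total_completion(processing_times, b=2.0):
        # Round i gives every unfinished job a non-preemptive budget of b**i.
        # Jobs that do not finish are killed (their work is lost) and retried
        # in a later round with a larger budget.
        remaining = dict(enumerate(processing_times))
        t, total, i = 0.0, 0.0, 0
        while remaining:
            budget = b ** i
            for job, p in list(remaining.items()):
                if p <= budget:          # finishes within this round's budget
                    t += p
                    total += t
                    del remaining[job]
                else:                    # killed after using up the whole budget
                    t += budget
            i += 1
        return total

    jobs = [3.0, 1.0, 7.0, 2.0]
    print("b-scaling total completion time:", b_scaling_total_completion(jobs))

    # Clairvoyant shortest-processing-time-first, as an offline reference point.
    t, opt = 0.0, 0.0
    for p in sorted(jobs):
        t += p
        opt += t
    print("SPT optimum:", opt)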
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and the effect of larger model sizes. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
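A minimal recipe for the frozen setting, sketched in PyTorch with a torchvision ResNet-50 standing in for the pretrained backbone; the paper studies Swin transformers and task-specific heads for detection, segmentation, and video recognition, so the linear head and hyperparameters below are illustrative assumptions only:

    import torch
    import torch.nn as nn
    import torchvision

    # Load a pretrained backbone and freeze all of its parameters.
    backbone = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.DEFAULT)
    backbone.fc = nn.Identity()          # drop the classification head, keep 2048-d features
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.eval()                      # keep batch-norm statistics frozen as well

    # Only this lightweight task head is trained for the downstream task.
    head = nn.Linear(2048, 10)           # e.g. a 10-class downstream problem
    optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def training_step(images, labels):
        with torch.no_grad():            # no gradients through the frozen backbone
            features = backbone(images)
        loss = criterion(head(features), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Example with random data, just to show the shapes involved.
    x = torch.randn(4, 3, 224, 224)
    y = torch.randint(0, 10, (4,))
    print(training_step(x, y))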
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
Impact craters are formed due to continuous impacts on the surface of
planetary bodies. Extracting precise shapes of the craters can be
helpful for many advanced analyses, such as studies of crater formation.
Impact craters are formed due to continuous impacts on the surface of
planetary bodies. Most recent deep learning-based crater detection methods
treat craters as circular shapes, and less attention is paid to extracting the
exact shapes of craters. Extracting precise shapes of the craters can be
helpful for many advanced analyses, such as studies of crater formation. This paper
proposes a combination of unsupervised non-deep learning and semi-supervised
deep learning approach to accurately extract shapes of the craters and detect
missing craters from the existing catalog. In unsupervised non-deep learning,
we have proposed an adaptive rim extraction algorithm to extract craters'
shapes. In this adaptive rim extraction algorithm, we utilized the elevation
profiles of DEMs and applied morphological operations on DEM-derived slopes to
extract craters' shapes. The extracted shapes of the craters are used in
semi-supervised deep learning to get the locations, sizes, and refined shapes.
Further, the extracted shapes of the craters are utilized to improve the
estimate of the craters' diameter, depth, and other morphological factors. The
craters' shape, estimated diameter, and depth with other morphological factors
will be publicly available.
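The non-deep-learning part of such a pipeline can be approximated with standard tools. The sketch below computes slopes from a DEM and cleans a thresholded slope mask with morphological opening and closing; the fixed threshold and structuring element are illustrative placeholders for the adaptive procedure the paper actually proposes:

    import numpy as np
    from scipy import ndimage

    def crater_rim_mask(dem, pixel_size=100.0, slope_threshold_deg=10.0):
        # Slope magnitude from elevation gradients (dem in metres, pixel_size
        # in metres per pixel), thresholded and cleaned with morphology.
        gy, gx = np.gradient(dem, pixel_size)
        slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
        mask = slope_deg > slope_threshold_deg
        structure = np.ones((3, 3), dtype=bool)
        mask = ndimage.binary_opening(mask, structure=structure)   # remove speckle
        mask = ndimage.binary_closing(mask, structure=structure)   # close small gaps in the rim
        return mask

    # Synthetic bowl-shaped crater for a quick check.
    yy, xx = np.mgrid[-50:50, -50:50]
    dem = -500.0 * np.exp(-(xx**2 + yy**2) / 400.0)
    print(crater_rim_mask(dem).sum(), "rim pixels detected")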
Authors: Atal Tewari, Vikrant Jain, Nitin Khanna.
We also provide approximation results for both deterministic and random discretizations. We illustrate the analytic findings through an extensive set of numerical simulations.
We focus on an epidemiological model (the archetypical SIR system) defined on
graphs and study the asymptotic behavior of the solutions as the number of
vertices in the graph diverges. By relying on the theory of so-called graphons
we provide a characterization of the limit and establish convergence results.
We also provide approximation results for both deterministic and random
discretizations. We illustrate the analytic findings through an extensive set
of numerical simulations.
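For readers less familiar with the setup, the sketch below integrates a simple SIR system on a finite graph, where the infection pressure on a vertex is the adjacency-weighted sum of its neighbours' infected fractions, normalized by the graph size; graphon theory characterizes the limit of such systems as the number of vertices grows. The exact model and normalization in the paper may differ:

    import numpy as np
    from scipy.integrate import solve_ivp

    def graph_sir(A, beta=0.3, gamma=0.1, t_max=50.0, i0=0.01):
        # Each vertex carries susceptible, infected and recovered fractions.
        n = A.shape[0]
        x0 = np.concatenate([np.full(n, 1.0 - i0), np.full(n, i0), np.zeros(n)])

        def rhs(_, x):
            s, i = x[:n], x[n:2*n]
            force = beta * (A @ i) / n        # infection pressure from neighbours
            ds = -s * force
            di = s * force - gamma * i
            dr = gamma * i
            return np.concatenate([ds, di, dr])

        return solve_ivp(rhs, (0.0, t_max), x0, dense_output=True)

    # Erdos-Renyi-like random graph as a stand-in for a graphon sample.
    rng = np.random.default_rng(1)
    n = 200
    A = (rng.random((n, n)) < 0.1).astype(float)
    A = np.triu(A, 1); A = A + A.T
    sol = graph_sir(A)
    print("final mean recovered fraction:", sol.y[2*n:, -1].mean())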
Authors: Blanca Ayuso de Dios, Simone Dovetta, Laura V. Spinolo.
Stars are born in dense molecular filaments irrespective of their mass. Compression of the ISM by shocks causes filament formation in molecular clouds. Filaments formed behind the shock expand
after the duration time for short shock duration models, whereas long duration
models lead to star formation by forming massive supercritical filaments.
Stars are born in dense molecular filaments irrespective of their mass.
Compression of the ISM by shocks causes filament formation in molecular clouds.
Observations show that massive star cluster formation occurs where the peak
of the gas column density in a cloud exceeds 10^23 cm^-2. In this study, we
investigate the effect of the shock-compressed layer duration on filament/star
formation and how the initial conditions of massive star formation are realized
by performing three-dimensional (3D) isothermal magnetohydrodynamics (MHD)
simulations with the gas inflow duration from the boundaries (i.e., the shock
wave duration) as a controlling parameter. Filaments formed behind the shock expand
after the duration time for short shock duration models, whereas long duration
models lead to star formation by forming massive supercritical filaments.
Moreover, when the shock duration is longer than two postshock free-fall times,
the peak column density of the compressed layer exceeds 10^23 cm^-2, and the
gravitational collapse of the layer causes the number of OB stars expected to
form in the shock-compressed layer to reach the order of ten (i.e., massive
cluster formation).
Authors: Daisei Abe, Tsuyoshi Inoue, Rei Enokiya, Yasuo Fukui.
Editing and retouching facial attributes is a complex task that usually requires human artists to obtain photo-realistic results. Its applications are numerous and can be found in several contexts such as cosmetics or digital media retouching, to name a few. Recently, advancements in conditional generative modeling have shown astonishing results at modifying facial attributes in a realistic manner. Then, an inpainting module is used to remove the detected wrinkles, filling them in with a texture that is statistically consistent with the surrounding skin.
Editing and retouching facial attributes is a complex task that usually
requires human artists to obtain photo-realistic results. Its applications are
numerous and can be found in several contexts such as cosmetics or digital
media retouching, to name a few. Recently, advancements in conditional
generative modeling have shown astonishing results at modifying facial
attributes in a realistic manner. However, current methods are still prone to
artifacts, and focus on modifying global attributes like age and gender, or
local mid-sized attributes like glasses or moustaches. In this work, we revisit
a two-stage approach for retouching facial wrinkles and obtain results with
unprecedented realism. First, a state-of-the-art wrinkle segmentation network
is used to detect the wrinkles within the facial region. Then, an inpainting
module is used to remove the detected wrinkles, filling them in with a texture
that is statistically consistent with the surrounding skin. To achieve this, we
introduce a novel loss term that reuses the wrinkle segmentation network to
penalize those regions that still contain wrinkles after the inpainting. We
evaluate our method qualitatively and quantitatively, showing state-of-the-art
results for the task of wrinkle removal. Moreover, we introduce the first
high-resolution dataset, named FFHQ-Wrinkles, to evaluate wrinkle detection
methods.
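The novel loss term can be pictured as follows: the wrinkle-segmentation network is frozen and re-applied to the inpainted image, and any wrinkle probability it still predicts inside the inpainted region is penalized. The PyTorch sketch below uses assumed tensor shapes and naming and is not the paper's exact formulation:

    import torch
    import torch.nn as nn

    class WrinklePenalty(nn.Module):
        # Reuses a frozen wrinkle-segmentation network as a critic: gradients
        # flow through the inpainted image, pushing the inpainting module
        # toward wrinkle-free textures inside the masked region.
        def __init__(self, seg_net):
            super().__init__()
            self.seg_net = seg_net
            for p in self.seg_net.parameters():
                p.requires_grad = False          # frozen: used only as a critic

        def forward(self, inpainted, wrinkle_mask):
            wrinkle_prob = torch.sigmoid(self.seg_net(inpainted))   # (B,1,H,W) logits -> [0,1]
            masked = wrinkle_prob * wrinkle_mask
            return masked.sum() / wrinkle_mask.sum().clamp(min=1.0)

    # Tiny stand-in segmenter, just to show the call pattern.
    toy_seg = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
    loss_fn = WrinklePenalty(toy_seg)
    img = torch.rand(2, 3, 64, 64, requires_grad=True)
    mask = (torch.rand(2, 1, 64, 64) > 0.9).float()
    print(loss_fn(img, mask).item())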
Authors: Marcelo Sanchez, Gil Triginer, Coloma Ballester, Lara Raad, Eduard Ramon.
The steady state is not necessarily unique. Using a coupled setup of MITgcm, we test
two techniques with complementary advantages. The first is based on the
introduction of random fluctuations in the forcing and allows a wide region of
phase space to be explored. The second reconstructs the stable branches and is
more precise in locating the tipping points.
The climate is a complex non-equilibrium dynamical system that relaxes toward
a steady state under the continuous input of solar radiation and dissipative
mechanisms. The steady state is not necessarily unique. A useful tool to
describe the possible steady states under different forcing is the bifurcation
diagram, which reveals the regions of multi-stability, the position of tipping
points, and the range of stability of each steady state. However, its
construction is highly time consuming in climate models with a dynamical deep
ocean, interactive ice sheets, or a carbon cycle, where the relaxation time
becomes longer than a thousand years. Using a coupled setup of MITgcm, we test
two techniques with complementary advantages. The first is based on the
introduction of random fluctuations in the forcing and allows a wide region of
phase space to be explored. The second reconstructs the stable branches and is
more precise in finding the position of tipping points.
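The first technique can be illustrated on a toy bistable system: adding random fluctuations to the forcing lets a single long run visit several coexisting steady states instead of relaxing into just one. The double-well model, noise level, and integration scheme below are assumptions for illustration; the paper applies the idea to a coupled MITgcm configuration:

    import numpy as np

    def explore_steady_states(forcing=0.0, noise=0.5, x0=-1.0, dt=0.01, steps=200_000, seed=0):
        # Toy bistable "climate": dx/dt = x - x**3 + forcing has two stable
        # states near x = -1 and x = +1 for small forcing.  Random fluctuations
        # added to the forcing (Euler-Maruyama integration) let the trajectory
        # jump between basins, revealing the coexisting steady states.
        rng = np.random.default_rng(seed)
        x = x0
        traj = np.empty(steps)
        for k in range(steps):
            x += dt * (x - x**3 + forcing) + noise * np.sqrt(dt) * rng.standard_normal()
            traj[k] = x
        return traj

    traj = explore_steady_states()
    print("fraction of time near x = +1:", np.mean(traj > 0))
    print("fraction of time near x = -1:", np.mean(traj < 0))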
Authors: Maura Brunetti, Charline Ragon.
An experimental approach based on an MPI (Message Passing Interface) implementation is presented, with the goal to characterize the relevant evaluation metrics based on statistical processing of the results. Finally, use cases where distributed coordinator election algorithms are useful are presented.
In this paper, we detail how two types of distributed coordinator election
algorithms can be compared in terms of performance based on an evaluation on
the High Performance Computing (HPC) infrastructure. An experimental approach
based on an MPI (Message Passing Interface) implementation is presented, with
the goal to characterize the relevant evaluation metrics based on statistical
processing of the results. The presented approach can be used to teach master's
students in a course on distributed software the basics of algorithms for
coordinator election, and how to conduct an experimental performance evaluation
study. Finally, use cases where distributed coordinator election algorithms are
useful are presented.
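As a flavour of the experimental setup, the mpi4py sketch below elects the highest-ranked process as coordinator with a single allreduce and gathers per-process timings at the winner for statistical processing; the trivial election rule and the single timed round are illustrative stand-ins for the algorithms and metrics evaluated in the paper (run with something like mpirun -n 8 python election.py):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    comm.Barrier()                                    # start the round together
    t0 = MPI.Wtime()
    coordinator = comm.allreduce(rank, op=MPI.MAX)    # everyone learns the highest rank
    elapsed = MPI.Wtime() - t0

    # Collect per-process timings at the elected coordinator for later
    # statistical processing (means, percentiles, ...).
    timings = comm.gather(elapsed, root=coordinator)
    if rank == coordinator:
        mean_t = sum(timings) / len(timings)
        print(f"coordinator = {coordinator}, mean election time = {mean_t:.6f} s")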
Authors: Filip De Turck.
In compact stars, that is, white dwarfs and neutron stars, solid components are also present. Neutron
stars have a solid crust which is the strongest material known in nature. The second half is on the simulation of crustal oscillations in the
fundamental toroidal mode. Here, we dedicate a large fraction of the paper to
approaches which can suppress numerical noise in the solid. If not minimized,
the latter can dominate the crustal motion in the simulations.
Smoothed Particle Hydrodynamics (SPH) is a frequently applied tool in
computational astrophysics to solve the fluid dynamics equations governing the
systems under study. For some problems, for example when involving asteroids
and asteroid impacts, the additional inclusion of material strength is
necessary in order to accurately describe the dynamics. In compact stars, that
is, white dwarfs and neutron stars, solid components are also present. Neutron
stars have a solid crust which is the strongest material known in nature.
However, their dynamical evolution, when modeled via SPH or other computational
fluid dynamics codes, is usually described as a purely fluid dynamics problem.
Here, we present the first 3D simulations of neutron-star crustal toroidal
oscillations including material strength with the Los Alamos National
Laboratory SPH code FleCSPH. In the first half of the paper, we present the
numerical implementation of solid material modeling together with standard
tests. The second half is on the simulation of crustal oscillations in the
fundamental toroidal mode. Here, we dedicate a large fraction of the paper to
approaches which can suppress numerical noise in the solid. If not minimized,
the latter can dominate the crustal motion in the simulations.
Authors: Irina Sagert, Oleg Korobkin, Ingo Tews, Bing-Jyun Tsao, Hyun Lim, Michael J. Falato, Julien Loiseau.
We investigate pattern transformations of periodic hydrogel systems that are triggered by swelling-induced structural instabilities. The types of microstructures considered in the present work include single-phase and two-phase voided hydrogel structures as well as reinforced hydrogel thin films.
We investigate pattern transformations of periodic hydrogel systems that are
triggered by swelling-induced structural instabilities. The types of
microstructures considered in the present work include single-phase and
two-phase voided hydrogel structures as well as reinforced hydrogel thin films.
While the observed transformations of the single-phase structures show good
agreement with experimental findings, the two-phase materials provide novel
patterns associated with wrinkling of internal surfaces. Furthermore, an
extensive parametric study on the reinforced hydrogel thin films reveals new
opportunities for the design of complex out-of-plane surface modes caused by
swelling-induced instabilities. Next to the mentioned buckling-type
instabilities, we encountered the development of micro-creases at the internal
surfaces of periodic media before the loss of strong ellipticity of effective
moduli.
Authors: Elten Polukhov, Laura Pytel, Marc-Andre Keip.
Quantum measurement is important to quantum computing as it extracts the
outcome of the circuit at the end of the computation. Otherwise, significant
errors would be incurred. But that is no longer the case. We evaluated our method
on a representative set of essential applications.
Quantum measurement is important to quantum computing as it extracts the
outcome of the circuit at the end of the computation. Previously, all
measurements had to be done at the end of the circuit; otherwise, significant
errors would be incurred. That is no longer the case: recently, IBM started
supporting dynamic circuits through hardware (instead of software by
simulator). With mid-circuit hardware measurement, we can improve circuit
efficacy and fidelity from three aspects: (a) reduced qubit usage, (b) reduced
swap insertion, and (c) improved fidelity. We demonstrate this using the
real-world Bernstein-Vazirani application on real hardware and show that circuit
resource usage can be improved by 60\%, and circuit fidelity can be improved by 15\%. We
design a compiler-assisted tool that can find and exploit the tradeoff between
qubit reuse, fidelity, gate count, and circuit duration. We also developed a
method for identifying whether qubit reuse will be beneficial for a given
application. We evaluated our method on a representative set of essential
applications. We can reduce resource usage by up to 80\% and improve circuit
fidelity by up to 20\%.
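To illustrate the hardware feature that makes this possible, the Qiskit sketch below measures a qubit mid-circuit, resets it, and reuses it for another logical qubit, so a routine that would nominally need three qubits fits on two physical qubits. The specific circuit is a toy example, not the compiler pass described in the paper:

    from qiskit import QuantumCircuit

    # Three logical qubits on two physical qubits: logical qubit A is measured
    # mid-circuit and its physical qubit is reset and reused for logical qubit C.
    qc = QuantumCircuit(2, 3)

    qc.h(0)               # logical qubit A on physical qubit 0
    qc.cx(0, 1)           # entangle with logical qubit B on physical qubit 1
    qc.measure(0, 0)      # mid-circuit measurement of A
    qc.reset(0)           # physical qubit 0 is now free for reuse

    qc.h(0)               # logical qubit C reuses physical qubit 0
    qc.cx(0, 1)
    qc.measure(0, 1)      # measure C
    qc.measure(1, 2)      # measure B at the end, as usual

    print(qc.draw())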
Authors: Fei Hua, Yuwei Jin, Yanhao Chen, John Lapeyre, Ali Javadi-Abhari, Eddy Z. Zhang.
Software-defined Networking and Network Function Virtualization started separately and later found their convergence in the 5G network architecture. This novel work is a helpful means for the design and standardization process of the future B5G and 6G network architecture.
The advent of 5G and the design of its architecture has become possible
because of the previous individual scientific works and standardization efforts
on cloud computing and network softwarization. Software-defined Networking and
Network Function Virtualization started separately and later found their
convergence in the 5G network architecture. Then, the ongoing design of the future beyond 5G
(B5G) and 6G network architecture cannot overlook the pivotal inputs of
different independent standardization efforts about autonomic networking,
service-based communication systems, and multi-access edge computing. This
article provides the design and the characteristics of an agent-based,
softwarized, and intelligent architecture, which coherently condenses and
merges the independently proposed architectural works by different
standardization working groups and bodies. This novel work is a helpful means
for the design and standardization process of the future B5G and 6G network
architecture.
Authors: Sisay Tadesse Arzo, Domenico Scotece, Riccardo Bassoli, Fabrizio Granelli, Luca Foschini, Frank H. P. Fitzek.