Papers made digestible
Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications.
Visual Teach and Repeat 3 (VT&R3), a generalization of stereo VT&R, achieves
long-term autonomous path-following using topometric mapping and localization
from a single rich sensor stream. In this paper, we improve the capabilities of
a LiDAR implementation of VT&R3 to reliably detect and avoid obstacles in
changing environments. Our architecture simplifies the obstacle-perception
problem to that of place-dependent change detection. We then extend the
behaviour of generic sample-based motion planners to better suit the
teach-and-repeat problem structure by introducing a new edge-cost metric paired
with a curvilinear planning space. The resulting planner generates naturally
smooth paths that avoid local obstacles while minimizing lateral path deviation
to best exploit prior terrain knowledge. While we use the method with VT&R, it
can be generalized to suit arbitrary path-following applications. Experimental
results from online run-time analysis, unit testing, and qualitative
experiments on a differential drive robot show the promise of the technique for
reliable long-term autonomous operation in complex unstructured environments.
Authors: Jordy Sehn, Yuchen Wu, Timothy D. Barfoot.
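To make the planner's cost design concrete, here is a minimal hypothetical sketch (not the authors' implementation) of an edge-cost metric in a curvilinear planning space: each state is an arc length s along the taught path plus a signed lateral offset d, and the cost penalizes lateral deviation so that the planner hugs the taught path unless an obstacle forces it aside. The weight value is made up.

```python
import math

def edge_cost(s1, d1, s2, d2, lateral_weight=5.0):
    """Illustrative edge cost in a curvilinear (path-aligned) frame.

    s: arc length along the taught path [m]
    d: signed lateral offset from the taught path [m]
    The cost combines the edge length with a penalty on lateral
    deviation, so edges that stray from the taught path cost more.
    """
    length = math.hypot(s2 - s1, d2 - d1)          # approximate edge length
    mean_offset = 0.5 * (abs(d1) + abs(d2))        # average lateral deviation
    return length * (1.0 + lateral_weight * mean_offset)

# An edge that stays on the taught path is cheaper than one that detours
# laterally by 1 m, even though both advance 2 m along the path.
print(edge_cost(0.0, 0.0, 2.0, 0.0))   # on-path edge
print(edge_cost(0.0, 0.0, 2.0, 1.0))   # laterally deviating edge
```

A sample-based planner using such a cost would then trade off path length against deviation from the taught route when steering around local obstacles.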
The statistical and design considerations that pertain to
dose optimization are discussed. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%.
The traditional more-is-better dose selection paradigm, developed based on
cytotoxic chemotherapeutics, is often problematic when applied to the
development of novel molecularly targeted agents (e.g., kinase inhibitors,
monoclonal antibodies, and antibody-drug conjugates). The US Food and Drug
Administration (FDA) initiated Project Optimus to reform the dose optimization
and dose selection paradigm in oncology drug development and to call for more
attention to benefit-risk considerations.
We systematically investigated the operating characteristics of the seamless
phase 2-3 design as a strategy for dose optimization, where in stage 1
(corresponding to phase 2) patients are randomized to multiple doses, with or
without a control; and in stage 2 (corresponding to phase 3) the efficacy of
the selected optimal dose is evaluated with a randomized concurrent control or
historical control. Depending on whether the concurrent control is included and
the type of endpoints used in stages 1 and 2, we describe four types of
seamless phase 2-3 dose-optimization designs, which are suitable for different
clinical settings. The statistical and design considerations that pertain to
dose optimization are discussed. Simulations show that the phase 2-3
dose-optimization designs are able to control the familywise type I error rates and yield
appropriate statistical power with substantially smaller sample size than the
conventional approach. The sample size savings range from 16.6% to 27.3%,
depending on the design and scenario, with a mean savings of 22.1%. Due to the
interim dose selection, the phase 2-3 dose-optimization design is logistically
and operationally more challenging, and should be carefully planned and
implemented to ensure trial integrity.
Authors: Liyun Jiang, Ying Yuan.
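As a rough illustration of how operating characteristics of such designs can be simulated (this is not the authors' code, and all numbers are made up), the sketch below mimics one simple variant: stage 1 randomizes patients to two doses and a concurrent control on a binary endpoint, the better-performing dose is selected at the interim, and stage 2 enrolls additional patients so the selected dose is compared with the control using all of its data.

```python
import numpy as np

rng = np.random.default_rng(0)
Z_CRIT = 1.96  # one-sided 2.5% critical value

def simulate_trial(p_control, p_doses, n_stage1=40, n_stage2=80):
    """One simulated seamless phase 2-3 trial with interim dose selection.

    Stage 1: n_stage1 patients per arm (each dose and the control).
    Interim: keep the dose with the higher observed response rate.
    Stage 2: n_stage2 more patients on the selected dose and the control;
    the final test pools stage 1 and stage 2 data for those two arms.
    Returns True if the final one-sided z-test is significant.
    """
    resp_doses = [rng.binomial(n_stage1, p) for p in p_doses]
    resp_ctrl = rng.binomial(n_stage1, p_control)
    selected = int(np.argmax(resp_doses))                  # interim selection

    resp_sel = resp_doses[selected] + rng.binomial(n_stage2, p_doses[selected])
    resp_ctrl += rng.binomial(n_stage2, p_control)
    n = n_stage1 + n_stage2

    p_pool = (resp_sel + resp_ctrl) / (2 * n)
    se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
    return (resp_sel / n - resp_ctrl / n) / se > Z_CRIT

# Rejection rate under the null (all arms equal) and under a dose effect.
null_rej = np.mean([simulate_trial(0.3, [0.3, 0.3]) for _ in range(2000)])
alt_rej = np.mean([simulate_trial(0.3, [0.45, 0.5]) for _ in range(2000)])
print(f"empirical type I error ~ {null_rej:.3f}, power ~ {alt_rej:.3f}")
```

Note that naively pooling stage 1 data after picking the best-performing dose tends to inflate the type I error; accounting for this selection is precisely the kind of statistical consideration the seamless designs discussed here must handle.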
We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples.
We present the GPry algorithm for fast Bayesian inference of general
(non-Gaussian) posteriors with a moderate number of parameters. GPry does not
need any pre-training or special hardware such as GPUs, and is intended as a
drop-in replacement for traditional Monte Carlo methods for Bayesian inference.
Our algorithm is based on generating a Gaussian Process surrogate model of the
log-posterior, aided by a Support Vector Machine classifier that excludes
extreme or non-finite values. An active learning scheme allows us to reduce the
number of required posterior evaluations by two orders of magnitude compared to
traditional Monte Carlo inference. Our algorithm allows for parallel
evaluations of the posterior at optimal locations, further reducing wall-clock
times. We significantly improve performance using properties of the posterior
in our active learning scheme and for the definition of the GP prior. In
particular we account for the expected dynamical range of the posterior in
different dimensionalities. We test our model against a number of synthetic and
cosmological examples. GPry outperforms traditional Monte Carlo methods when
the evaluation time of the likelihood (or the calculation of theoretical
observables) is of the order of seconds; for evaluation times of over a minute
it can perform inference in days that would take months using traditional
methods. GPry is distributed as an open source Python package (pip install
gpry) and can also be found at https://github.com/jonaselgammal/GPry.
Authors: Jonas El Gammal, Nils Schöneberg, Jesús Torrado, Christian Fidler.
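The core loop behind such a surrogate approach can be sketched with off-the-shelf tools. The toy below is not GPry itself and does not use its API (the package installs via pip install gpry); it only illustrates, under simple assumptions, how a Gaussian Process surrogate of a log-posterior can be refined by an active-learning rule that evaluates the true posterior where the surrogate is both promising and uncertain.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def log_posterior(x):
    """Toy 2D log-posterior (a correlated Gaussian) standing in for an
    expensive likelihood; the surrogate approach pays off when this is slow."""
    d = x - np.array([0.5, -0.3])
    cov_inv = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 1.0]]))
    return -0.5 * d @ cov_inv @ d

# Small initial design of true log-posterior evaluations.
X = rng.uniform(-3, 3, size=(5, 2))
y = np.array([log_posterior(x) for x in X])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True)

for _ in range(25):                                # active-learning loop
    gp.fit(X, y)
    cand = rng.uniform(-3, 3, size=(500, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    acq = mu + 2.0 * sigma                         # favour high value and high uncertainty
    x_next = cand[np.argmax(acq)]
    X = np.vstack([X, x_next])
    y = np.append(y, log_posterior(x_next))

print(f"{len(y)} evaluations; surrogate at the true mode:",
      gp.predict([[0.5, -0.3]])[0])
```

GPry additionally uses an SVM classifier to exclude extreme or non-finite log-posterior values and batches the proposed points for parallel evaluation, which this sketch omits.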
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting. However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version.
We consider the fundamental scheduling problem of minimizing the sum of
weighted completion times on a single machine in the non-clairvoyant setting.
While no non-preemptive algorithm is constant competitive, Motwani, Phillips,
and Torng (SODA '93) proved that the simple preemptive round robin procedure is
$2$-competitive and that no better competitive ratio is possible, initiating a
long line of research focused on preemptive algorithms for generalized variants
of the problem. As an alternative model, Shmoys, Wein, and Williamson (FOCS
'91) introduced kill-and-restart schedules, where running jobs may be killed
and restarted from scratch later, and analyzed them for the makespan objective.
However, to the best of our knowledge, this concept has never been considered
for the total completion time objective in the non-clairvoyant model.
We contribute to both models: First we give for any $b > 1$ a tight analysis
for the natural $b$-scaling kill-and-restart strategy for scheduling jobs
without release dates, as well as for a randomized variant of it. This implies
a performance guarantee of $(1+3\sqrt{3})\approx 6.197$ for the deterministic
algorithm and of $\approx 3.032$ for the randomized version. Second, we show
that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is
$2$-competitive for jobs released in an online fashion over time, matching the
lower bound by Motwani et al. Using this result as well as the competitiveness
of round robin for multiple machines, we prove performance guarantees of
adaptations of the $b$-scaling algorithm to online release dates and unweighted
jobs on identical parallel machines.
Authors: Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode.
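To make the kill-and-restart idea concrete, here is a simplified simulation (not the paper's exact strategy or analysis; the probing order and variant may differ) in which every unfinished job is probed with geometrically growing budgets b^k and restarted from scratch whenever it exceeds its budget, compared against the clairvoyant optimum for the unweighted total completion time.

```python
import numpy as np

def kill_and_restart_total_completion(p, b=2.0):
    """Simulate a b-scaling kill-and-restart schedule (illustrative sketch).

    In round k every unfinished job is run from scratch for up to b**k time
    units; if its true processing time exceeds the budget it is killed (all
    progress lost) and retried in a later round. Returns the total
    completion time of the resulting single-machine schedule.
    """
    remaining = list(range(len(p)))
    t, total, k = 0.0, 0.0, 0
    while remaining:
        budget = b ** k
        unfinished = []
        for j in remaining:
            if p[j] <= budget:
                t += p[j]              # job j runs to completion
                total += t
            else:
                t += budget            # job j is killed after the budget
                unfinished.append(j)
        remaining = unfinished
        k += 1
    return total

def opt_total_completion(p):
    """Clairvoyant optimum: Shortest Processing Time first is optimal for
    total completion time on one machine without release dates."""
    t = total = 0.0
    for pj in sorted(p):
        t += pj
        total += t
    return total

rng = np.random.default_rng(0)
p = rng.exponential(scale=5.0, size=50)
print("empirical ratio vs clairvoyant optimum:",
      round(kill_and_restart_total_completion(p) / opt_total_completion(p), 2))
```

Such an empirical ratio on a single random instance is, of course, no substitute for the worst-case guarantees proved in the paper.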
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Frozen pretrained models have become a viable alternative to the
pretraining-then-finetuning paradigm for transfer learning. However, with
frozen models there are relatively few parameters available for adapting to
downstream tasks, which is problematic in computer vision where tasks vary
significantly in input/output format and the type of information that is of
value. In this paper, we present a study of frozen pretrained models when
applied to diverse and representative computer vision tasks, including object
detection, semantic segmentation and video action recognition. From this
empirical analysis, our work answers the questions of what pretraining task
fits best with this frozen setting, how to make the frozen setting more
flexible to various downstream tasks, and the effect of larger model sizes. We
additionally examine the upper bound of performance using a giant frozen
pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches
competitive performance on a varied set of major benchmarks with only one
shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object
detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7
top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to
bring greater attention to this promising path of freezing pretrained image
models.
Authors: Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao.
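The frozen-backbone recipe itself is easy to express in code. The sketch below is generic PyTorch with a small stand-in backbone (not the authors' SwinV2-G setup): the pretrained parameters are frozen and only a lightweight task head is trained.

```python
import torch
import torch.nn as nn

class FrozenBackboneModel(nn.Module):
    """Wrap a pretrained backbone, freeze it, and learn only a task head."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze: no gradient updates
            p.requires_grad = False
        self.backbone.eval()                   # keep norm/dropout behaviour fixed
        self.head = nn.Linear(feat_dim, num_classes)   # trainable task head

    def forward(self, x):
        with torch.no_grad():                  # backbone acts as a fixed feature extractor
            feats = self.backbone(x)
        return self.head(feats)

# Stand-in backbone; in practice this would be a large pretrained image model.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
model = FrozenBackboneModel(backbone, feat_dim=256, num_classes=10)

# Only the head's parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
print(model(torch.randn(4, 3, 32, 32)).shape)   # torch.Size([4, 10])
```

For dense tasks such as detection or segmentation the trainable part would be a task-specific decoder head rather than a single linear layer, but the freezing pattern is the same.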
The visual feedback is presented as a point cloud in real-time to the
operator. An
experiment shows the effectiveness of our solution.
In immersive humanoid robot teleoperation, there are three main shortcomings
that can alter the transparency of the visual feedback: (i) the lag between the
motion of the operator's head and that of the robot's head due to network
communication delays or slow robot joint motion; this latency can cause a
noticeable delay in the visual feedback, which jeopardizes the embodiment
quality, can cause dizziness, and affects the interactivity, resulting in
frequent operator motion pauses while waiting for the visual feedback to
settle; (ii) the mismatch between the camera's and the headset's fields of view
(FOV), the former generally being lower; and (iii) the mismatch between the
human's and the robot's range of neck motion, the latter also generally being
lower. To mitigate these drawbacks, we
developed a decoupled viewpoint control solution for a humanoid platform which
allows visual feedback with low-latency and artificially increases the camera's
FOV range to match that of the operator's headset. Our novel solution uses SLAM
technology to enhance the visual feedback from a reconstructed mesh,
complementing the areas that are not covered by the visual feedback from the
robot. The visual feedback is presented as a point cloud in real-time to the
operator. As a result, the operator is fed with real-time vision from the
robot's head orientation by observing the pose of the point cloud. Balancing
this kind of awareness and immersion is important in virtual reality based
teleoperation, considering the safety and robustness of the control system. An
experiment shows the effectiveness of our solution.
Authors: Yang Chen, Leyuan Sun, Mehdi Benallegue, Rafael Cisneros, Rohan P. Singh, Kenji Kaneko, Arnaud Tanguy, Guillaume Caron, Kenji Suzuki, Abderrahmane Kheddar, Fumio Kanehiro.
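The decoupling can be pictured as a change of reference frame: the SLAM-reconstructed cloud lives in a world frame, and each rendered view re-expresses it in the operator's current headset pose rather than in the (possibly lagging) robot head pose, so head motion receives immediate visual feedback. The snippet below is a hypothetical geometric illustration of that idea, not the authors' pipeline.

```python
import numpy as np

def express_in_frame(T_world_from_frame, points_world):
    """Express world-frame points in a target frame given that frame's pose.

    T_world_from_frame: 4x4 homogeneous pose of the target frame in the world.
    points_world: (N, 3) array of points in the world frame.
    """
    T_frame_from_world = np.linalg.inv(T_world_from_frame)
    homog = np.hstack([points_world, np.ones((len(points_world), 1))])
    return (T_frame_from_world @ homog.T).T[:, :3]

# Reconstructed point cloud (e.g. from SLAM) expressed in the world frame.
cloud_world = np.random.rand(1000, 3) * 5.0

# The operator's headset pose is updated at display rate, independently of
# the robot head, whose motion may lag behind.
T_world_from_headset = np.eye(4)
T_world_from_headset[:3, 3] = [0.1, 0.0, 1.6]   # headset has already moved

cloud_in_headset = express_in_frame(T_world_from_headset, cloud_world)
print(cloud_in_headset.shape)   # (1000, 3)
```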
The strong interaction dynamics is defined by an explicit dynamical unitary representation of the Poincar\'e group, where representations of space translations and rotations in the interacting and non-interacting representations are the same. The Argonne V18 potential is used to construct a relativistic nucleon-nucleon interaction reproducing the experimental deuteron binding energy and nucleon-nucleon scattering observables. Our formalism does not include the pion production channel and neglects two-body contributions in the electromagnetic as well as in the weak nuclear current operator.
We build a relativistic model to perform calculations of exclusive,
semi-exclusive and inclusive unpolarized cross sections and various
polarization observables in electron and neutrino scattering experiments with
deuteron targets. The strong interaction dynamics is defined by an explicit
dynamical unitary representation of the Poincar\'e group, where representations
of space translations and rotations in the interacting and non-interacting
representations are the same. The Argonne V18 potential is used to construct a
relativistic nucleon-nucleon interaction reproducing the experimental deuteron
binding energy and nucleon-nucleon scattering observables. Our formalism does
not include the pion production channel and neglects two-body contributions in
the electromagnetic as well as in the weak nuclear current operator. We show
that it is applicable to processes at kinematics where the internal
two-nucleon energy remains below the pion production threshold but the
magnitude of the three-momentum transfer extends at least to several GeV.
Authors: A. Grassi, J. Golak, W. N. Polyzou, R. Skibiński, H. Witała, H. Kamada.
In this paper, we show how different types of distributed mutual exclusion algorithms
can be compared in terms of performance through simulations. A simulation
approach is presented, together with an overview of the relevant evaluation
metrics and statistical processing of the results.
In this paper, we show how different types of distributed mutual exclusion algorithms
can be compared in terms of performance through simulations. A simulation
approach is presented, together with an overview of the relevant evaluation
metrics and statistical processing of the results. The presented simulations
can be used to teach students in a course on distributed software the basics of
distributed mutual exclusion algorithms, together with a detailed comparison
study. Finally, a related work section is provided with relevant references
that contain use cases where distributed mutual exclusion algorithms can be
beneficial.
Authors: Filip De Turck.
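One simple evaluation metric in such a comparison, the number of messages exchanged per critical-section entry, can already be contrasted analytically using the standard textbook counts; the sketch below is illustrative and independent of the simulations described in the paper.

```python
def messages_per_entry(n_processes):
    """Classic textbook message counts per critical-section entry/exit."""
    return {
        "centralized coordinator": 3,              # request, grant, release
        "Ricart-Agrawala": 2 * (n_processes - 1),  # N-1 requests + N-1 replies
    }

for n in (4, 16, 64):
    counts = messages_per_entry(n)
    print(f"N={n:3d}: " + ", ".join(f"{k}={v}" for k, v in counts.items()))
```

A full simulation would additionally measure metrics such as waiting time, throughput, and fairness under varying request loads.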
The metric tensor is built from the proposed symmetric positive semidefinite log-density gradient covariance (LGC) matrices. The LGCs measure the joint information content and dependence structure of both a random variable and the parameters of said variable. The proposed methodology is highly automatic and allows for exploitation of any sparsity associated with the model in question.
A metric tensor for Riemann manifold Monte Carlo particularly suited for
non-linear Bayesian hierarchical models is proposed. The metric tensor is built
from the proposed symmetric positive semidefinite log-density gradient
covariance (LGC) matrices. The LGCs measure the joint information content and
dependence structure of both a random variable and the parameters of said
variable. The proposed methodology is highly automatic and allows for
exploitation of any sparsity associated with the model in question. When
implemented in conjunction with a Riemann manifold variant of the recently
proposed numerical generalized randomized Hamiltonian Monte Carlo processes,
the proposed methodology is highly competitive, in particular for the more
challenging target distributions associated with Bayesian hierarchical models.
Authors: Tore Selland Kleppe.
Robotics is used to foster creativity. This situation applies to food
cooking. Robotic technology in the kitchen can speed up the process and reduce
its workload. This knowledge representation was created using videos of open-source recipes.
Robotics is used to foster creativity. Humans can perform jobs in their
unique manner, depending on the circumstances. This situation applies to food
cooking. Robotic technology in the kitchen can speed up the process and reduce
its workload. However, the potential of robotics in the kitchen is still
unrealized. In this essay, the idea of FOON, a structural knowledge
representation built on insights from human manipulations, is introduced. To
reduce the failure rate and ensure that the task is effectively completed,
three different algorithms have been implemented where weighted values have
been assigned to the manipulations depending on the success rates of motion.
This knowledge representation was created using videos of open-source recipes.
Authors: Sandeep Bondalapati.
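As a hypothetical illustration of the weighting idea, suppose each manipulation motion carries an estimated success rate; a candidate task tree's overall success can then be approximated as the product of its motions' rates, and retrieval can prefer the tree that maximizes it. All names and numbers below are made up for the example and are not taken from the paper.

```python
# Illustrative motion success rates (made up).
MOTION_SUCCESS = {"pick": 0.95, "pour": 0.80, "stir": 0.90, "cut": 0.70}

def tree_success(motions):
    """Approximate a task tree's success probability as the product of the
    success rates of its manipulation motions."""
    prob = 1.0
    for motion in motions:
        prob *= MOTION_SUCCESS[motion]
    return prob

candidate_trees = {
    "tree_A": ["pick", "cut", "pour"],
    "tree_B": ["pick", "pour", "stir"],
}

for name, motions in candidate_trees.items():
    print(name, motions, f"success ~ {tree_success(motions):.2f}")
print("selected:", max(candidate_trees, key=lambda t: tree_success(candidate_trees[t])))
```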
Each of these contributes towards overestimating gravitational lensing and, when combined, they explain the amplitude and scale dependence of the 'lensing is low' problem. We conclude that simplistic structure formation models are inadequate to interpret lensing and clustering together, and that it is crucial to employ more sophisticated models for the upcoming generation of large-scale surveys.
It is now well-established that $\Lambda$CDM predictions overestimate
gravitational lensing measurements around massive galaxies by about 30%, the
so-called 'lensing is low' problem. Using a state-of-the-art hydrodynamical
simulation, we show that this discrepancy reflects shortcomings in standard
structure formation models rather than tensions within the $\Lambda$CDM
paradigm itself. Specifically, this problem results from ignoring a variety of
galaxy formation effects in simple models, including assembly bias, segregation
of satellite galaxies relative to dark matter, and baryonic effects on the
matter distribution. Each of these contributes towards overestimating
gravitational lensing and, when combined, they explain the amplitude and scale
dependence of the 'lensing is low' problem. We conclude that simplistic structure
formation models are inadequate to interpret lensing and clustering together,
and that it is crucial to employ more sophisticated models for the upcoming
generation of large-scale surveys.
Authors: Jonas Chaves-Montero, Raul E. Angulo, Sergio Contreras.
We obtain a norm estimate for this operator. As an
application, we prove that Brennan's conjecture is true for a large class
of quasi-disks.
In this paper we study a multiplier operator which is induced by the
Schwarzian derivative of a univalent function with a qc extension to the
extended complex plane. We obtain a norm estimate for this operator. As an
application, we prove that Brennan's conjecture is true for a large class
of quasi-disks. We also present a new characterization of the asymptotically
conformal curves and the WP curves in terms of the multiplier operator.
Authors: Jianjun Jin.
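For reference, the Schwarzian derivative of a locally univalent function $f$, which induces the multiplier operator studied here, is the standard expression
$$S_f(z) \;=\; \frac{f'''(z)}{f'(z)} \;-\; \frac{3}{2}\left(\frac{f''(z)}{f'(z)}\right)^{2}.$$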
In real systems diagonal and off-diagonal disorder may be interconnected. However, long-range potential fluctuations are quite common in real systems. Thermal treatment is used as a means of fine-tuning the system disorder. The homogeneity of the system was monitored using inelastic light-scattering. This is based on collecting the Raman signal from micron-size spots across the sample. The analysis establishes that heterogeneity and disorder are correlated.
Disorder and homogeneity are two concepts that refer to spatial variation of
the system potential. In condensed-matter systems disorder is typically divided
into two types: those with local parameters varying from site to site (diagonal
disorder) and those characterized by random transfer-integral values
(off-diagonal disorder). Amorphous systems in particular exhibit off-diagonal
disorder due to random positions of their constituents. In real systems
diagonal and off-diagonal disorder may be interconnected. The formal depiction
of disorder as local deviations from a common value focuses attention on the
short-range components of the potential landscape. However, long-range
potential fluctuations are quite common in real systems. In this work we seek to
find a correlation between disorder and homogeneity using amorphous
indium-oxide films with different carrier concentrations and different degrees
of disorder. Thermal treatment is used as a means of fine-tuning the system
disorder. In this process the resistance of the sample decreases while its
amorphous structure and chemical composition are preserved. The reduced
resistivity affects the Ioffe-Regel parameter that is taken as a relative
measure of disorder in a given sample. The homogeneity of the system was
monitored using inelastic light-scattering. This is based on collecting the
Raman signal from micron-size spots across the sample. The statistics of these
low-energy data are compared with the sample disorder independently estimated
from transport measurements. The analysis establishes that heterogeneity and
disorder are correlated.
Authors: Z. Ovadyahu.
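For context, in the free-electron approximation (a standard estimate, not necessarily the exact convention adopted in the paper) the Ioffe-Regel parameter follows from the measured resistivity $\rho$ and carrier concentration $n$ as
$$k_F\ell \;=\; \frac{\hbar\,(3\pi^{2})^{2/3}}{e^{2}\,\rho\, n^{1/3}},$$
so the resistivity drop produced by thermal treatment at a given carrier concentration directly raises $k_F\ell$, i.e. lowers the nominal disorder.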
A lower limit to the electron number densities of the plasmoids was obtained,
ranging between 3.4-6.1$\times$10$^{8}$ cm$^{-3}$ for the SPs
(3.3-5.9$\times$10$^{8}$ cm$^{-3}$ for the FPs), whereas for the spire it
ranged from 2.6-3.2$\times$10$^{8}$ cm$^{-3}$. This suggests that
the blobs are plasmoids induced by a tearing-mode instability.
We have carried out a comprehensive study of the temperature structure of
plasmoids, which successively occurred in recurrent active region jets. The
multithermal plasmoids were seen to be travelling along the multi-threaded
spire as well as at the footpoint region in the EUV/UV images recorded by the
Atmospheric Imaging Assembly (AIA). The Differential Emission Measure (DEM)
analysis was performed using EUV AIA images, and the high-temperature part of
the DEM was constrained by combining X-ray images from the X-ray telescope
(XRT/Hinode). We observed a systematic rise and fall in brightness, electron
number densities and the peak temperatures of the spire plasmoid during its
propagation along the jet. The plasmoids at the footpoint (FPs) (1.0-2.5 MK)
and plasmoids at the spire (SPs) (1.0-2.24 MK) were found to have similar peak
temperatures, whereas the FPs have higher DEM weighted temperatures (2.2-5.7
MK) than the SPs (1.3-3.0 MK). A lower limit to the electron number densities
of the plasmoids was obtained, ranging between 3.4-6.1$\times$10$^{8}$
cm$^{-3}$ for the SPs (3.3-5.9$\times$10$^{8}$ cm$^{-3}$ for the FPs), whereas
for the spire it ranged from 2.6-3.2$\times$10$^{8}$ cm$^{-3}$. Our analysis shows
that the emission of these plasmoids starts close to the base of the jet(s),
where we believe that a strong current interface is formed. This suggests that
the blobs are plasmoids induced by a tearing-mode instability.
Authors: Sargam M. Mulay, Durgesh Tripathi, Helen Mason, Giulio Del Zanna, Vasilis Archontis.
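Density lower limits of this kind are commonly derived from the DEM-integrated emission measure by assuming a filling factor of unity and a line-of-sight depth $l$ comparable to the observed plasmoid width,
$$\mathrm{EM} \;=\; \int n_e^{2}\,\mathrm{d}l \;\approx\; n_e^{2}\,l \quad\Longrightarrow\quad n_e \;\gtrsim\; \sqrt{\mathrm{EM}/l},$$
since any filling factor below unity would only raise the estimate; the paper's exact assumptions may differ.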
The expansion law proposed by Padmanabhan suggests that the evolution of the volume of the horizon is due to the difference between the degrees of freedom on the horizon and the degrees of freedom in the bulk enclosed by the horizon. Contrary to the conventional approach, we expressed the degrees of freedom of the horizon in terms of its surface energy. We have also expressed the modified expansion law in terms of the cosmic components.
The expansion law proposed by Padmanabhan suggests that the evolution of the
volume of the horizon is due to the difference between the degrees of freedom
on the horizon and the degrees of freedom in the bulk enclosed by the horizon.
In formulating this law, Padmanabhan used the temperature, $T=H/2\pi$, for a
dynamical expansion. In this work, we modified the expansion law using
Kodama-Hayward temperature, the dynamical temperature, for the horizon, first
in (3+1) Einstein's gravity and extended it to higher order gravity theories
such as (n+1) Einstein gravity, Gauss-Bonnet gravity, and more general Lovelock
gravity. Contrary to the conventional approach, we expressed the degrees of
freedom of the horizon in terms of its surface energy. We have also expressed
the modified expansion law in terms of the cosmic components. It then turns
out that it is possible to express the modified expansion law in a form as if
$T=H/2\pi$ were the temperature of the dynamical horizon.
Authors: Muhsinath. M, Hassan Basari V. T., Titus K. Mathew.
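For reference, the original (3+1)-dimensional form of the law, which this work modifies by adopting the Kodama-Hayward temperature, can be written schematically (in natural units) as
$$\frac{dV}{dt} \;=\; L_P^{2}\left(N_{\rm sur}-N_{\rm bulk}\right),\qquad N_{\rm sur}=\frac{4\pi}{L_P^{2}H^{2}},\qquad N_{\rm bulk}=-\frac{2\,(\rho+3p)\,V}{k_B T},$$
with $V=4\pi/(3H^{3})$ the Hubble volume and $T=H/2\pi$.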