A research grant is available at the University of Salento, Lecce, Italy.
The research shall be developed in close collaboration with Echolight
(https://www.echolightmedical.com, 12 months out of 18 in total), and is
related to the following topic:
The main goal of the project is to improve the tuning and calibration
process of noninvasive diagnostic imaging devices. One
of the most critical steps during the implementation of a diagnostic
imaging device is its calibration. In fact, poor calibration can lead to
unreliable instrument performance with noisy images and the presence of
unwanted artefacts that could mislead the diagnosis made by the
physician. The calibration phase involves a repeated try-and-check
procedure during which the instrument parameters are repeatedly changed
in order to obtain images that are sharp and as closely matched as
possible to the target reference. This phase often requires considerable
time and expert supervision; moreover, since calibration is carried out
not only after the production of the diagnostic instrument but also
after several months of its use in the operational context, automating
this process would both improve the diagnostic yield and reduce downtime
and recalibration effort. The project aims to
improve and automate the calibration process by introducing machine
learning techniques for image classification. The results of the project
find application on all instruments used for imaging, whether they are
based on MRI, computed tomography, X-ray or ultrasound techniques. In
fact, the goal is to relate the configuration parameters of the
instrument to the images it produces in order to eliminate noise and
artefacts produced by misconfiguration. Nevertheless, the project will
consider as a case study the images produced by an ultrasound-based
device manufactured by Echolight S.p.A. Medical devices produced by
Echolight S.p.A. exploit images derived from ultrasound scans (B-Mode)
to automatically identify anatomical reference targets (lumbar vertebrae
bone interfaces of the L1-L4 tract and proximal femur bone interface).
Once the regions of interest (ROIs) are identified, a proprietary
algorithm evaluates the spectral characteristics of selected portions of
the raw ultrasound signal related to the analyzed bone tissues. From the
analysis of the raw signal characteristics, a measure of the bone
mineral density (BMD) of the analyzed anatomical sites is determined. In
order to provide reliable, repeatable, and accurate BMD measurements,
special calibration and testing procedures have been developed; these,
however, require several manual measurements and checks, resulting in a
high human-time commitment and, consequently, a risk of human error in
the collection and interpretation of the measurements and results.
Leveraging the image processing and image
classification techniques developed within the project, the algorithm
will provide output indicative of the presence of artefacts or other
alterations in the performance of the ultrasound system in production in
order to possibly intervene with further modifications and calibrations.
As part of the project, standard conditions for conducting tests will
also be defined through the use of specific ultrasound phantoms provided
by the company.
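As a purely illustrative sketch (not part of the call, and all names here are hypothetical), the try-and-check loop described above can in principle be automated by scoring candidate parameter settings with an image-quality metric and keeping the best setting; a real system would use the classifier developed within the project instead of the stand-in score below.

```python
# Toy sketch: automated try-and-check calibration.
# A hypothetical quality score stands in for the learned artefact/sharpness
# classifier; by construction the optimum is at gain=0.5, focus=0.3.

def image_quality(gain, focus):
    """Stand-in for a learned image-quality score (higher is better)."""
    return 1.0 - (gain - 0.5) ** 2 - (focus - 0.3) ** 2

def calibrate(score, gains, focuses):
    """Grid-search the parameter space, as an operator would repeatedly
    try settings and check the resulting image."""
    best = max((score(g, f), g, f) for g in gains for f in focuses)
    return best[1], best[2]

grid = [i / 10 for i in range(11)]
gain, focus = calibrate(image_quality, grid, grid)
print(gain, focus)
```

In practice the grid search would be replaced by a smarter optimizer, and the score by the project's artefact-detection output, but the loop structure is the same.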
Prof. Italo Epicoco (italo.epicoco(a)unisalento.it) is the scientific
responsible for this research grant.
DEADLINE: June 24, 2022
ALL INCLUSIVE GROSS AMOUNT (for 18 months): 29,050.50 euro (i.e.,
19,367 euro annual gross amount)
Attached are an unofficial English translation of the call and the
corresponding application and self-declaration forms, also translated
into English.
NOTE: Foreign candidates are strongly encouraged to contact Prof.
Epicoco by email if they need help/support in preparing their
application: he will be glad to assist.
*******************************************************************************************
Prof. Massimo Cafaro, Ph.D.
Associate Professor of Parallel Algorithms and Data Mining/Machine Learning
Head of HPC Lab https://hpc-lab.unisalento.it
Director of Master in Applied Data Science
Department of Engineering for Innovation
University of Salento, Lecce, Italy
Via per Monteroni
73100 Lecce, Italy
Voice/Fax +39 0832 297371
Web https://www.massimocafaro.it
Web https://www.unisalento.it/people/massimo.cafaro
E-mail massimo.cafaro(a)unisalento.it
E-mail cafaro(a)ieee.org
E-mail cafaro(a)acm.org
INGV
National Institute of Geophysics and Volcanology
Via di Vigna Murata 605
Roma
CMCC Foundation
Euro-Mediterranean Center on Climate Change
Via Augusto Imperatore, 16 - 73100 Lecce
massimo.cafaro(a)cmcc.it
*******************************************************************************************
--
The first research grant shall be developed in close collaboration with
Planetek Italia (12 months out of 18 in total), and is related to the
following topic:
Machine Learning for Space Weather
The proposed research project is concerned with the study of "Space
Weather Phenomena" and the development of knowledge about the mechanisms
and effects of solar-derived perturbative phenomena developing in
circumterrestrial space and impacting the ionized atmosphere
(ionosphere). In the project emphasis is given to the study and modeling
of the dynamics of the ionospheric plasma and the electron density
irregularities in it on a global scale, in order to improve the
capability of nowcasting and long-term (24-48 hours in advance)
forecasting of the ionospheric response to Space Weather events over the
Mediterranean area. The modeling approach is developed through
recently introduced machine learning techniques (Cesaroni et al. 2020),
whose results point to this approach as a strategy to extend the time
horizon of ionospheric forecasting, a fundamental requirement for
increasing knowledge of Space Weather phenomena in near-Earth space.
In addition, the growing demand for semi-empirical approaches for
real-time mitigation of errors introduced by the ionosphere on
positioning and navigation systems makes the proposed topic a
significant contribution in the area of "services and research for
society", in relation to the strategic objective "Development of a
National Space Weather Service". This objective concerns countermeasures
to contain the negative effects that the irregular and disturbed
ionosphere can have on technological systems in use in modern society,
such as satellite navigation and positioning systems (GNSS, Global
Navigation Satellite Systems), trans-horizon HF radio communications,
and L-band satellite communication systems. Such
systems are of interest to a variety of end users who can be identified
as users of the service in which the developed products may be embedded.
Examples of users may include: precision agriculture operators,
operators in the field of mapping, aviation, and radio communications
operators for emergency management in civil defense.
Cesaroni, C., Spogli, L., Aragon-Angel, A., Fiocca, M., Dear, V., De
Franceschi, G., & Romano, V. (2020). Neural network based model for
global Total Electron Content forecasting. Journal of Space Weather and
Space Climate, 10, 11.
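The reference above trains a neural network to forecast Total Electron Content (TEC); as a minimal stand-in for that idea (an assumption for illustration, not the paper's method), learning tomorrow's ionosphere from its recent past can be sketched with a tiny autoregressive model fitted by least squares on a synthetic, periodic TEC-like series.

```python
import math

# Synthetic TEC-like series: a 24-sample diurnal oscillation around 20 TECu.
omega = 2 * math.pi / 24
series = [20 + 5 * math.sin(omega * t) for t in range(200)]

# Fit x_t ≈ a*x_{t-1} + b*x_{t-2} on the zero-mean series
# (least squares, 2x2 normal equations solved in closed form).
x = [v - 20 for v in series]
s11 = sum(x[t - 1] * x[t - 1] for t in range(2, len(x)))
s22 = sum(x[t - 2] * x[t - 2] for t in range(2, len(x)))
s12 = sum(x[t - 1] * x[t - 2] for t in range(2, len(x)))
r1 = sum(x[t] * x[t - 1] for t in range(2, len(x)))
r2 = sum(x[t] * x[t - 2] for t in range(2, len(x)))
det = s11 * s22 - s12 * s12
a = (r1 * s22 - r2 * s12) / det
b = (s11 * r2 - s12 * r1) / det

# Iterate the fitted model 24 steps forward ("24 hours ahead").
h1, h2 = x[-1], x[-2]
for _ in range(24):
    h1, h2 = a * h1 + b * h2, h1
forecast = h1 + 20
truth = 20 + 5 * math.sin(omega * (len(series) - 1 + 24))
print(forecast, truth)
```

A real ionospheric model must of course handle irregular, storm-driven dynamics that a linear fit cannot capture, which is exactly where the neural network approaches cited above come in.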
The second research grant shall be developed in close collaboration with
GE Avio (12 months out of 18 in total), and is related to the following
topic:
Operative Framework For HPC (Off-HPC)
High-performance computing (HPC) clouds are becoming a complement or, in
some cases, an alternative to on-premise clusters for running
scientific-technical, engineering, and business analytics service
applications. Most research efforts in the area of cloud HPC aim to
analyze and understand the cost-benefit of migrating computationally
intensive applications from on-premise environments to public cloud
platforms. Industry trends show that on-premise/cloud hybrid
environments are the natural path to get the best out of on-premise and
cloud resources. Workloads that are stable in their computing-resource
requirements and sensitive in terms of the need to protect processed
information can run on on-premise resources, while peak computational
loads can exploit remote cloud resources, typically available under a
"pay-as-you-go" consumption model. The main difficulties in using cloud
solutions to run HPC applications stem from how their characteristics
and properties differ from those of traditional cloud services handling,
for example, standard enterprise applications, Web applications, data
storage or backup, or business intelligence. HPC applications tend to
require more computing power than application services typically
delivered in cloud environments. These processing requirements arise not
only from the characteristics of the CPUs (Central Processing Units),
but also from the amount of memory and network speed to support their
proper execution. In addition, such applications may have an execution
model different from that of dedicated cloud application services, which
instead run 24/7. HPC applications tend to run in batch
mode. Users execute a series of computational jobs, consisting of
instances of the application with different inputs, and wait until
results are generated to decide whether new computational tasks need to
be submitted and executed. Therefore, moving HPC applications to cloud
platforms requires not only a focus on resource allocation in the
infrastructure in use and its optimization, but also on how users
interact with this new environment. Research in the area of cloud HPC
can be classified into three broad categories: (i) feasibility studies
on adopting the cloud to replace or complement on-premise computing
clusters to run HPC applications; (ii) performance optimization of cloud
resources for running HPC applications; and (iii) services to simplify
the use of cloud HPC, particularly for users who are not specialized in
data- and information-processing technologies. This
research project intends to focus on study activities within the first
category, in which, more specifically, there are four main aspects that
should be considered: (i) metrics used to assess how feasible the use of
HPC cloud is; (ii) resources used in computational experiments; (iii)
computational infrastructure; and (iv) software, which includes both
well-known HPC benchmarks and computational tools, algorithms, or
methodologies related to specific business application cases. Currently,
the company uses HPC applications running mostly on on-premise systems
but faces issues related to the need for greater computational resources
that can be met through flexible and scalable architectures provided by
cloud technologies. The need is to build clear technology and governance
references for cloud or hybrid infrastructures. The research project
will therefore aim to carefully analyze the state of the art of hybrid
HPC solutions, define criteria for benchmarking different solutions,
develop an operational framework that includes the operational and
economic management aspects of a hybrid HPC solution, and finally
implement one or more industrial pilots.
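A minimal illustration of the hybrid policy described above (all names, core counts, and prices are invented for the sketch): sensitive workloads stay on-premise, while jobs that exceed the free on-premise capacity burst to pay-as-you-go cloud nodes and accrue a cost.

```python
# Toy hybrid HPC dispatcher: sensitive jobs run on-premise; once the
# on-premise cores are exhausted, remaining jobs burst to the cloud
# under a pay-as-you-go rate. All figures are illustrative.

ONPREM_CORES = 64
CLOUD_PRICE_PER_CORE_HOUR = 0.05  # hypothetical rate

def dispatch(jobs):
    """jobs: dict of name -> {'cores', 'hours', 'sensitive'}."""
    free = ONPREM_CORES
    placement, cloud_cost = {}, 0.0
    for name, job in jobs.items():
        if job["sensitive"] or job["cores"] <= free:
            placement[name] = "on-premise"
            free -= min(free, job["cores"])
        else:
            placement[name] = "cloud"
            cloud_cost += (job["cores"] * job["hours"]
                           * CLOUD_PRICE_PER_CORE_HOUR)
    return placement, cloud_cost

jobs = {
    "cfd_baseline": {"cores": 48, "hours": 10, "sensitive": True},
    "doe_sweep":    {"cores": 128, "hours": 2, "sensitive": False},
}
placement, cost = dispatch(jobs)
print(placement, cost)
```

An operational framework of the kind the project targets would add the governance layer around such a policy: benchmarking criteria, data-protection rules, and the economic model for when bursting is worthwhile.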
DEADLINE: June 24, 2022
ALL INCLUSIVE GROSS AMOUNT (for 18 months): 29,050.50 euro (i.e.,
19,367 euro annual gross amount)
NOTE: Foreign candidates are strongly encouraged to contact me by email
if they need help/support in order to prepare their application: I will
be glad to assist.
Here you can download an unofficial English translation of the call:
RIPARTI-call
--
Dear all,
we are looking for a bright and highly motivated student for one PhD position
at the Department of Information Engineering at the University of Pisa.
The position is funded within the framework of the "Crosslab: Innovation
for Industry 4.0" project.
The research activities will be carried out in the "Cloud Computing, Big
Data & Cybersecurity" laboratory (
https://crosslab.dii.unipi.it/cloud-computing-big-data-cybersecurity-lab).
A short description of the research topic can be found below.
Interested people are requested to send an expression of interest by
submitting a curriculum vitae, a one-page research statement showing
motivation and understanding of the topic of the position, and the official
Transcript of Record. The expression of interest must be sent by email to
Carlo Vallati at carlo.vallati(a)unipi.it with the reference [PhD expression
of interest] in the subject of the email. Applications will be reviewed
continuously until 5th July 2022.
The starting date of the PhD position is Fall 2022. The duration of the PhD
is three years. The compensation is the standard Italian PhD student
stipend, about 1,150 euro/month net.
================================================
Edge Computing 2.0: Efficient Deep Learning at the Edge
================================================
Abstract: Deep neural networks (DNNs) have achieved unprecedented success
in the field of artificial intelligence (AI), including computer vision,
natural language processing, and speech recognition. However, their
superior performance comes at the considerable cost of computational
complexity, which greatly hinders their applications in many
resource-constrained devices, such as Edge computing nodes and Internet of
Things (IoT) devices. Therefore, methods and techniques that can lift the
efficiency bottleneck while preserving the high accuracy of DNNs are in
great demand to enable numerous edge AI applications.
The proposed research plan involves the analysis and identification of the
challenges related to DNNs for time series prediction both at training
time, on GPU-enabled resource-constrained devices, and at inference
time, on microcontrollers, leveraging available open-source software such
as TensorFlow and PyTorch. The final goal of the research activity will be
the definition, design, implementation, and testing of novel algorithms to
improve the efficiency of DNNs on edge and IoT devices in real-world
scenarios.
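One common route to the efficiency the abstract calls for, shown here as a generic sketch rather than the position's actual method, is post-training quantization: storing weights as 8-bit integers plus a scale factor, trading a bounded rounding error for roughly a 4x memory reduction versus float32.

```python
# Minimal symmetric int8 post-training quantization of a weight tensor.
weights = [0.82, -1.37, 0.05, 2.11, -0.66, 1.94]

scale = max(abs(w) for w in weights) / 127  # map the largest weight to ±127
q = [round(w / scale) for w in weights]     # int8 codes in [-127, 127]
deq = [qi * scale for qi in q]              # dequantized approximation

max_err = max(abs(w - d) for w, d in zip(weights, deq))
print(q, max_err)  # per-weight rounding error is at most scale / 2
```

Frameworks such as TensorFlow Lite apply this idea per layer (often with calibration data for activations), which is what makes DNN inference feasible on microcontrollers.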
Reference contact: Carlo Vallati, email: carlo.vallati(a)unipi.it
--
Carlo Vallati, PhD
Associate Professor
Computer Networking Group
Department of Information Engineering
University of Pisa
Via Diotisalvi 2, 56122 Pisa - Italy
Ph. : (+39) 050-2217.572 (direct) .599 (switch)
Fax : (+39) 050-2217.600
Skype: warner83
E-mail: carlo.vallati@iet.unipi.it
Web: http://www.iet.unipi.it/c.vallati/
-----------------------------------------------