(Our apologies if you received multiple copies of this CFP)
==============================================
Future Generation Computer Systems
Special Issue on Scalable Compute Continuum
The full call for papers is available on the official FGCS website:
https://www.sciencedirect.com/journal/future-generation-computer-systems/ab…
==============================================
Motivation and Scope
=======================
The “Compute Continuum” paradigm promises to manage the heterogeneity and
dynamism of widespread computing resources, aiming to simplify the
execution of distributed applications while improving data locality,
performance, availability, adaptability, energy management, and other
non-functional features. This is made possible by overcoming the
fragmentation of IoT-edge-cloud resources and their segregation in tiers,
enabling applications to be seamlessly executed and relocated along a
continuum of resources spanning from the edge to the cloud.
By distributing resources across the edge-to-cloud infrastructure, the
emerging Compute Continuum paradigm supports the execution of
data-intensive applications as close as possible to data sources and end
users. Besides consolidated vertical and horizontal scaling patterns, this
paradigm also offers finer-grained adaptation actions that strictly depend
on the specific infrastructure
components (e.g., to reduce energy consumption, or to exploit specific
hardware such as GPUs and FPGAs). This enables the enhancement of
latency-sensitive applications, the reduction of network bandwidth
consumption, the improvement of privacy protection, and the development of
novel services aimed at improving living, health, safety, and mobility. All
of this should be achievable by application developers without having to
worry about how and where the developed application components will be
executed. Therefore, to unleash the true potential offered by the Compute
Continuum, autonomous, proactive, and infrastructure-aware management is
desirable, if not mandatory, calling for novel interdisciplinary approaches
that exploit optimization theory, control theory, machine learning, and
artificial intelligence methods.
This special issue aims to investigate and gather research contributions on
the emerging Compute Continuum, seeking solutions for running distributed
applications while efficiently managing heterogeneous and widespread
computing resources.
Topics of interest include, but are not limited to, the following:
- Scalable architectures and systems for the Compute Continuum;
- System software for cloud-edge-IoT orchestration;
- Distributed and decentralized management of resources and application
deployment in the Compute Continuum;
- Programming models, languages and patterns for the Compute Continuum;
- Compute Continuum performance modeling and analysis;
- Compute Continuum as a service;
- Energy-efficient solutions for sustainable Compute Continuum;
- AI in the Compute Continuum;
- Scalable applications for Compute Continuum (IoT, microservices,
serverless);
- Data-intensive and stream processing systems and applications in the
Compute Continuum;
- Digital Twins and industry applications in the Compute Continuum;
- Prototypes and real-life experiments involving the Compute Continuum;
- Benchmarks and experimental platforms for reproducible experiments in the
Compute Continuum.
Guest Editors
=======================
Valeria Cardellini, University of Rome Tor Vergata, Italy.
Patrizio Dazzi, University of Pisa, Italy.
Gabriele Mencagli, University of Pisa, Italy.
Matteo Nardelli, Bank of Italy, Italy.
Massimo Torquati, University of Pisa, Italy.
Important Dates
=======================
Submission portal opens: May 1, 2023
Deadline for paper submission: November 3, 2023
Latest acceptance deadline for all papers: March 8, 2024
Manuscript Submission Instructions
=======================
The FGCS submission system
(https://www.editorialmanager.com/FGCS/default.aspx) will be open for
submissions to our Special Issue from May 1, 2023. When submitting your
manuscript, please select the article type VSI: SI_SCC_ScalCompContinuum.
All submissions deemed suitable by the editors to be sent for peer review
will be reviewed by at least two independent reviewers. Once your
manuscript is accepted, it will go into production to be published in the
special issue.
Looking forward to receiving your excellent submissions soon.
Best regards,
Valeria Cardellini, Patrizio Dazzi, Gabriele Mencagli, Matteo Nardelli, and
Massimo Torquati
Call for Papers PAW-ATM 2023:
Parallel Applications Workshop, Alternatives To MPI+X
https://sourceryinstitute.github.io/PAW/
Held in conjunction with SC23, Denver, CO
Submissions deadline: July 24, 2023
Notification to authors: August 31, 2023
Workshop date: November 13, 2023
Summary
As supercomputers become more and more powerful, the number and diversity of applications that can be tackled with these machines grows. Unfortunately, the architectural complexity of these supercomputers grows as well, with heterogeneous processors, multiple levels of memory hierarchy, and many ways to move data and synchronize between processors. The MPI+X programming model, use of which is considered by many to be standard practice, demands that a programmer be expert in both the application domain and the low-level details of the architecture(s) on which that application will be deployed, and the availability of such superhuman programmers is a critical bottleneck. Things become more complicated when evolution and change in the underlying architecture translate into significant re-engineering of the MPI+X code to maintain performance.
Numerous alternatives to the MPI+X model exist, and by raising the level of abstraction on the application domain and/or the target architecture, they offer the ability for "mere mortal" programmers to take advantage of the supercomputing resources that are available to advance science and tackle urgent real-world problems. However, compared to the MPI+X approach, these alternatives generally lack two things. First, they aren't as well known as MPI+X, and a domain scientist may simply not be aware of models that are a good fit for their domain. Second, they are less mature than MPI+X and likely have more functionality or performance "potholes" that need only be identified to be addressed.
PAW-ATM is a forum for discussing HPC applications written in alternatives to MPI+X. Its goal is to bring together application experts and proponents of high-level languages to present concrete example uses of such alternatives, describing their benefits and challenges.
Scope and Aims
The PAW-ATM workshop is designed to be a forum for discussion of supercomputing-scale parallel applications and their implementation in programming models outside of the dominant MPI+X paradigm. Papers and talks will explore the benefits (or perhaps drawbacks) of implementing specific applications with alternatives to MPI+X, whether those benefits are in performance, scalability, productivity, or some other metric important to that application domain. Presenters are encouraged to generalize the experience with their application to other domains in science and engineering and to bring up specific areas of improvement for the model(s) used in the implementation.
In doing so, our hope is to create a setting in which application authors, language designers, and architects can present and discuss the state of the art in alternative scalable programming models, while also wrestling with how to increase their effectiveness and adoption. Beyond well-established HPC scientific simulations, we also encourage submissions exploring artificial intelligence, big data analytics, machine learning, and other emerging application areas.
Topics of interest include, but are not limited to:
* Novel application development using high-level parallel programming languages and frameworks.
* Examples that demonstrate performance, compiler optimization, error checking, and reduced software complexity.
* Applications from artificial intelligence, data analytics, bioinformatics, and other novel areas.
* Performance evaluation of applications developed using alternatives to MPI+X and comparisons to standard programming models.
* Novel algorithms enabled by high-level parallel abstractions.
* Experience with the use of new compilers and runtime environments.
* Libraries using or supporting alternatives to MPI+X.
* Benefits of hardware abstraction and data locality on algorithm implementation.
Papers that include descriptions of applications demonstrating the use of alternative programming models will be given higher priority.
Submissions
Submissions are solicited in 2 categories:
1. Full-length papers presenting novel research results:
Full-length papers will be published in the workshop proceedings. Submitted papers must describe original work that has not appeared in, nor is under consideration for, another conference or journal. Papers shall be eight (8) pages minimum and not exceed ten (10) pages including text, appendices, and figures, but excluding bibliography and acknowledgments. Submissions shall not exceed twelve (12) pages total under any circumstance.
2. Extended abstracts summarizing preliminary/published results:
Extended abstracts will be evaluated separately and will not be included in the published proceedings; they are intended to propose timely communications of novel work that will be formally submitted elsewhere at a later stage, and/or of already published work that would be of interest to the PAW-ATM audience in terms of topic and timeliness. Extended abstracts shall not exceed four (4) pages.
See https://sourceryinstitute.github.io/PAW/ for further details.
WORKSHOP CHAIR
* Karla Morris - Sandia National Laboratories
ORGANIZING COMMITTEE
* Engin Kayraklioglu - Hewlett Packard Enterprise
* Irene Moulitsas - Cranfield University
* Elliott Slaughter - SLAC National Accelerator Laboratory
PROGRAM COMMITTEE CO-CHAIRS
* Bill Long - Hewlett Packard Enterprise
* Daniele Lezzi - Barcelona Supercomputing Center
PROGRAM COMMITTEE
* Dan Bonachea - Lawrence Berkeley National Laboratory
* Jan Ciesko - Sandia National Laboratories
* Iacopo Colonnelli - University of Turin
* Mario Di Renzo - University of Salento and Stanford University
* Salvatore Filippone - Università di Roma Tor Vergata
* Magne Haveraaen - University of Bergen
* Peter Hawkins - Google
* Engin Kayraklioglu - Hewlett Packard Enterprise
* Jannis Klinkenberg - RWTH Aachen University
* Daniele Lezzi - Barcelona Supercomputing Center
* Bill Long - Hewlett Packard Enterprise
* Francesc Lordan - Barcelona Supercomputing Center
* Lee Margetts - University of Manchester
* Fabrizio Marozzo - University of Calabria
* Josh Milthorpe - Australian National University
* Henry Monge Camacho - Oak Ridge National Laboratory
* Karla Morris - Sandia National Laboratories
* Irene Moulitsas - Cranfield University
* Elliott Slaughter - SLAC National Accelerator Laboratory
* Kenjiro Taura - University of Tokyo
* Miwako Tsuji - RIKEN Advanced Institute for Computational Science
ADVISORY COMMITTEE
* Bradford L. Chamberlain - Hewlett Packard Enterprise
* Damian W. I. Rouson - Lawrence Berkeley National Laboratory
* Katherine A. Yelick - Lawrence Berkeley National Laboratory
ARTIFACT EVALUATION COMMITTEE CHAIR
* Irene Moulitsas - Cranfield University
ARTIFACT EVALUATION COMMITTEE MEMBERS
* Scott Baden - University of California San Diego
* Desmond Bisandu - Cranfield University
* Valentin Churavy - Massachusetts Institute of Technology
* Fabio Durastante - University of Pisa
* Yakup Koray Budanaz - Technical University of Munich
* Boyu Kuang - Cranfield University
* Soren Rasmussen - National Center for Atmospheric Research
* Anjiang Wei - Stanford University
IMPORTANT DATES
* Submissions deadline: July 24, 2023
* Manuscripts review period: August 2-23, 2023
* Notification to authors: August 31, 2023
* Updated AD/AE appendix due from authors: September 4, 2023
* PAW-ATM workshop date: November 13, 2023
*** Apologies for Cross-Posting ***
-----------------------------------------------------------------------
CFP *Last Call* - EAI CloudComp 2023
The 12th International Conference on Cloud Computing
(EAI CloudComp 2023)
https://cloudcomp.eai-conferences.org/2023/
25-26 October 2023, Hong Kong S.A.R.
-----------------------------------------------------------------------
*Submission Deadline: 20 May, 2023 (AoE)*
-----------------------------------------------------------------------
# Scope #
As a novel computing paradigm, cloud computing has changed the world
tremendously in the past two decades. It offers scalable computing and
storage resources as services to support various applications
cost-effectively and flexibly. It has also promoted the proliferation of
big data analytics and deep learning in the last decade. Currently, cloud
computing continues to power and drive technological development and
advances in areas such as industrial manufacturing, medical healthcare,
intelligent transportation, smart cities, e-government, e-commerce, and
e-finance. It is also the backbone for emerging edge/fog computing
technologies and Internet of Things (IoT) applications, which constitute
a rapidly evolving ecosystem that profoundly changes the world’s IT
landscape and people’s lives.
CloudComp 2023 aims to bring together researchers and industry developers
to discuss and share their recent viewpoints on and contributions to
cloud computing. It presents recent theories, experiences, and results
obtained in a wide range of areas relevant to cloud computing, giving
researchers and industrial practitioners an opportunity to gain in-depth
insight into current and future cloud computing technologies.
# Topics #
We welcome contributions from the following fields:
- Cloud Architecture
  • X as a Service Paradigms
  • Cloud Computing Architecture
  • Public/Private/Hybrid Cloud
  • Edge Computing / Fog Computing
  • IoT/Sensor Clouds
  • Simulation and Virtualization in Cloud Computing
  • Cloud Data Management Architecture
- Cloud Management
  • Cloud Resource Scheduling and Computation Offloading
  • Edge Resource Management and Optimization
  • Quality of Service (QoS) in Cloud Computing
  • Cost Models and Optimization in Cloud Computing
  • Privacy, Security and Trust Issues in Cloud Computing
  • Heterogeneity in Cloud Computing
  • Energy-saving Issues in Cloud Computing
  • Cloud Data Management
- Cloud Applications
  • Cloud Computing for Big Data Processing
  • High Performance Computing based on Cloud
  • Cloud Supported Smart X Technologies
  • Distributed and Parallel Query Processing
  • 5G and Cloud Computing
  • Blockchain Techniques for Cloud Computing
  • Edge Computing Applications
# Publication #
All registered papers will be submitted for publication by Springer and
made available through the SpringerLink Digital Library.
Proceedings will be submitted for inclusion in leading indexing services,
such as Web of Science, Compendex, Scopus, DBLP, EU Digital Library,
IO-Port, MathSciNet, Inspec and Zentralblatt MATH.
Authors of selected papers will be invited to submit an extended version
to:
• Wireless Networks (WINET) Journal [IF: 2.602 (2020)]
Authors of accepted papers are eligible to submit an extended version via
a fast track to:
• EAI Endorsed Transactions on Cloud Systems
• EAI Endorsed Transactions on Scalable Information Systems
Additional publication opportunities:
• Sensors
• EAI Transactions series (Open Access)
• EAI/Springer Innovations in Communications and Computing Book Series
(titles in this series are indexed in Ei Compendex, Web of Science &
Scopus)
-----------------------------------------------------------------------
# Paper Submission #
Papers should be submitted through the EAI ‘Confy+’ system and must
comply with the Springer format (see the Author’s Kit section).
• Regular papers should be 12-15+ pages in length.
• Short papers should be 6-11 pages in length.
This conference uses a single-blind review model: reviewers are anonymous
to the authors, while the names and affiliations of the authors are
visible to the Program Committee.
All conference papers undergo a thorough peer review process prior to the
final decision and publication. This process is facilitated by experts in
the Technical Program Committee during a dedicated review period.
Standard peer review is enhanced by EAI Community Review, which allows
EAI members to bid to review specific papers. All review assignments are
ultimately decided by the responsible Technical Program Committee
Members, while the Technical Program Committee Chair is responsible for
the final acceptance selection. More information about EAI Community
Review is available on the EAI website.
# Important Dates #
Full paper submission deadline: 20 May 2023
Notification deadline: 30 July 2023
Camera-ready deadline: 15 September 2023
Start of conference: 25 October 2023
End of conference: 26 October 2023
# Organisation #
General Chairs
- Wenjuan Li, The Hong Kong Polytechnic University, HK
Technical Program Committee Chairs
- Jiannong Cao, Hong Kong Polytechnic University, HK
- Weizhi Meng, Technical University of Denmark, DK
Technical Program Committee
https://cloudcomp.eai-conferences.org/2023/organizing-committee/
(Our apologies if you received multiple copies of this CFP)
**************************************************************************
WSCC 2023: Euro-Par 2023 International Workshop
International Workshop on Scalable Compute Continuum
Date: 28 August - 1 September 2023
Location: Limassol, Cyprus
Workshop web page: https://wscc2023.di.unipi.it/
Euro-Par web page: https://2023.euro-par.org/
**************************************************************************
* Call for Papers
The “Compute Continuum” paradigm promises to manage the heterogeneity and
dynamism of widespread computing resources, aiming to simplify the
execution of distributed applications while improving data locality,
performance, availability, adaptability, energy management, and other
non-functional features. This is made possible by overcoming resource
fragmentation and segregation in tiers, enabling applications to be
seamlessly executed and relocated along a continuum of resources spanning
from the edge to the cloud. Besides consolidated vertical and horizontal
scaling patterns, this paradigm also offers finer-grained adaptation
actions that strictly depend on the specific infrastructure components
(e.g., to reduce energy consumption, or to exploit specific hardware such
as GPUs and FPGAs). This enables the enhancement of latency-sensitive
applications, the reduction of network bandwidth consumption, the
improvement of privacy protection, and the development of novel services
aimed at improving living, health, safety, and mobility. All of this
should be achievable by application developers without having to worry
about how and where the developed components will be executed. Therefore,
to unleash the true potential offered by the Compute Continuum,
proactive, autonomous, and infrastructure-aware management is desirable,
if not mandatory, calling for novel interdisciplinary approaches that
exploit optimization theory, control theory, machine learning, and
artificial intelligence methods.
In this landscape, the workshop aims to attract contributions in the area
of distributed systems, with particular emphasis on support for
geographically distributed platforms and on autonomic features to deal
with variable workloads and environmental events, making the best use of
heterogeneous and distributed infrastructures.
Topics of interest for this workshop include, but are not limited to:
- Scalable architectures and systems for Compute Continuum
- System software for cloud-edge-IoT orchestration
- Distributed and decentralized management of resources and application
deployment in the Compute Continuum
- Programming models, languages and patterns for the Compute Continuum
- Compute Continuum performance modeling and analysis
- Compute Continuum as a service
- Energy-efficient solutions for sustainable Compute Continuum
- AI in the Compute Continuum
- Scalable applications for Compute Continuum (IoT, microservices,
serverless)
- Data-intensive and stream processing systems and applications in the
Compute Continuum
- Digital Twins and industry applications in the Compute Continuum
- Prototypes and real-life experiments involving Compute Continuum
- Benchmarks and experimental platforms for reproducible experiments in the
Compute Continuum
* Submission Instructions
Papers should be formatted according to the LNCS guidelines and should be
between 10 and 12 pages long.
* Special Issue
Authors of selected papers will be invited to submit an extended version
to the Special Issue of Elsevier’s Future Generation Computer Systems
(FGCS) on “Scalable Compute Continuum”. For important dates and
additional information, see the related Call for Papers:
https://www.sciencedirect.com/journal/future-generation-computer-systems/ab…
* Important Dates
May 19th, 2023 Paper submission deadline
June 19th, 2023 Paper acceptance notifications
July 2nd, 2023 Camera-ready due
* Workshop Co-Chairs
- Valeria Cardellini, University of Rome Tor Vergata, Italy
- Patrizio Dazzi, University of Pisa, Italy
- Gabriele Mencagli, University of Pisa, Italy
- Matteo Nardelli, Bank of Italy, Italy
- Massimo Torquati, University of Pisa, Italy
Looking forward to receiving your excellent submissions soon.
Best regards,
Valeria Cardellini, Patrizio Dazzi, Gabriele Mencagli, Matteo Nardelli, and
Massimo Torquati