We apologize if you receive multiple copies of this notice.
-----------------------------------------------------------------------------
ScalA’17: 8th Workshop on Latest Advances in
Scalable Algorithms for Large-Scale Systems
held in conjunction with
SC17: The International Conference on High Performance
Computing, Networking, Storage and Analysis
in cooperation with ACM SIGHPC
November 13, 2017, Denver, CO, USA
<http://www.csm.ornl.gov/srt/conferences/Scala/2017>
Novel scalable scientific algorithms are needed in order to enable key
science applications to exploit the computational power of large-scale
systems. This is especially true for the current tier of leading petascale
machines and the road to exascale computing as HPC systems continue to scale
up in compute node and processor core count. These extreme-scale systems
require novel scientific algorithms to hide network and memory latency, have
very high computation/communication overlap, have minimal communication, and
have no synchronization points.
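As a simplified illustration of the latency hiding and computation/communication
overlap described above, the following hedged sketch (assuming mpi4py, NumPy,
and an MPI launcher such as mpirun; the 1-D field and averaging stencil are
purely illustrative) posts non-blocking halo exchanges, updates the interior
while the messages are still in flight, and only then touches the boundary:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    u = np.random.rand(1_000_000)             # local slice of a distributed 1-D field
    halo_l, halo_r = np.empty(1), np.empty(1)

    # Post non-blocking receives and sends for the boundary values (halo exchange).
    reqs = [comm.Irecv(halo_r, source=right, tag=0),
            comm.Irecv(halo_l, source=left, tag=1),
            comm.Isend(u[:1].copy(), dest=left, tag=0),
            comm.Isend(u[-1:].copy(), dest=right, tag=1)]

    # Overlap: the interior update needs no remote data, so it proceeds while
    # the messages are still in flight.
    interior = 0.5 * (u[:-2] + u[2:])

    MPI.Request.Waitall(reqs)                 # communication completes behind the computation
    new_left = 0.5 * (halo_l[0] + u[1])       # boundary updates use the received halo values
    new_right = 0.5 * (u[-2] + halo_r[0])
    u[1:-1], u[0], u[-1] = interior, new_left, new_right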
Scientific algorithms for multi-petaflop and exa-flop systems also need to be
fault tolerant and fault resilient, since the probability of faults increases
with scale. Resilience at the system software and at the algorithmic level is
needed as a crosscutting effort. Finally, with the advent of heterogeneous
compute nodes that employ standard processors as well as GPGPUs, scientific
algorithms need to match these architectures to extract the most performance.
This includes different system-specific levels of parallelism as well as
co-scheduling of computation. Key science applications require novel
mathematical models and system software that address the scalability and
resilience challenges of current- and future-generation extreme-scale HPC
systems.
Submission Guidelines
Authors are invited to submit manuscripts in English structured as technical
papers not exceeding 8 letter size (8.5in x 11in) pages including figures,
tables, and references using the ACM format for conference proceedings.
Submissions not conforming to these guidelines may be returned without
review. Reference style files are available at
<http://www.acm.org/sigs/publications/proceedings-templates>.
All manuscripts will be reviewed and judged on correctness, originality,
technical strength, significance, quality of presentation, and interest
and relevance to the workshop attendees. Submitted papers must represent
original unpublished research that is not currently under review for any
other conference or journal. Papers not following these guidelines will be
rejected without review and further action may be taken, including (but not
limited to) notifications sent to the heads of the institutions of the
authors and sponsors of the conference. Submissions received after the due
date, exceeding the length limit, or not appropriately structured may also not
be considered. At least one author of an accepted paper must register for
and attend the workshop. Authors may contact the workshop program chair for
more information. Papers should be submitted electronically at:
<https://easychair.org/conferences/?conf=scala17>.
Full papers will be published with the SC'17 workshop proceedings in the ACM
Digital Library and IEEE Xplore. Selected papers will be invited for an
extended version in a special issue of the Journal of Computational Science
(JoCS).
Important Dates
- Full paper submission: August 28, 2017
- Notification of acceptance: September 11, 2017
- Final paper submission (firm): October 9, 2017
- Workshop/conference early registration: TBD
- Workshop: November 13, 2017
Topics of interest include, but are not limited to:
- Novel scientific algorithms that improve performance, scalability,
resilience, and power efficiency
- Porting scientific algorithms and applications to many-core and
heterogeneous architectures
- Performance and resilience limitations of scientific algorithms and
applications at scale
- Crosscutting approaches (system software and applications) in addressing
scalability challenges
- Scientific algorithms that can exploit extreme concurrency (e.g. 1 billion
for exascale by 2020)
- Naturally fault tolerant, self-healing, or fault oblivious scientific
algorithms
- Programming model and system software support for algorithm scalability and
resilience
Workshop Chairs
- Vassil Alexandrov, Barcelona Supercomputing Center, Spain
- Al Geist, Oak Ridge National Laboratory, USA
- Jack Dongarra, University of Tennessee, Knoxville, USA
Workshop Program Chair
- Christian Engelmann, Oak Ridge National Laboratory, USA
Program Committee
- Vassil Alexandrov, Barcelona Supercomputing Center, Spain
- Hartwig Anzt, University of Tennessee, Knoxville, USA
- Rick Archibald, Oak Ridge National Laboratory, USA
- Franck Cappello, Argonne National Laboratory and
University of Illinois at Urbana Champaign, USA
- Zizhong Chen, University of California, Riverside, USA
- James Elliott, Sandia National Laboratories, USA
- Nahid Emad, University of Versailles SQ, France
- Christian Engelmann, Oak Ridge National Laboratory, USA
- Wilfried Gansterer, University of Vienna, Austria
- Michael Heroux, Sandia National Laboratories, USA
- Kirk E. Jordan, IBM T.J. Watson Research, USA
- Dieter Kranzlmueller, Ludwig-Maximilians-University Munich, Germany
- Ignacio Laguna, Lawrence Livermore National Laboratory, USA
- Piotr Luszczek, University of Tennessee, Knoxville, USA
- Michael Mascagni, Florida State University, USA
- Ron Perrot, University of Oxford, UK
- Yves Robert, ENS Lyon, France
- Stuart Slattery, Oak Ridge National Laboratory, USA
- Keita Teranishi, Sandia National Laboratories, USA
--
Christian Engelmann, Ph.D.
R&D Staff Scientist
Computer Science Research Group
Computer Science and Mathematics Division
Oak Ridge National Laboratory
Mail: P.O. Box 2008, Oak Ridge, TN 37831-6173, USA
Phone: +1 (865) 574-3132 / Fax: +1 (865) 576-5491
e-Mail: engelmannc(a)ornl.gov / Home: www.christian-engelmann.info
We apologize if you receive multiple copies of this call for papers.
--------------------------------------------------------------------------------
10th Workshop on Resiliency in High Performance Computing (Resilience)
in Clusters, Clouds, and Grids
<http://www.csm.ornl.gov/srt/conferences/Resilience/2017>
in conjunction with
the 23rd International European Conference on Parallel and Distributed
Computing (Euro-Par), Santiago de Compostela, Spain,
August 28 - September 1, 2017
<http://europar2017.usc.es>
Overview:
Resilience is a critical challenge as high performance computing (HPC)
systems continue to increase component counts, individual component
reliability decreases (e.g., due to shrinking process technology and
near-threshold voltage (NTV) operation), and software complexity increases.
Application correctness and execution efficiency, in spite of frequent
faults, errors, and failures, are essential to ensure the success of
extreme-scale HPC systems, cluster computing environments, Grid computing
infrastructures, and Cloud computing services.
While a fault (e.g., a bug or stuck bit) is the cause of an error, its
manifestation as a state change is considered an error (e.g., a bad value
or incorrect execution), and the transition to an incorrect service is
observed as a failure (e.g., an application abort or system crash). A
failure in a computing system is typically observed through an application
abort or a full/partial service or system outage. A detectable correctable
error is often transparently handled by hardware, such as a single bit flip
in memory that is protected with single-error correction double-error
detection (SECDED) error correcting code (ECC). A detectable uncorrectable
error (DUE) typically results in a failure, such as multiple bit flips in
the same addressable word that escape SECDED ECC correction, but not
detection, and ultimately cause an application abort. An undetectable error
(UE) may result in silent data corruption (SDC), e.g., an incorrect
application output. There are many other types of hardware and software
faults, errors, and failures in computing systems.
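For illustration only, the following toy Python sketch models the SECDED
behaviour just described for a single 8-bit word protected by an extended
Hamming code (four Hamming parity bits plus one overall parity bit). It is not
any vendor's actual memory ECC; the bit layout is an assumption made for the
example:

    def secded_encode(data_bits):
        """data_bits: 8 values in {0, 1}. Returns a 13-bit codeword:
        index 0 is the overall parity bit, indices 1..12 an extended Hamming(12,8) word."""
        code = [0] * 13
        data_pos = [i for i in range(1, 13) if i & (i - 1)]   # positions that are not powers of two
        for pos, bit in zip(data_pos, data_bits):
            code[pos] = bit
        for p in (1, 2, 4, 8):                                 # Hamming parity bits
            code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
        code[0] = sum(code[1:]) % 2                            # overall parity enables double-error detection
        return code

    def secded_decode(code):
        """Returns 'ok', 'corrected' (single-bit error) or 'DUE' (double-bit error)."""
        syndrome = 0
        for i in range(1, 13):
            if code[i]:
                syndrome ^= i                                  # XOR of the positions of all set bits
        overall_even = sum(code) % 2 == 0
        if syndrome == 0 and overall_even:
            return 'ok'
        if not overall_even:                                   # exactly one flipped bit: correctable
            code[syndrome] ^= 1                                # syndrome 0 means the overall parity bit itself
            return 'corrected'
        return 'DUE'                                           # two flips: detectable but uncorrectable

    word = [1, 0, 1, 1, 0, 0, 1, 0]
    cw = secded_encode(word)
    one_flip = [b ^ (i == 5) for i, b in enumerate(cw)]        # single bit flip
    two_flips = [b ^ (i in (5, 9)) for i, b in enumerate(cw)]  # double bit flip in the same word
    print(secded_decode(list(cw)), secded_decode(one_flip), secded_decode(two_flips))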
Resilience for HPC systems encompasses a wide spectrum of fundamental and
applied research and development, including theoretical foundations, fault
detection and prediction, monitoring and control, end-to-end data integrity,
enabling infrastructure, and resilient solvers and algorithm-based fault
tolerance. This workshop brings together experts in the community to further
research and development in HPC resilience and to facilitate exchanges
across the computational paradigms of extreme-scale HPC, cluster computing,
Grid computing, and Cloud computing.
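As one concrete, hedged example of the algorithm-based fault tolerance
mentioned above, the sketch below (assuming NumPy; matrix sizes and the
injected error are illustrative) encodes a dense matrix product with row and
column checksums in the spirit of the classic Huang/Abraham scheme, so that a
single corrupted entry of the result can be located and repaired:

    import numpy as np

    def abft_matmul(A, B):
        Ac = np.vstack([A, A.sum(axis=0)])                   # append a column-checksum row to A
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # append a row-checksum column to B
        return Ac @ Br                                        # full-checksum version of C = A @ B

    def check_and_correct(Cf):
        C = Cf[:-1, :-1].copy()
        bad_rows = np.flatnonzero(~np.isclose(C.sum(axis=1), Cf[:-1, -1]))
        bad_cols = np.flatnonzero(~np.isclose(C.sum(axis=0), Cf[-1, :-1]))
        if bad_rows.size == 1 and bad_cols.size == 1:         # a single faulty entry: locate and repair it
            i, j = bad_rows[0], bad_cols[0]
            C[i, j] += Cf[i, -1] - C[i].sum()
            return C, (i, j)
        return C, None

    rng = np.random.default_rng(0)
    A, B = rng.random((4, 4)), rng.random((4, 4))
    Cf = abft_matmul(A, B)
    Cf[1, 2] += 5.0                                           # inject a soft error into one entry of C
    C, repaired = check_and_correct(Cf)
    print("repaired entry:", repaired, "max deviation:", np.abs(C - A @ B).max())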
Submission Guidelines:
Authors are invited to submit papers electronically in English in PDF
format. Submitted manuscripts should be structured as technical papers and
BETWEEN 10 AND 12 PAGES, including figures, tables and references, using
Springer's Lecture Notes in Computer Science (LNCS) format at
<http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0>. Papers with
fewer than 10 or more than 12 pages will not be accepted due to publisher
guidelines. Submissions should include an abstract, keywords, and the e-mail
address of the corresponding author. Papers not conforming to these
guidelines may be returned without review. All manuscripts will be reviewed
and will be judged on correctness, originality, technical strength,
significance, quality of presentation, and interest and relevance to the
conference attendees. Submitted papers must represent original unpublished
research that is not currently under review for any other conference or
journal. Papers not following these guidelines will be rejected without
review and further action may be taken, including (but not limited to)
notifications sent to the heads of the institutions of the authors and
sponsors of the conference. Submissions received after the due date or not
appropriately structured may also not be considered. The proceedings
will be published in Springer's LNCS as post-conference proceedings. At
least one author of an accepted paper must register for and attend the
workshop for inclusion in the proceedings. Authors may contact the workshop
program chairs for more information.
Important websites:
- Resilience 2017 Website: <http://www.csm.ornl.gov/srt/conferences/Resilience/2017>
- Resilience 2017 Submissions: <https://easychair.org/conferences/?conf=europar2017workshops>
- Euro-Par 2017 website: <http://europar2017.usc.es>
Topics of interest include, but are not limited to:
- Theoretical foundations for resilience:
- Metrics and measurement
- Statistics and optimization
- Simulation and emulation
- Formal methods
- Efficiency modeling and uncertainty quantification
- Fault detection and prediction:
- Statistical analyses
- Machine learning
- Anomaly detection
- Data and information collection
- Visualization
- Monitoring and control for resilience:
- Platform and application monitoring
- Response and recovery
- RAS theory and performability
- Application and platform knobs
- Tunable fidelity and quality of service
- End-to-end data integrity:
- Fault tolerant design
- Degraded modes
- Forward migration and verification
- Fault injection
- Soft errors
- Silent data corruption
- Enabling infrastructure for resilience:
- RAS systems
- System software and middleware
- Programming models
- Tools
- Next-generation architectures
- Resilient solvers and algorithm-based fault tolerance:
- Algorithmic detection and correction of hard and soft faults
- Resilient algorithms
- Fault tolerant numerical methods
- Robust iterative algorithms
- Scalability of resilient solvers and algorithm-based fault tolerance
Important Dates:
- Workshop papers due: May 5, 2017
- Workshop author notification: June 16, 2017
- Workshop early registration: TBD
- Workshop paper (for informal workshop proceedings): July 21, 2017
- Workshop camera-ready papers: October 3, 2017
General Co-Chairs:
- Stephen L. Scott
Senior Research Scientist - Systems Research Team
Tennessee Tech University and Oak Ridge National Laboratory, USA
scottsl(a)ornl.gov
- Chokchai (Box) Leangsuksun,
SWEPCO Endowed Associate Professor of Computer Science
Louisiana Tech University, USA
box(a)latech.edu
Program Co-Chairs:
- Patrick G. Bridges
University of New Mexico, USA
bridges(a)cs.unm.edu
- Christian Engelmann
Oak Ridge National Laboratory, USA
engelmannc(a)ornl.gov
Program Committee:
- Ferrol Aderholdt, Oak Ridge National Laboratory, USA
- Dorian Arnold, University of New Mexico, USA
- Rizwan Ashraf, Oak Ridge National Laboratory, USA
- Wesley Bland, Intel Corporation, USA
- Hans-Joachim Bungartz, Technical University of Munich, Germany
- Franck Cappello, Argonne National Laboratory and
University of Illinois at Urbana-Champaign, USA
- Marc Casas, Barcelona Supercomputer Center, Spain
- Zizhong Chen, University of California at Riverside, USA
- Robert Clay, Sandia National Laboratories, USA
- Miguel Correia, Universidade de Lisboa, Portugal
- Nathan DeBardeleben, Los Alamos National Laboratory, USA
- James Elliott, Sandia National Laboratories, USA
- Kurt Ferreira, Sandia National Laboratories, USA
- Michael Heroux, Sandia National Laboratories, USA
- Saurabh Hukerikar, Oak Ridge National Laboratory, USA
- Dieter Kranzlmueller, Ludwig-Maximilians University of Munich, Germany
- Sriram Krishnamoorthy, Pacific Northwest National Laboratory, USA
- Ignacio Laguna, Lawrence Livermore National Laboratory, USA
- Scott Levy, University of New Mexico, USA
- Kathryn Mohror, Lawrence Livermore National Laboratory, USA
- Christine Morin, INRIA Rennes, France
- Dirk Pflueger, University of Stuttgart, Germany
- Nageswara Rao, Oak Ridge National Laboratory, USA
- Alexander Reinefeld, Zuse Institute Berlin, Germany
- Rolf Riesen, Intel Corporation, USA
- Yves Robert, ENS Lyon, France
- Thomas Ropars, Universite Grenoble Alpes, France
- Martin Schulz, Lawrence Livermore National Laboratory, USA
- Keita Teranishi, Sandia National Laboratories, USA
--
Christian Engelmann, Ph.D.
R&D Staff Scientist
Computer Science Research Group
Computer Science and Mathematics Division
Oak Ridge National Laboratory
Mail: P.O. Box 2008, Oak Ridge, TN 37831-6173, USA
Phone: +1 (865) 574-3132 / Fax: +1 (865) 576-5491
e-Mail: engelmannc(a)ornl.gov / Home: www.christian-engelmann.info
I apologize for any cross-posting of this announcement.
========================================================================================
Int. Workshop on High Performance Computing Systems for Bioinformatics and Life Sciences
(BILIS 2017)
http://hpcs2017.cisedu.info/conference/workshops---hpcs2017/workshop17-bilis
July 17 – July 21, 2017
Genoa, Italy
held in conjunction with
International Conference on High Performance Computing & Simulation (HPCS 2017)
http://hpcs17.cisedu.info/
========================================================================================
* * * CALL FOR PAPERS * * *
EXTENDED Submission Deadline: April 15, 2017
Submissions could be for full papers, short papers, poster papers, or posters
========================================================================================
IMPORTANT DATES
Paper Submissions: --------------------------------- April 15, 2017 - Extended
Acceptance Notification: --------------------------- April 28, 2017
Camera Ready Papers and Registration Due by: ------- May 11, 2017
Conference Dates: --------------------------------- July 17 – 21, 2017
========================================================================================
SCOPE AND OBJECTIVES
Incorporating new advancements of Information Technology (IT) in general, and High Performance Computing (HPC) in particular, in the domain of Life Sciences and Biomedical Research continues to receive tremendous attention from researchers, biomedical institutions and the rest of the biomedical community. Although medical instruments have benefited a great deal from the technological advances of the past couple of decades, the impact of integrating IT advancements in addressing critical problems in biomedical research remains limited, and the adoption of IT tools in the medical profession continues to be a very challenging problem. For example, the use of electronic medical records and Hospital Information Systems to improve health care remains fragmented. Similarly, the seamless use of advanced computational tools in the biomedical research cycle continues to be minimal.
Due to the computationally intensive problems in life sciences, the marriage between the Bioinformatics domain and high performance computing is critical to the advancement of Biosciences. In addition, the problems in this domain tend to be highly parallelizable and deal with large datasets, hence using HPC is a natural fit. The Bioinformatics domain is rich in applications that require extracting useful information from very large and continuously growing sequence databases. Most methods used for analyzing DNA/protein sequences are known to be computationally intensive, providing motivation for the use of powerful computational systems with high throughput characteristics.
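Purely as an illustration of how naturally such sequence analyses parallelize, the following hedged sketch (standard-library Python only; the read strings and k = 4 are made-up placeholders for real FASTA/FASTQ input) counts k-mers across reads with a process pool and reduces the partial counts into one table:

    from collections import Counter
    from multiprocessing import Pool

    K = 4                                      # k-mer length (illustrative)

    def kmer_counts(read):
        """Count all overlapping k-mers of a single read."""
        return Counter(read[i:i + K] for i in range(len(read) - K + 1))

    def count_all(reads, workers=4):
        total = Counter()
        with Pool(workers) as pool:
            for partial in pool.imap_unordered(kmer_counts, reads, chunksize=64):
                total.update(partial)          # reduce the per-read counts into a global table
        return total

    if __name__ == "__main__":
        reads = ["ACGTACGTGGCA", "TTACGTACGAAT", "GGCATTACGTAC"]
        print(count_all(reads).most_common(3))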
Moreover, high-throughput wet-lab platforms such as next-generation sequencing, microarrays and mass spectrometry are producing a huge amount of experimental "omics" data. The increasing availability of omics data poses new challenges to bioinformatics applications, which need to cope, in a semi-automatic way, with an overwhelming volume of raw data. The main challenges concern: 1) the efficient storage, retrieval and integration of experimental data; 2) their efficient and high-throughput preprocessing and analysis; 3) the building of reproducible "in silico" experiments; 4) the integration of analysis results with pre-existing knowledge usually stored in ontologies.
As the storage, preprocessing and analysis of raw experimental data is becoming the main bottleneck of the analysis pipeline, parallel computing is playing an important role in all steps of the life sciences research pipeline, from raw data management and processing, to data integration and analysis, and to data exploration and visualization. Moreover, Cloud Computing is becoming the key technology to hide the complexity of computing infrastructures, to reduce the cost of the data analysis task, and especially to change the overall business model of biomedical research and health provision.
Considering the complex analysis pipeline of biomedical research, the bottleneck is increasingly moving toward the storage, integration, and analysis of experimental data, as well as their correlation and integration with publicly available data banks. In such a scenario, large-scale distributed databases and parallel bioinformatics tools are key instruments for organizing and exploring biological and biomedical data with the aim of discovering new knowledge in biology and medicine.
In the current Information age, further progress of the Medical Sciences requires successful integration with the Computational and Information Sciences. The workshop attempts to attract innovative ways in which such integration can be achieved via Bioinformatics and Biomedical Informatics research, particularly by taking advantage of the new advancements in HPC systems. The focus on data analysis and data mining tools in biomedical research highlights the current state of research in key biomedical research areas such as bioinformatics, medical informatics and biomedical imaging. Addressing performance concerns in managing and accessing medical data, while facilitating the ability to integrate and correlate different biomedical databases, remains an outstanding problem in biomedical research. The amount of available biomedical data continues to grow at an exponential rate; however, the impact of utilizing such resources remains minimal. The development of innovative tools in HPC environments to integrate, analyze and mine such data sources is a key step towards achieving large impact levels.
The workshop focuses on topics related to the utilization of HPC systems and new models of parallel computing and cloud computing in problems related to Biomedical Informatics and Life Sciences, along with the use of data integration and data mining tools to support biomedical research and Health Care.
The BILIS Workshop topics include (but are not limited to) the following:
HPC for the Analysis of Biological Data
Bioinformatics Tools for Health Care
Parallel Algorithms for Bioinformatics Applications
Ontologies in Biology and Medicine
Integration and Analysis of Molecular and Clinical Data
Parallel Bioinformatics Algorithms
Algorithms and Tools for Biomedical Imaging and Medical Signal Processing
Energy Aware Scheduling Techniques for Large Scale Biomedical Applications
HPC for analyzing Biological Networks
Next Generation Sequencing and Advanced Tools for DNA Assembly
HPC for Gene, Protein/RNA Analysis and Structure Prediction
Identification of Biomarkers
Biomedical Visualization Tools
Efficient Clustering and Classification Algorithms
Correlation Networks in Biomedical Research
Data Mining Techniques in Biomedical Applications
Heterogeneous Data Integration
HPC systems for Ontology and Database Integration
Pattern Recognition and Search Tools in Biological and Clinical Databases
Ubiquitous Medical Knowledge Discovery and Exchange
HPC for Monitoring and Treatment Facilities
Drug Design and Modeling
Computer Assisted Surgery and Medical Procedures
Remote Patient Monitoring, Homecare Applications
Mobile and Wireless Healthcare and Biomedical Applications
Cloud Computing for Bioinformatics, Medicine, and Health Systems
INSTRUCTIONS FOR PAPER SUBMISSIONS
You are invited to submit original and unpublished research works on the above and other topics related to HPC for Bioinformatics, Healthcare and Life Sciences. Submitted papers must not have been published or simultaneously submitted elsewhere. For regular papers, please submit a PDF copy of your full manuscript, not to exceed 8 double-column formatted pages per the template, and include up to 6 keywords and an abstract of no more than 400 words. Additional pages will be charged an additional fee. Submissions should include a cover page with the authors' names, affiliation addresses, fax numbers, phone numbers, and all authors' email addresses. Please indicate clearly the corresponding author(s), although all authors are equally responsible for the manuscript. Short papers (up to 4 pages), poster papers and posters (please refer to http://hpcs2017.cisedu.info/1-call-for-papers-and-participation/call-for-po… for poster submission details) will also be considered. Please specify the type of your submission. Please include page numbers on all preliminary submissions to make it easier for reviewers to provide helpful comments.
Submit a PDF copy of your full manuscript to the workshop organizers via email as attachments to Hesham Ali: hali(a)unomaha.edu, Mario Cannataro: cannataro(a)unicz.it. Acknowledgement will be sent within 48 hours of submission.
Only PDF files will be accepted, uploaded to the submission link above. Each paper will receive a minimum of three reviews. Papers will be selected based on their originality, relevance, significance, technical clarity and presentation, language, and references. Submission implies the willingness of at least one of the authors to register and present the paper, if accepted. At least one of the authors of each accepted paper will have to register and attend the HPCS 2017 conference to present the paper at the workshop.
PROCEEDINGS
Accepted papers will be published in the Conference proceedings. Instructions for final manuscript format and requirements will be posted on the HPCS 2017 Conference web site. It is our intent to have the proceedings formally published in hard and soft copies and be available at the time of the conference. The proceedings are projected to be included in the IEEE or ACM Digital Library and indexed by all major indexing services accordingly.
SPECIAL ISSUE
Plans are underway to have the best papers, in extended versions, selected for possible publication in a journal special issue. Detailed information will soon be announced and will be made available on the conference website.
If you have any questions about paper submission or the workshop, please contact the workshop organizers.
IMPORTANT DATES
Paper Submissions: ------------------------------------ April 15, 2017 - Extended
Acceptance Notification: ------------------------------ April 28, 2017
Camera Ready Papers and Registration Due by: ---------- May 11, 2017
Conference Dates: ------------------------------------ July 17 – 21, 2017
WORKSHOP ORGANIZERS
Prof. Hesham H. Ali
Department of Computer Science
College of Information Science and Technology
University of Nebraska at Omaha
Omaha, NE 68182 USA
Email: hesham(a)unomaha.edu
Prof. Mario Cannataro
Department of Medical and Surgical Sciences
University "Magna Græcia" of Catanzaro
Viale Europa (Località Germaneto)
88100 Catanzaro, Italy
Email: cannataro(a)unicz.it
Dear Colleagues:
We cordially invite you to share your latest research results at the
COMPLEXIS 2018 Conference:
3rd International Conference on Complexity, Future Information Systems
and Risk
http://www.complexis.org/
March 20 - 21, 2018
Funchal, Madeira / Portugal
Submission Deadline: October 16, 2017
---------------------------------
Call for Papers
---------------------------------
COMPLEXIS – the International Conference on Complexity, Future
Information Systems and Risk, aims at becoming a yearly meeting place
for presenting and discussing innovative views on all aspects of Complex
Information Systems, in different areas such as Informatics,
Telecommunications, Computational Intelligence, Biology, Biomedical
Engineering and Social Sciences. Information is pervasive in many areas
of human activity, perhaps all, and complexity is a characteristic of
current exabyte-sized, highly connected, and hyper-dimensional
information systems. COMPLEXIS 2018 is expected to provide an overview
of the state of the art as well as upcoming trends, and to promote
discussion about the potential of new methodologies, technologies and
application areas of complex information systems, in the academic and
corporate world.
Conference Areas:
1 - Complexity in Informatics, Automation and Networking
<http://www.complexis.org/CallForPapers.aspx#A1>
2 - Complexity in Biology and Biomedical Engineering
<http://www.complexis.org/CallForPapers.aspx#A2>
3 - Complexity in Social Sciences
<http://www.complexis.org/CallForPapers.aspx#A3>
4 - Complexity in Computational Intelligence and Future Information
Systems <http://www.complexis.org/CallForPapers.aspx#A4>
5 - Complexity in EDA, Embedded Systems, and Computer Architecture
<http://www.complexis.org/CallForPapers.aspx#A5>
6 - Network Complexity <http://www.complexis.org/CallForPapers.aspx#A6>
7 - Complexity in Risk and Predictive Modeling
<http://www.complexis.org/CallForPapers.aspx#A7>
In Cooperation with:
International Federation for Systems Research
European Association for Theoretical Computer Science
All papers presented at the conference venue will also be available at
the SCITEPRESS Digital Library <http://www.scitepress.org/DigitalLibrary/>.
Proceedings will be submitted for indexation by:
DBLP, Thomson-Reuters Conference Proceedings Citation Index, INSPEC, EI
and SCOPUS
Singapore University of Technology and Design (SUTD) is a young university which was established in collaboration with MIT. iTrust is a Cyber Security Research Center at SUTD with about 15 multidisciplinary faculty members. It has the world's best facilities in cyber-physical systems (CPS), including testbeds for Secure Water Treatment (SWaT), Water Distribution (WADI), Electric Power and Intelligent Control (EPIC), and IoT (see more info at https://itrust.sutd.edu.sg/research/testbeds/).
I am looking for PhD interns with an interest in cyber-physical system security (IoT, autonomous vehicles, power grids, etc.), especially on topics such as 1) lightweight and low-latency crypto algorithms for CPS devices, 2) resilient authentication of devices and data in CPS, 3) advanced SCADA firewalls to filter more sophisticated attack packets in CPS, 4) big-data-based threat analytics for detection of both known and unknown threats, and 5) attack mitigation to increase the resilience of CPS. The attachment will be for at least 3 months. An allowance will be provided for local expenses.
Interested candidates should send their CV with a research statement to Prof. Jianying Zhou.
Contact: Prof. Jianying Zhou
Email: jianying_zhou(a)sutd.edu.sg
Home: http://jianying.space/
Special Issue on Parallel and Distributed Data Mining
Information Sciences, Elsevier
The sheer volume of new data, which is being generated at an increasingly fast pace, has already produced the anticipated data deluge, which is difficult to cope with. We are in the presence of an overwhelmingly vast quantity of data, owing to how easy it is to produce or derive digital data. Even the storage of this massive amount of data is becoming a highly demanding task, outpacing the current development of hardware and software infrastructure. Nonetheless, this effort must be undertaken now for the preservation, organization and long-term maintenance of these precious data. However, the collected data is useless without our ability to fully understand and make use of it. Therefore, we need new algorithms to address this challenge.
Data mining techniques and algorithms to process huge amounts of data in order to extract useful and interesting information have become popular in many different contexts. Algorithms are required to make sense of data automatically and in efficient ways. Nonetheless, even though the performance of sequential computer systems continues to improve, it is not sufficient to keep up with the increasing demand of data mining applications and with the growth of data sizes. Moreover, the main memory of sequential systems may not be enough to hold all the data related to current applications.
This Special Issue takes into account the increasing interest in the design and implementation of parallel and distributed data mining algorithms. Parallel algorithms can address both the running time and the memory requirement issues by exploiting the vast aggregate main memory and processing power of the processors and accelerators available on parallel computers. However, parallelizing existing algorithms in order to achieve good performance and scalability with regard to massive datasets is not trivial. Indeed, a good data organization and decomposition strategy is of paramount importance for balancing the workload while minimizing data dependences. Another concern is minimizing synchronization and communication overhead. Finally, I/O costs should be minimized as well. Creating breakthrough parallel algorithms for high-performance data mining applications requires addressing several key computing problems, which may lead to novel solutions and new insights in interdisciplinary applications.
Moreover, data is increasingly spread among different, geographically distributed sites. Centralized processing of this data is very inefficient and expensive, and in some cases it may even be impractical and subject to security risks. Therefore, processing the data in a way that minimizes the amount of data exchanged, while guaranteeing correctness and efficiency at the same time, is an extremely important challenge. Distributed data mining performs data analysis and mining in a fundamentally distributed manner, paying careful attention to resource constraints, in particular bandwidth limitations, privacy concerns and computing power.
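A minimal sketch of the communication pattern just described, with made-up site data: each site computes local item counts and only these compact aggregates cross the network, never the raw transactions, before a coordinator filters the globally frequent items:

    from collections import Counter

    def local_counts(transactions):
        counts = Counter()
        for t in transactions:
            counts.update(set(t))              # count each item at most once per transaction
        return counts

    def global_frequent_items(sites, min_support):
        merged = Counter()
        for transactions in sites:             # in a real deployment each call runs at a remote site
            merged.update(local_counts(transactions))   # only the count vectors are exchanged
        return {item: n for item, n in merged.items() if n >= min_support}

    sites = [
        [{"bread", "milk"}, {"bread", "beer"}, {"milk"}],
        [{"bread", "milk", "beer"}, {"milk", "beer"}],
    ]
    print(global_frequent_items(sites, min_support=3))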
The focus of this Special Issue is on all forms of advances in high-performance and distributed data mining algorithms and applications. The topics relevant to the Special Issue include (but are not limited to) the following.
TOPICS OF INTEREST
Scalable parallel data mining algorithms using message-passing, shared-memory or hybrid programming paradigms
Exploiting modern parallel architectures including FPGA, GPU and many-core accelerators for parallel data mining applications
Middleware for high-performance data mining on grid and cloud environments
Benchmarking and performance studies of high-performance data mining applications
Novel programming paradigms to support high-performance computing for data mining
Performance models for high-performance data mining applications and middleware
Programming models, tools, and environments for high-performance computing in data mining
Map-reduce based parallel data mining algorithms
Caching, streaming, pipelining, and other optimization techniques for data management in high-performance computing for data mining
Novel distributed data mining algorithms
SUBMISSION GUIDELINES
All manuscripts and any supplementary material should be submitted electronically through the Elsevier Editorial System (EES) at http://ees.elsevier.com/ins. The authors must select “SI:PDDM” when they reach the “Article Type” step in the submission process.
A detailed submission guideline is available as “Guide to Authors” at: http://www.elsevier.com/journals/information-sciences/0020-0255/guide-for-a….
IMPORTANT DATES
Submission deadline: December 1st, 2017
First round notification: March 1st, 2018
Revised version due: May 1st, 2018
Final notification: June 1st, 2018
Camera-ready due: July 1st, 2018
Publication tentative date: October 2018
Guest editors:
Massimo Cafaro, Email: massimo.cafaro(a)unisalento.it
University of Salento, Italy and Euro-Mediterranean Centre on Climate Change, Foundation
Italo Epicoco, Email: italo.epicoco(a)unisalento.it
University of Salento, Italy and Euro-Mediterranean Centre on Climate Change, Foundation
Marco Pulimeno, Email: marco.pulimeno(a)unisalento.it
University of Salento, Italy
-
************************************************************************************
Massimo Cafaro, Ph.D.
Associate Professor
Dept. of Engineering for Innovation
University of Salento, Lecce, Italy
Via per Monteroni
73100 Lecce, Italy
Voice/Fax +39 0832 297371
Web http://sara.unisalento.it/~cafaro
E-mail massimo.cafaro(a)unisalento.it
cafaro(a)ieee.org
cafaro(a)acm.org
CMCC Foundation
Euro-Mediterranean Center on Climate Change
Via Augusto Imperatore, 16 - 73100 Lecce
massimo.cafaro(a)cmcc.it
************************************************************************************
=============
Complexity (Impact Factor 4.621)
Special Issue on Social Big Data: Mining, Applications and Beyond
Submission deadline: Friday 29 December 2017
Publication date: May 2018
Online CFP: https://www.hindawi.com/journals/complexity/si/148573/cfp/
Journal URL: https://www.hindawi.com/journals/complexity/
============
The social nature of Web 2.0 leads to the unprecedented growth of social
media sites such as discussion forums, product review sites, microblogging,
social networking and social curation. Existing research in social media
data mining has focused on techniques for extracting information for
specific applications from separate social media sources.
The mobile network and the Internet of Things are transforming what it
means to be social online. Humans, everyday objects and smart devices
interact and form an intelligent social network that is a highly adaptive
complex system. Assisted by personal devices, people can access real-time
traffic, weather and news event information and exchange such information
through social interaction and form communities dynamically. The rich user-
and device-generated data and user interactions generate complex social big
data that is different from classical structured attribute-value data. The
data objects take various forms including unstructured text, geo-tagged
data objects and data object streams. The social networks formed from
interactions among data objects also carry rich information for analyzing
user behavior. Such complex social big data calls for cross-disciplinary
research from data mining, machine learning, pervasive and ubiquitous
computing, network science, and computational social science.
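As a small, hedged illustration of the kind of analysis such interaction data
invites (assuming the networkx package; the human/device edge list is invented
for the example), one can build the interaction graph, rank potential
influencers, and extract communities:

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    interactions = [                           # (source, target) pairs from a social/IoT interaction stream
        ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
        ("carol", "sensor-17"), ("dave", "erin"), ("erin", "sensor-17"),
    ]
    G = nx.Graph(interactions)

    influencers = sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1])[:3]
    communities = [sorted(c) for c in greedy_modularity_communities(G)]
    print("top influencers:", influencers)
    print("communities:", communities)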
We seek contributions to advance our knowledge in social big data mining
and analytics and extend the knowledge to related disciplines. We
especially welcome methodological papers that address the data complexity
and application papers that promote wider and deeper applications of social
media data. Potential topics include, but are not limited to:
• Personal device and content integrated social data mining
• Mining dynamic complex social networks of humans and
devices
• Mining heterogeneous streams of social media data objects
from humans and devices
• Spatiotemporal analysis of social data and social networks
• Privacy preserving social data mining
• Social influence and community discovery in dynamic social
networks
• Social data mining for community-based recommendation and
other applications
• Trust and information credibility analysis of social media
data
• Mining for user social influence and communities in
complex social networks of humans and devices
• Mining social data for smart cities and smart nations
• Humans as sensors for event detection and disaster
management
• Sentiment analysis and opinion mining for social good
• Detection of opinion spam, illicit behavior and anomalies
in social media
• Social media data mining for public health and healthcare
Papers are published upon acceptance, regardless of the Special Issue
publication date.
SUBMISSION
Authors must submit their papers online at
https://mts.hindawi.com/submit/journals/complexity/sbdmab/
Authors are welcome to discuss their potential submissions with the editors
by sending an email to Xiuzhen Zhang (xiuzhen.zhang(a)rmit.edu.au) regarding
the fit of their paper for this special issue.
GUEST EDITORS
Xiuzhen Zhang, RMIT University, Melbourne, Australia;
xiuzhen.zhang(a)rmit.edu.au
Shuliang Wang, Beijing Institute of Technology, Beijing, China;
slwang(a)bit.edu.cn
Gao Cong, Nanyang Technological University, Singapore; gaocong(a)ntu.edu.sg
[View Less]
** Submission Deadline Extended: October 14, 2017 **
[Apologies if you receive multiple copies of this cfp]
*******************************************************************************
SIMPDA 2017
SEVENTH INTERNATIONAL SYMPOSIUM ON DATA-DRIVEN PROCESS DISCOVERY AND
ANALYSIS
6-8 DECEMBER, 2017 - NEUCHATEL, SWITZERLAND
http://simpda2017.di.unimi.it
*******************************************************************************
## About SIMPDA
With the increasing automation of business processes, growing amounts of
process data become available. This opens new research opportunities for
business process data analysis, mining and modeling. The aim of the IFIP 2.6
International Symposium on Data-Driven Process Discovery and Analysis is to
offer a forum where researchers from different communities and the industry
can share their insight in this hot new field.
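To make the idea of mining process data concrete, here is a minimal, hedged
sketch (plain Python; the toy event log stands in for the logs an information
system would produce) that derives a directly-follows graph, one of the
simplest process-discovery artefacts, from (case id, activity) events:

    from collections import Counter, defaultdict

    event_log = [                              # rows assumed ordered by timestamp within each case
        ("case1", "register"), ("case1", "check"), ("case1", "approve"),
        ("case2", "register"), ("case2", "check"), ("case2", "reject"),
        ("case3", "register"), ("case3", "approve"),
    ]

    traces = defaultdict(list)
    for case, activity in event_log:
        traces[case].append(activity)          # group events into one trace per case

    dfg = Counter()                            # (a, b) -> how often activity b directly follows a
    for activities in traces.values():
        dfg.update(zip(activities, activities[1:]))

    for (a, b), n in sorted(dfg.items()):
        print(f"{a} -> {b}: {n}")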
The Symposium will feature a number of keynotes illustrating advanced
approaches, shorter presentations on recent research, a competitive PhD
seminar and selected research and industrial demonstrations. This year the
symposium will be held in Neuchatel.
###Call for Papers
The IFIP International Symposium on Data-Driven Process Discovery and
Analysis (SIMPDA 2017) offers a unique opportunity to present new approaches
and research results to researchers and practitioners working in business
process data modelling, representation and privacy-aware analysis.
The symposium will bring together leading researchers, engineers and
scientists from around the world. Full papers must not exceed 15 pages.
Short papers are limited to at most 4 pages. All papers must be original
contributions, not previously published or under review for publication
elsewhere. All contributions must be written in English and must follow the
LNCS Springer Verlag format. Templates can be downloaded from:
http://www.springer.de/comp/lncs/authors.html
Accepted papers will be published in a pre-proceeding volume of CEUR
workshop series. The authors of the accepted papers will be invited to
submit extended articles to a post-symposium proceedings volume which will
be published in the LNBIP series (Lecture Notes in Business Information
Processing, http://www.springer.com/series/7911), scheduled for late 2018
(extended papers length will be between 7000 and 9000 words). Around 10-15
papers will be selected for publication after a second round of review.
### Topics
Topics of interest for submission include, but are not limited to:
- Business Process Modeling languages, notations and methods
- Lightweight Process Model
- Data-aware and data-centric approaches
- Process Mining with Big Data
- Variability and configuration of process models
- Process simulation and static analyses
- Process data query languages
- Process data mining
- Privacy-aware process data mining
- Process metadata and semantic reasoning
- Process patterns and standards
- Foundations of business process models
- Resource management in business process execution
- Process tracing and monitoring
- Process change management and evolution
- Business process lifecycle
- Case studies and experience reports
- Social process discovery
- Crowdsourced process definition and discovery
### Workshop Format:
In accordance to our historical tradition of proposing SIMPDA as a
symposium, we propose an innovative format for this workshop:
The number of sessions depends on the number of submissions but, considering
the previous editions, we envisage four sessions, with 4-5 related
papers assigned to each session. A special session (with a specific review
process) will be dedicated to discuss research plan from PhD students.
Papers are pre-circulated to the authors, who are expected to read all papers
in advance; to avoid excessive overhead, each participant is assigned two
papers to prepare with particular care, with comments and suggestions ready.
The bulk of the time during each session will be dedicated to open
conversations about all of the papers in a given session, along with any
linkages to the papers and discussions within an earlier session.
The closing session (30 minutes) will include a panel about open challenges,
during which every participant will be asked to assemble the
thoughts/project ideas/goals/etc. that they got out of the workshop.
### Call for PhD Research Plans
The SIMPDA PhD Seminar is a workshop for Ph.D. students from all over the
world. The goal of the Seminar is to help students with their thesis and
research plans by providing feedback and general advice on how to use their
research results.
Students interested in participating in the Seminar should submit an
extended abstract describing their research. Submissions can relate to any
aspect of Process Data: technical advances, usage and impact studies, policy
analyses, social and institutional implications, theoretical contributions,
interaction and design advances, innovative applications, and social
implications.
Research plans should be at most 5 pages long and should be organised
according to the following structure:
- Abstract: summarises, in 5 lines, the research aims and significance.
- Research Question: defines what will be accomplished by eliciting the
relevant research questions.
- Background: defines the background knowledge providing the 5 most relevant
references (papers or books).
- Significance: explains the relevance of the general topic and of the
specific contribution.
- Research design and methods: describes and motivates the method adopted
focusing on: assumptions, solutions, data sources, validation of results,
limitations of the approach.
- Research stage: describes what the student has done so far.
### SIMPDA PhD award
A doctoral award will be given by the SIMPDA PhD Jury to the best research
plan submitted.
### Student Scholarships
An application for a limited number of scholarships aimed at students coming
from emerging countries has been submitted to IFIP.
In order to apply, please contact paolo.ceravolo(a)unimi.it
### CALL for Demonstrations and Posters
Demonstrations showcase innovative technology and applications, allowing for
sharing research work directly with colleagues in a high-visibility setting.
Demonstration proposals should consist of a title, an extended abstract, and
contact information for the authors, and should not exceed 10 pages.
Posters allow the presentation of late-breaking results in an informal,
interactive manner. Poster proposals should consist of a title, an extended
abstract, contact information for the authors, and should not exceed 2
pages.
Accepted demonstrations and posters will be presented at the symposium.
Abstracts will appear in the proceedings.
### Important Dates
- Paper Submission: 14 October 2017
- Submission of PhD Presentations: 14 October 2017
- Notification of Acceptance: 19 November 2017
- Submission of Camera Ready Papers: 28 November 2017
- Second International Symposium on Process Data: 6-8 December 2017
- Post-proceeding submissions: 30 March 2018
## Keynote Speakers
* Enabling largely automated social media analytics *
Karl Aberer
Distributed Information Systems Laboratory (LSIR), EPFL
In this talk we will report on our recent advances in automating the
analysis of social media data. We first will review our recent work on a
platform for analysing social media data in terms of topics discussed,
communities and their influencers. This platform has been successfully used
in a number of practical use cases. When using the platform, we identified the
creation of domain-specific taxonomies as the main bottleneck in the
analysis process.
To tackle this issue we developed a novel method for taxonomy induction from
domain-specific document corpora. In the second part of the talk we will
discuss this method and some of the novel ideas that enabled us to produce
high quality domain-specific taxonomies on the fly.
## Organizers
### CHAIRS
- Paolo Ceravolo, Università degli Studi di Milano, Italy
- Maurice van Keulen, University of Twente, The Netherlands
- Kilan Stoffel, University of Neuchatel, Switzerland
### ADVISORY BOARD
- Ernesto Damiani, Università degli Studi di Milano, Italy
- Erich Neuhold, University of Vienna, Austria
- Philippe Cudré-Mauroux , University of Fribourg, Switzerland
- Robert Meersman, Graz University of Technology, Austria
- Wilfried Grossmann, University of Vienna, Austria
### Program Committee
- Akhil Kumar, Penn State University, USA
- Benoit Depaire, University of Hasselt, Belgium
- Chintan Mrit, University of Twente, The Netherlands
- Christophe Debruyne, Trinity College Dublin, Ireland
- Ebrahim Bagheri, Ryerson University, Canada
- Edgar Weippl, TU Vienna, Austria
- Fabrizio Maria Maggi, University of Tartu, Estonia
- George Spanoudakis, City University London, UK
- Haris Mouratidis, University of Brighton, UK
- Isabella Seeber, University of Innsbruck, Austria
- Jan Mendling, Vienna University of Economics and Business, Austria
- Josep Carmona, UPC - Barcelona, Spain
- Kristof Boehmer, University of Vienna, Austria
- Manfred Reichert, Ulm University, Germany
- Marcello Leida, TAIGER, Spain
- Mark Strembeck, WU Vienna, Austria
- Massimiliano De Leoni, Eindhoven TU, Netherlands
- Matthias Weidlich, Imperial College, UK
- Mazak Alexandra, University of Vienna, Austria
- Mohamed Mosbah, University of Bordeaux
- Mustafa Jarrar, Birzeit University, Palestine
- Robert Singer, FH Joanneum, Austria
- Roland Rieke, Fraunhofer SIT, Germany
- Schahram Dustdar, Vienna University of Technology, Austria
- Thomas Vogelgesang, University of Oldenburg, Germany
- Valentina Emilia Balas, University of Arad, Romania
- Wil Van der Aalst, Technische Universiteit Eindhoven, The Netherlands
We apologize if you receive multiple copies of this CFP.
This is the final extension to the submission deadline.
DataCloud 2017: The Eighth International Workshop on Data-Intensive Computing in the Clouds
Denver, CO, USA, November 12, 2017
Conference website
https://sites.google.com/view/2017datacloud/home
Submission link
https://easychair.org/conferences/?conf=datacloud2017
Submission deadline (final extension)
October 1, 2017
Applications and experiments in all areas of science are becoming increasingly complex and more demanding in terms of their computational and data requirements. Some applications generate data volumes reaching hundreds of terabytes and even petabytes. As scientific applications become more data intensive, the management of data resources and data flow between the storage and compute resources is becoming the main bottleneck. Analyzing, visualizing, and disseminating these large data sets has become a major challenge, and data-intensive computing is now considered the “fourth paradigm” of scientific discovery, after theoretical, experimental, and computational science.
The eighth international workshop on Data-intensive Computing in the Clouds (DataCloud 2017) will provide the scientific community a dedicated forum for discussing new research, development, and deployment efforts in running data-intensive computing workloads on Cloud Computing infrastructures. The DataCloud 2017 workshop will focus on the use of cloud-based technologies to meet the new data intensive scientific challenges that are not well served by the current supercomputers, grids or compute-intensive clouds. We believe the workshop will be an excellent place to help the community define the current state, determine future goals, and present architectures and services for future clouds supporting data intensive computing.
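As a toy, hedged sketch of the data-parallel style that data-intensive cloud frameworks expose (standard-library Python; the partitions, worker count and word-count task are illustrative only), data is partitioned, mapped in parallel, shuffled, and then reduced:

    from collections import defaultdict
    from multiprocessing import Pool

    def map_phase(chunk):
        """Emit (word, 1) pairs for every word in a partition of text lines."""
        return [(word.lower(), 1) for line in chunk for word in line.split()]

    def reduce_phase(pairs):
        counts = defaultdict(int)
        for key, value in pairs:
            counts[key] += value
        return dict(counts)

    def mapreduce(partitions, workers=2):
        with Pool(workers) as pool:
            mapped = pool.map(map_phase, partitions)         # the map phase runs per partition, in parallel
        shuffled = [kv for part in mapped for kv in part]    # shuffle: gather all intermediate pairs
        return reduce_phase(shuffled)

    if __name__ == "__main__":
        partitions = [["big data in the cloud"], ["data intensive cloud computing"]]
        print(mapreduce(partitions))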
Submission Guidelines
Authors are invited to submit papers with unpublished, original work of not more than 8 pages of double column text using single spaced 10 point size on 8.5 x 11 inch pages, as per ACM 8.5 x 11 manuscript guidelines; document templates can be found at http://www.acm.org/sigs/publications/proceedings-templates. The final papers in PDF format must be submitted online at https://easychair.org/conferences/?conf=datacloud2017. Papers will be peer-reviewed, and accepted papers will be published in the workshop proceedings as part of the ACM digital library (in cooperation with SIGHPC). Submission implies the willingness of at least one of the authors to register and present the paper.
List of Topics
* Data-intensive cloud computing infrastructure, applications, characteristics and challenges
* Case studies of data intensive computing in the clouds
* Performance evaluation of data clouds, data grids, and data centers
* Energy-efficient data cloud design and management
* Data placement, scheduling, and interoperability in the clouds
* Accountability, QoS, and SLAs
* Data privacy and protection in a public cloud environment
* Distributed file systems for clouds
* Data streaming and parallelization
* New programming models for data-intensive cloud computing
* Scalability issues in clouds
* Social computing and massively social gaming
* 3D Internet and implications
* Future research challenges in data-intensive cloud computing
Committees
Program Chairs
* Tonglin Li, Oak Ridge National Laboratory
* Boyu Zhang, Microsoft Inc.
* Xuan Guo, Oak Ridge National Laboratory
Steering committee
* Wei Tang, Google Inc.
* Roger Barga, Microsoft Research
* Ian Foster, University of Chicago & ANL
* Geoffrey Fox, Indiana University
Program committee (to be confirmed)
* David Abramson, Monash University, Australia
* John Bent, Los Alamos National Laboratory
* Umit Catalyurek, Ohio State University
* Linhai Qiu, Google Inc.
* Abhishek Chandra, University of Minnesota
* Rong N. Chang, IBM Research
* Yong Chen, Texas Tech University
* Alok Choudhary, Northwestern University
* Jialin Liu, Lawrence Berkeley National Laboratory
* Brian Cooper, Google Inc.
* Ewa Deelman, University of Southern California
* Murat Demirbas, University at Buffalo
* Xu Yang, Amazon Inc.
* Zhou Zhou, Salesforce Inc.
* Kun Feng, Illinois Institute of Technology
* Anthony Kougkas, Illinois Institute of Technology
Contact
All questions about submissions should be emailed to lit1(a)ornl.gov, zhang.boyu84(a)gmail.com or guox(a)ornl.gov.