posted by organizer: shashikantilager

BigDataPipelines 2019 : International Workshop on Multi-tier Big Data Pipelines from Edge to the Cloud Data Centers @HiPC 2019


When Dec 17, 2019 - Dec 20, 2019
Where Hyderabad, India
Submission Deadline Sep 20, 2019
Notification Due Oct 14, 2019
Final Version Due Oct 28, 2019
Categories    fog computing   cloud computing   big data   artificial intelligence

Call For Papers

Today, a huge amount of data is being generated by Internet of Things (IoT) devices such as smartphones, sensors, cameras, cars, and robots. Big Data platforms (such as Hadoop and Spark) exist to process this data. Conventionally, they are deployed in centralized Data Centers, which, however, falls short of addressing the time-critical requirements of applications due to the high latency between the Edge, where the data are generated, and the Data Centers, where they are processed. The emerging Edge/Fog computing paradigm promises to solve this problem by seamlessly integrating hardware and software resources across multiple computing tiers, from the Edge to the Data Center/Cloud. Since computing resources at the Edge may be power- and capacity-constrained, it is necessary to invent new lightweight platforms and techniques that seamlessly interact, sense, execute, and produce results with very low latency, while at the same time addressing other high-level requirements of applications, such as security and privacy.

These problems raise many challenges that must be addressed through the invention of new architectures, methods, algorithms, and solutions that:

Integrate and process data from underlying IoT platforms and services
Smartly select data streams for processing
Address the four "V"s of the Big Data problem: volume, variety, velocity, and veracity
Improve energy-efficient management of resources and task processing
Address the QoS and time-critical aspects of smart applications
Facilitate intelligent integration of information arising from various sources
Address the requirements of very dynamic Big Data pipelines (e.g. moving smartphones, sensors, cars, robots with dynamically changing requirements for processing)
Provide orchestration methods and scheduling policies that address dependability, reliability, availability and other high-level application requirements
Adequately address the inherent variability of resources from the Edge to the Data Centers
Provide new architectures which use the powerful computing resources of Data Centers, while at the same time providing optimal QoS to applications
Address the decentralisation aspects through the use of Blockchain-based Smart Contracts and Oracles
Implement distributed Artificial Intelligence methods from the Edge to the Data Center/Cloud

Special Issue of the Software: Practice and Experience journal
Authors of selected best papers will be invited to submit extended versions to a Special Issue of the Software: Practice and Experience journal.

The workshop aims to bring together scientists and practitioners interested in the intricacies of implementing large-scale Big Data pipelines. Our intention is to discuss the problems, challenges, new approaches, and technologies in this emerging area of research; to shortlist the most challenging problems; to shape future research directions; and to foster the exchange of ideas, standards, and common requirements. We are looking for high-quality work that addresses various aspects of these problems.

Manuscript Guidelines:
Submitted manuscripts should be structured as technical papers and may not exceed six (6) single-spaced double-column pages using 10-point font on 8.5 × 11 inch pages (IEEE conference style), including figures, tables, and references. See IEEE style templates at this page for details.
Electronic submissions must be in the form of a readable PDF file. All manuscripts will be reviewed by the Program Committee and evaluated on originality, relevance of the problem to the conference theme, technical strength, rigor in analysis, quality of results, and organization and clarity of presentation of the paper.
Submitted papers must represent original unpublished research that is not currently under review for any other conference or journal. Papers not following these guidelines will be rejected without review and further action may be taken, including (but not limited to) notifications sent to the heads of the institutions of the authors and sponsors of the conference.
Presentation of an accepted paper at the workshop is a requirement of publication. Any paper that is not presented at the conference will not be included in the proceedings.

Important Dates:
Paper Submission: September 20, 2019
Notification to Authors: October 14, 2019
Workshop Camera-Ready: October 28, 2019
Workshop Date: December 17-20, 2019

Submission Portal
Easychair Submission Link:
