
SWEET 2014 : 3rd International Workshop on Scalable Workflow Enactment Engines and Technologies


Link: http://sites.google.com/site/sweetworkshop2014
 
When Jun 22, 2014 - Jun 22, 2014
Where Snowbird, Utah, USA
Submission Deadline Mar 20, 2014
Categories    databases
 

Call For Papers

One of the enduring goals of computer systems engineering is to build systems that are easy to use and understand while putting great computational power at the fingertips of end users. Developments in Big Data processing that make it accessible to programmers and non-programmers alike bring this goal within reach for data analytics and scientific data processing, by enabling simple access to large pools of data storage and computational resources. More specifically, Big Data processing frameworks with declarative workflow languages are facilitating the convergence of data-intensive, workflow-based processing with traditional data management, giving users the best of both worlds. Workflows are used extensively both in data analytics and in computational science. Common to the broad range of workflow systems currently in use are relatively simple programming models, usually exposed through a visual programming style and backed by a well-defined model of computation. While the flexibility of workflows for rapid prototyping of science pipelines makes them appealing to computational scientists, recent applications of workflow technology to data-intensive science show the need for a robust underlying data management infrastructure. At the same time, on the data management side of science and data analytics, workflow-like models and languages are beginning to emerge that allow users who are close to the data domain, but have no application development resources, to assemble complex data processing pipelines.
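To make the declarative workflow idea concrete, below is a minimal, hypothetical sketch in Python (standard library only, Python 3.9+): the pipeline is declared as a DAG of named tasks, and a small "engine" derives a valid execution order from the declaration alone. The task names and bodies are illustrative placeholders, not taken from any specific system mentioned in this call.

    # Declarative pipeline: each task names the tasks it depends on.
    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    pipeline = {
        "ingest":    set(),          # read raw data (placeholder)
        "clean":     {"ingest"},     # depends on ingest
        "aggregate": {"clean"},      # depends on clean
        "report":    {"aggregate"},  # depends on aggregate
    }

    def run(task: str) -> None:
        # Placeholder standing in for a real data-processing step.
        print(f"running {task}")

    # The "engine": a valid execution order is derived from the
    # declaration, decoupling what the pipeline computes from how
    # it is scheduled, which is the essence of declarative workflow
    # languages converging with data management.
    for task in TopologicalSorter(pipeline).static_order():
        run(task)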

Workshop Focus and Goals
The SWEET 2014 workshop is the third in a series that began with SWEET 2012. The idea for the workshop comes from the observation that rapid progress in models and patterns for cloud computing is enabling a new generation of hybrid database/workflow systems that address large data processing problems in a scalable way.

The collection of papers and talks from SWEET has so far confirmed that such hybrids are emerging not only in e-science but also in Web-scale data processing, e.g., at Google, Yahoo, and Twitter. At the same time, the SWEET'13 papers report robust research on core workflow features, including scheduling, distributed engines, and workflows for HPC architectures. With this in mind, the goal of the SWEET'14 workshop is to bring together researchers and practitioners to explore the potential of scalable processing of large data sets with applications organized as workflows. Following our previous calls, we have identified the following specific areas of interest:

Architectures and performance: convergence of data processing pipeline and workflow processing, associated architectural and performance issues;
Best practices: best practices in data-intensive workflow models and programming paradigms for designing efficient, effective and reusable pipelines;
Usability: lowering barriers to entry into programming data-intensive pipelines, for example by offering declarative and/or graphical interfaces that actively assist in the design process.

Topics
The workshop addresses issues of (i) Architectures and performance, (ii) Models and languages, and (iii) Applications of cloud-based workflows. Topics include, but are not limited to:

Architectures and performance:

architectures for data processing pipelines, data-intensive workflows, DAGs of MapReduce jobs, dataflows, and data-mashups,
cloud-based, scalable workflow enactment,
efficient data storage for data-intensive workflows,
optimizing execution of data-intensive workflows,
workflow scheduling in cloud computing.


Modelling for performance as well as usability:

languages for data processing pipelines, data-intensive workflows, dataflows, and data-mashups,
verification and validation of data-intensive workflows,
programming models for cloud computing,
access control and authorization models, privacy, security, risk and trust issues,
workflow patterns for data-intensive workflows,
interfaces for supporting the design and debugging of complex data-processing pipelines and workflows,
tools for supporting communities for exchanging data-processing pipelines and workflows.


Applications of cloud-based workflow:

big data analytics,
bioinformatics,
data mashups,
semantic web data management,
data-driven journalism.
