15-20 March 2015
Asia/Taipei timezone
Physics (including HEP) and Engineering Applications
Submissions should report on experience with physics and engineering applications that exploit grid and cloud computing services, applications that are planned or under development, or application tools and methodologies. Topics of interest include:

•End-user data analysis
•Management of distributed data
•Applications level monitoring
•Performance analysis and system tuning
•Workload scheduling
•Management of an experimental collaboration as a virtual organization
•Comparison between grid and other distributed computing paradigms as enablers of physics data handling and analysis
•Expectations for the evolution of computing models drawn from recent experience handling extremely large and geographically diverse datasets.

Biomedicine & Life Sciences Applications
During the last decade, research in Biomedicine and Life Sciences has dramatically changed thanks to the continuous developments in High Performance Computing and highly Distributed Computing Infrastructures such as grids and clouds. This track aims at discussing problems, solutions and application examples related to this area of research, with a particular focus on non-technical end users.
Submissions should concentrate on practical applications and solutions in the fields of Biomedicine and Life Sciences, such as:

•Drug discovery
•Structural biology and bioinformatics
•Medical imaging
•Public health applications / infrastructures
•High throughput data processing/analysis
•Integration of semantically diverse data sets and applications
•Cloud-based application examples
•Distributed data, computing and services
•Data management issues

Earth & Environmental Sciences & Biodiversity Applications
Today, it is well understood that precise, long-term observations are essential to quantify the patterns and trends of on-going environmental changes, and that continuously evolving models are needed to integrate our fundamental knowledge of processes with the geospatial and temporal information delivered by various monitoring activities. This makes it critically important that the environmental sciences community put a strong emphasis on analysing best practices and adopting common solutions for the management of heterogeneous data and data flows.

Natural and environmental sciences are placing an increasing emphasis on understanding the Earth as a single, highly complex, coupled system of living and non-living components. It is well accepted, for example, that feedbacks involving oceanic and atmospheric processes can have major consequences for the long-term development of the climate system, which in turn affects biodiversity and natural hazards and can control the development of the cryosphere and lithosphere. Natural disaster mitigation is one of the most critical regional issues in Asia.

Despite the diversity of environmental sciences, many projects share the same significant challenges. These include the collection of data from multiple distributed sensors (potentially in very remote locations), the management of large low-level data sets, the requirement for metadata fully specifying how, when and where the data were collected, and the post-processing of those low-level data into higher-level data products which need to be presented to scientific users in a concise and intuitive form.

This session will address in particular how these challenges are being handled with the aid of the e-Science paradigm.

Humanities, Arts, and Social Sciences (HASS) Applications
Researchers working in the humanities, arts, and social sciences are using advanced computing infrastructures such as grids and clouds to address the grand challenges of their disciplines.

For example, social scientists are faced with a deluge of data from a range of sources such as social media, administrative data, sensor networks or transactional data that challenge the traditional ways to study and interpret the social world. Consequently, researchers are now using new digital methods to further our understanding of societies in the modern age and, in particular, our understanding of the role that information infrastructures play in the ongoing development of societies worldwide. Gaining new insights today often involves linking complementary datasets and models at local, national, regional and global scales.

Similarly, in the humanities and the arts, researchers from a wide range of disciplines are interested in managing, linking and analyzing distributed datasets and corpora. There has been a significant increase in the digital material available to researchers, through digitization programmes but also because more and more data is now “born digital”.

New digital research methods and technologies in the humanities, arts and social sciences raise the question of whether common models of usage exist that could be underpinned by a generic e-Infrastructure. Engagement with data and modeling is not only helping HASS researchers ask new questions about behavior, culture and history, but also provides common ground with science and engineering in framing and solving grand challenge problems.

These sessions will focus on experiences in developing e-Research methods and tools that go beyond single application demonstrators. Their wider applicability may be based on a set of common concerns, common approaches, or reusable tools and services. We also specifically invite contributions concerned with teaching e-Research approaches at undergraduate and postgraduate levels, as well as other initiatives to “bridge the chasm” between early adopters and the majority of researchers.

Virtual Research Environments (including middleware, tools, services, workflows, etc.)
Scientists want to gain new knowledge and insights; they do not want to work with cumbersome solutions in order to use federated distributed computing and data resources. Virtual Research Environments (VREs) offer a sophisticated solution, providing intuitive, easy-to-use and secure access to federated resources while hiding (to the desired degree) the complexity of the federated infrastructure and the heterogeneity of the resources and the middleware that brings them together. Behind the scenes, VREs comprise tool, middleware and portal technologies, workflow automation, as well as security solutions. Topics of interest include but are not limited to:

•Real-world experiences with VREs to gain new scientific knowledge
•Middleware technologies, tools, services beyond state-of-the-art for VREs
•Innovative technologies to enable VREs on modern devices such as smartphones
•One-step-ahead workflow automation integrated in VREs

Data Management
Data management encompasses the organization, distribution, storage, access, validation, and processing of digital assets. Data management requirements can be characterized by data life stages that evolve from shared project collections, to formally published libraries, to archives of reference collections. Papers are sought that demonstrate the management of data through the multiple phases of the scientific data life cycle, from creation to re-use. Of particular importance are demonstrations of systems that process massive data collections.

Big Data
Big Data is not new to e-Science. Continuous innovation in analysis models, resource federation, distributed infrastructures, dynamic data and computing, workflow management, and smart applications is vital to the storage, management and analysis of Big Data. Characterized by Volume, Variety and Velocity, Big Data is creating a new generation of scientific processes and discovery mechanisms. To maximise the value of data and to speed up novel discoveries, multidisciplinary approaches, dynamic provisioning of storage resources, and long-term data preservation are the primary focuses at present. This track aims to attract novel research, applications and technologies for Big Data analysis. Submissions should address conceptual modelling as well as techniques related to Big Data analytics.

Infrastructure & Operations Management
This session will cover the current state of the art and recent advances in the operation and management of large-scale research infrastructures, especially as related to enabling and encouraging their use by the research communities. The scope of this track includes advances in high-performance networking (including the IPv4-to-IPv6 transition), monitoring tools and metrics, service management (ITIL and SLAs), security, improving service and site reliability, interoperability between infrastructures, user and operational support procedures, and other topics relevant to providing a trustworthy, scalable and federated environment for general grid and cloud operations.

Infrastructure Clouds and Virtualisation
This track will focus on the use of Infrastructure-as-a-Service (IaaS) cloud computing and virtualization technologies in large-scale distributed computing environments in science and technology. We solicit papers describing underlying virtualization and "cloud" technology, scientific applications and case studies related to using such technology in large-scale infrastructure, as well as solutions overcoming challenges and leveraging opportunities in this setting. Of particular interest are results exploring the usability of virtualization and infrastructure clouds from the perspective of scientific applications, the performance, reliability and fault-tolerance of the solutions used, and data management issues. Papers dealing with cost, pricing and cloud markets, with security and privacy, as well as with portability and standards, are also most welcome.

Interoperability

Interoperability of federated ICT-infrastructures (with Grid, Cloud, HPC, HTC, HPTC, data or network resources) is mandatory to address the needs of modern e-science. Today, with simulation as the third pillar and data exploration as the fourth pillar alongside theory and experiment, researchers from all scientific disciplines make intensive use of multiple resource types in various e-infrastructures at the regional, national and international levels. To enable easy and intuitive access, interoperability is of major importance. Interoperability spans a spectrum from rapidly prototyped solutions to open-standards-based interoperability of components interacting via standardized interfaces. Topics of interest include but are not limited to:

•Real-world production use cases of scientific applications using standards-based, interoperable ICT-infrastructures
•Methods, design principles and solutions for standards-based interoperability
•Operational security solutions enabling secure interoperability

Business Models & Sustainability
Understanding how a particular e-Infrastructure component can be created and sustained requires answering two pairs of questions: what resources are needed to create it, and how can those resources be assembled and applied? And what resources are needed to sustain it, and how can they be assembled and applied? Over the last decade considerable effort has been invested in creating e-Infrastructure components. Discussions are now focusing on the second aspect - sustainability - and the supporting business models describing the mechanisms employed by each e-Infrastructure component to create value (economic, social, etc.) for its consumers.

This track seeks contributions around business models and sustainability relating to e-Infrastructure components including:
• Business models around e-Infrastructures and their components
• Sustainability of e-Infrastructure components
• Initiatives to understand the cost of delivering e-Infrastructures components
• Planning strategies and methodologies around e-Infrastructure components
• Strategies for understanding the needs of those who use e-Infrastructures and for growing the user base by evolving the offered services

Highly Distributed Computing Systems
This track will highlight the latest research achievements in interoperability between commercial clouds, conventional grids, desktop grids and volunteer computing. The topics will cover new technologies in the related software frameworks, recent application developments, as well as infrastructure operation and user support techniques at all levels: campus, institutional, and very large-scale cyberscience computing.

Special focus will be on the following areas:
•Interoperability with, and integration into, other e-infrastructures
•Virtualization techniques
•Data management
•Energy efficiency and Green computing aspects
•Quality of service
•Novel uses of volunteer computing and Desktop Grid
•Best practices and (social) impacts
•Science gateways and other high level user interfaces

High Performance & Technical Computing (HPTC)
With the growing availability of computing resources such as public grids (e.g., EGI and OSG) and public/private clouds (e.g., Amazon EC2), it has become possible to develop and deploy applications that exploit as many computing resources as possible. However, it is quite challenging to effectively access, aggregate and manage all available resources, which are usually under the control of different resource providers. This session solicits recent research and development achievements and best practices in exploiting the wide variety of computing resources available. HPTC resources include dedicated High Performance Computing (HPC), High Throughput Computing (HTC), GPUs and many-core systems.

The topics of interest include, but are not limited to, the following:
•Experiences, use cases and best practices on the development and operation of large-scale HPTC applications
•Delivery of and access to HPTC resources through grid and cloud computing (as a Service) models
•Integration and interoperability to support coordinated federated use of different HPTC e-infrastructures
•Use of virtualization techniques to support portability across different HPTC systems
•Robustness and reliability of HPTC applications and systems over long time scales