Tutorials

Details about the tutorials can be found below the time table.
Monday, June 10, 2013, Room announced locally
Friday, June 14, 2013
------------------------------------

This half-day tutorial will introduce the attendee to some of the issues of parallel programming for multicore systems. We will discuss various models used for creating and then efficiently managing large numbers of "picothreads." The tutorial will first cover the basic technique of "divide and conquer" as it applies to splitting computations into large numbers of separate sub-computations. We will provide examples using Intel's Cilk+ language, as well as Go, Rust, and ParaSail, three new parallel programming languages. The tutorial will then go on to investigate the "work-stealing" scheduling mechanism used by the Cilk+ run-time, Intel's Threading Building Blocks library, and the ParaSail virtual machine. Work stealing is an efficient way to handle the large numbers of very small "picothreads" created in abundance by these parallel programming technologies. We will also discuss the issues of managing storage to provide safety and separation between concurrent tasks, including per-task heaps, unique pointers, and region-based storage management. We will include a short discussion of heterogeneous parallel programming, using auxiliary chips such as Graphics Processing Units (GPUs) as general-purpose processors (GPGPU).

Intended audience: Intermediate to advanced knowledge of programming, with some understanding of multi-threading/multi-tasking issues, including race conditions and synchronization.

Reasons for attending: Attendees will learn the various paradigms for creating algorithms that take advantage of the growing number of multicore processors while avoiding excessive synchronization overhead. Attendees will also learn the theory and practice of "work stealing," a multicore scheduling approach being adopted in numerous multicore languages and frameworks, as well as the trade-offs associated with different multicore storage-management approaches.
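The divide-and-conquer plus work-stealing combination described above can be sketched outside the tutorial's own languages: Java's ForkJoinPool is itself a work-stealing scheduler, where each fork() enqueues a lightweight task and idle workers steal queued tasks from busy ones. A minimal parallel array sum, offered only as an illustrative sketch (the class name and threshold are ours, not the tutorial's material):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum over an array, run by a work-stealing pool.
public class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;   // below this, sum sequentially
    private final long[] data;
    private final int lo, hi;

    ParallelSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // base case: small enough
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;                // divide
        ParallelSum left  = new ParallelSum(data, lo, mid);
        ParallelSum right = new ParallelSum(data, mid, hi);
        left.fork();                              // schedule left half; may be stolen
        return right.compute() + left.join();     // conquer: combine both halves
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long total = ForkJoinPool.commonPool()
                                 .invoke(new ParallelSum(data, 0, data.length));
        System.out.println(total);                // sum of 1..100000
    }
}
```

The cut-off threshold is the practical point hidden in "large numbers of separate sub-computations": recursion creates many more tasks than cores, and the work-stealing scheduler balances them.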
------------------------------------

Most companies have developed coding standards (often because having one is a requirement for certification), but few have conducted a real analysis of the value, consistency, and efficiency of their coding standard. This tutorial presents the challenges of establishing a coding standard, not just for the sake of having one, but with the goal of actually improving the quality of software. This implies not only having "good" rules, but also having rules that are understood, accepted, and adhered to by the programming team. The issue of automatically checking the rules is also fundamental: experience shows that no manual checking can cover the programming rules to a satisfactory extent. The tutorial presents the tools available, and criteria for choosing such a tool.

Level: Intermediate.

Expected audience experience: No special requirement.

Biography: JP Rosen is a professional teacher, teaching Ada (since 1979, when it was preliminary Ada!), methods, and software engineering. He runs Adalog, a company specialized in providing training, consultancy, and services in all areas connected to the Ada language and software engineering. He is chairman of AFNOR's (the French standardization body) Ada group, AFNOR's spokesperson at WG9, a member of the Vulnerabilities group of WG9, and chairman of Ada-France.
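Automatic checking is the crux of the argument above. As a toy sketch of the principle (in Java, with two made-up rules: a line-length limit and a ban on goto), consider the checker below. Real rule checkers operate on the syntax tree with full semantic information rather than on raw text, so this only illustrates why mechanical checkability matters when writing a rule:

```java
import java.util.ArrayList;
import java.util.List;

// Toy automated coding-rule checker over raw source lines.
// Both rules here are hypothetical examples, not a recommended standard.
public class RuleChecker {
    static final int MAX_LINE = 79;   // hypothetical rule: maximum line length

    public static List<String> check(String[] lines) {
        List<String> findings = new ArrayList<>();
        for (int n = 0; n < lines.length; n++) {
            if (lines[n].length() > MAX_LINE)
                findings.add("line " + (n + 1) + ": exceeds " + MAX_LINE + " characters");
            if (lines[n].contains("goto"))
                findings.add("line " + (n + 1) + ": use of goto is forbidden");
        }
        return findings;
    }
}
```

A rule that cannot be expressed as a decidable check like this (e.g. "names shall be meaningful") is exactly the kind that ends up unenforced.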
------------------------------------

Requirements form the basis for all modern system development. They establish the stakeholders' expectations for the system to be developed/delivered, and they evolve into the “as-built” specifications for the system after it is completed. Requirements fall into natural categories and must be managed consistently with their category, including defining a verification approach. During construction, requirements naturally evolve, but this evolution must be carefully controlled to avoid unexpected perturbations to the development plan. This tutorial discusses the technical basis of requirements, addresses shortcomings in current practices, and provides guidance for enhanced practices that address the historic shortcomings.
------------------------------------

In spite of the high-level abstraction benefits of automatic tracing garbage collection, current prevailing sentiment within the safety certification community is that a simpler memory model is required for the most rigorous levels of software safety certification. Thus, the draft JSR-302 specification for safety-critical Java relies on scope-based memory allocation rather than tracing garbage collection. For each thread, the associated scopes are organized as a stack of memory allocation regions. To eliminate the possibility of dangling pointers, objects residing in outer-nested scopes are never allowed to refer to objects residing in inner-nested scopes. The scoped memory model for JSR-302 is a simplification of the RTSJ model. JSR-302 enforces a strict hierarchy of scopes and distinguishes private scopes, which can be seen only by one thread, from mission scopes, which can be accessed by all the threads that comprise a mission, including threads running within inner-nested sub-missions. The hierarchical memory structure allows implementations to guarantee the absence of memory fragmentation for scope management, unlike the Real-Time Specification for Java from which the JSR-302 specification was derived. In the absence of block structure, it is more difficult in Java than in Ada to safely manage references to scope-allocated objects. Enforcing that outer-nested objects do not refer to inner-nested objects requires, in general, a run-time check at reference assignment time. The run-time check will throw a run-time exception if the assignment is deemed inappropriate. The safety certification evidence for a given safety-critical Java program must therefore include an argument for every reference assignment that it will not cause the program to abort with a run-time exception. Furthermore, the certification evidence must prove that sufficient memory is available to reliably execute each safety-critical task in the system.

This tutorial provides an overview of dynamic memory management in Safety Critical Java and describes two annotation systems that have been designed to support static (compile-time) enforcement of memory safety properties. The first annotation system is described in an appendix to the draft JSR-302 standard. This relatively simple annotation system, which is not considered normative, serves to demonstrate that memory safety can be statically proven without requiring extensive annotations throughout existing library code. The second annotation system is the system implemented in Perc Pico. This annotation system, which is much richer than the draft JSR-302 annotations, has been in experimental use for over five years. During that time, tens of thousands of lines of experimental application code have been developed, with the experience motivating a variety of refinements to the original design. Both annotation approaches allow static verification to prove that illegal reference assignment exceptions will not be thrown at run time.
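The scope rule and its run-time check can be pictured with a toy model: an object may refer only to objects in its own scope or an outer (longer-lived) scope, enforced at reference-assignment time. The names below (Scope, ScopedObject, the check itself) are invented for illustration in plain Java; they are not the real safety-critical Java API:

```java
// Toy model of the JSR-302 scope rule: references may point "outward"
// (to longer-lived scopes) but never "inward" (to shorter-lived scopes).
class Scope {
    final Scope outer;                    // enclosing scope; null for the root
    Scope(Scope outer) { this.outer = outer; }

    // true if 'this' is the same scope as 's' or encloses it
    boolean enclosesOrIs(Scope s) {
        for (Scope cur = s; cur != null; cur = cur.outer)
            if (cur == this) return true;
        return false;
    }
}

class ScopedObject {
    final Scope scope;
    private ScopedObject ref;             // checked on every assignment
    ScopedObject(Scope scope) { this.scope = scope; }

    void setRef(ScopedObject target) {
        // run-time check at reference-assignment time: the target must live
        // at least as long as the referring object, otherwise 'ref' could
        // dangle when the inner scope is popped off the scope stack
        if (!target.scope.enclosesOrIs(this.scope))
            throw new IllegalStateException("illegal reference assignment");
        this.ref = target;
    }
}
```

An object in a private scope may thus refer to a mission-scope object, but a mission-scope object attempting to refer into a private scope triggers the exception; the annotation systems described above exist to prove at compile time that this exception can never fire.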
------------------------------------

ASIS (Ada Semantic Interface Specification) is an ISO standard (ISO/IEC 15291:1999) that defines an API for analysing Ada programs. In practice, an ASIS implementation is often (but not always) tied to a compiler. It can be seen as a way to browse the decorated abstract syntax tree of the program. Ada is a sophisticated language: simple-minded tools that do not account for visibility rules, type and overloading resolution, etc., are unable to do any serious work. The benefit of using ASIS is that it frees the developer of an Ada tool from rewriting half of an Ada compiler. This tutorial is intended for those who want to write a tool that processes Ada code, or who are just interested in learning how the various ASIS-based tools work. No knowledge of compilation techniques is required; the necessary elements are presented as part of the tutorial. Finally, the ASIS standard, which has not changed since Ada 95, is currently undergoing an upgrade to Ada 2012. The tutorial concludes with the current evolution of the proposal for the upcoming standard.

Level: Intermediate.

Expected audience experience: Casual Ada experience.

Biography: see Tutorial 2.
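The "browse the decorated abstract syntax tree" working style can be pictured with a miniature model: tree nodes where each identifier carries a decoration (a link back to its declaration), and a traversal that collects every reference. This is plain Java with invented names, not the ASIS API; ASIS provides the equivalent for real Ada programs with full visibility and overloading information already resolved:

```java
import java.util.ArrayList;
import java.util.List;

// Miniature "decorated" syntax tree and a query traversing it.
abstract class Node {
    abstract void collectIdentifiers(List<String> out);
}

class Identifier extends Node {
    final String name;
    final String declaredAt;              // the "decoration": link to the declaration
    Identifier(String name, String declaredAt) {
        this.name = name;
        this.declaredAt = declaredAt;
    }
    void collectIdentifiers(List<String> out) {
        out.add(name + " declared at " + declaredAt);
    }
}

class BinaryOp extends Node {             // e.g. the '+' in "X + Y"
    final Node left, right;
    BinaryOp(Node left, Node right) { this.left = left; this.right = right; }
    void collectIdentifiers(List<String> out) {
        left.collectIdentifiers(out);     // recursive tree browsing
        right.collectIdentifiers(out);
    }
}
```

A cross-reference browser, a coding-rule checker, or a metrics tool is, at heart, such a traversal with a different action at each node kind.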
------------------------------------

The practice of verification and validation (V&V) is a key ingredient of any software development effort. While often thought of as being just testing, V&V actually consists of a variety of practices, including reviews, inspections, and audits. An effective selection and application of appropriate V&V practices can increase product quality and dependability as well as assist in meeting cost and schedule goals. In this tutorial, we examine the nature of V&V as applied to software and present techniques that have been shown effective. We also discuss their individual strengths and weaknesses, and provide advice on how to select the appropriate practices based on the nature of the system under development.
------------------------------------

The tutorial introduces entity-life modeling (ELM), a design approach for reactive, multitask software, that is, software that responds to events in the environment as they occur. It is not a multi-step method but rather a pattern-based extension of object orientation into the time dimension: the central idea is that the task architecture should reflect concurrency that exists in the problem. The tutorial follows the presenter's book "Design of Multithreaded Software: The Entity-Life Modeling Approach" (IEEE Computer Society/Wiley, 2011) but uses Ada terminology. ELM was originally developed with Ada tasking in mind but works with Real-Time Java as well. The tutorial is illustrated with multiple Ada examples.

Level: Intended for architects, designers, and programmers of real-time and interactive software, as well as software-engineering academics and students interested in concurrency. If tasking is considered an "advanced" aspect of Ada, the level of the tutorial is advanced. It assumes general knowledge of tasking or threading.

Reasons for attending: Understand and eventually learn the ELM way of designing reactive, multitask software.

Biography: Dr. Bo Sandén began his career as a software developer in industry, where he had the opportunity to study and design multithreaded software. In 1986-87, he was a Visiting Associate Professor in the pioneering software-engineering program at the Wang Institute, Tyngsboro, MA. As an Associate Professor at George Mason University, Fairfax, VA, 1987-1996, he helped create a master's program in software systems engineering. Since 1996, he has been a Professor of Computer Science at Colorado Technical University in Colorado Springs, where he has taught at the undergraduate and master's levels and now exclusively teaches and directs student research in the Doctor of Computer Science program. Dr. Sandén is the inventor of entity-life modeling and the author of "Design of Multithreaded Software: The Entity-Life Modeling Approach." He gave this tutorial at Ada-Europe 2012 in Stockholm, June 2012, and at the ACM conference on High Integrity Language Technology, HILT 2012, in Boston, December 2012.
------------------------------------

How do you verify that your software really does what you think, all the time, in time? This tutorial covers the fundamentals of testing real-time software, focusing on issues that affect embedded and real-time systems, such as software timing, performance, and structural code coverage on-target. We analyse the differences between on-target and on-host testing and examine the challenges of working in embedded systems. Different ways of getting access to an embedded computer are discussed, including the impact that measuring has on the software under test (the "probe effect"). We look specifically at timing issues, measuring and analysing worst-case execution time and other performance metrics, and spend a little time understanding optimization issues. Structural code coverage measurements, including MC/DC, are explained, as well as their benefit and relevance to reliable software testing. The relevant objectives of DO-178B and the new automotive standard ISO 26262 are discussed. Finally, we cover other software verification issues that arise, such as verifying complex constraints and sequences. This tutorial includes interactive sessions, and there is an element of practical work in Ada and other languages. Topics: testing on-host and on-target; problems of testing real-time software; working on embedded targets; the probe effect; timing issues; performance metrics; worst-case execution time (techniques, theory, and practice); optimization issues; structural code coverage and MC/DC coverage; DO-178B/C and ISO 26262; verifying sequences and other constraints.

About the presenter: Dr Ian Broster is a founder and director of Rapita Systems Ltd, a company specializing in on-target software verification. He is an experienced, lively lecturer who has given numerous training courses, lectures, and presentations on this and other topics. His previous Ada-Europe tutorials consistently received excellent feedback. He has been involved with Ada since 1995 and earned his PhD at the Real-Time Systems Research Group of the University of York.
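As a concrete illustration of the MC/DC criterion mentioned above (our example, not the tutorial's): for each condition in a decision, MC/DC requires a pair of tests in which only that condition changes and the decision outcome changes with it. For the decision a && (b || c), four vectors suffice (n+1 tests for n conditions), far fewer than the 8 of exhaustive multiple-condition coverage:

```java
// Minimal MC/DC test-vector set for the decision a && (b || c).
public class McDcDemo {
    static boolean decision(boolean a, boolean b, boolean c) {
        return a && (b || c);
    }

    public static void main(String[] args) {
        // vector:            a      b      c       outcome
        boolean t1 = decision(true,  true,  false); // true
        boolean t2 = decision(false, true,  false); // false: only a differs from t1,
                                                    //        so a is shown independent
        boolean t3 = decision(true,  false, false); // false: only b differs from t1,
                                                    //        so b is shown independent
        boolean t4 = decision(true,  false, true);  // true:  only c differs from t3,
                                                    //        so c is shown independent
        System.out.println(t1 + " " + t2 + " " + t3 + " " + t4);
    }
}
```

Demonstrating the independent effect of every condition is exactly the evidence DO-178B asks for at its most critical level (Level A).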
------------------------------------

This tutorial explains how to implement a Service-Oriented Architecture (SOA) for reliable systems using an Enterprise Service Bus (ESB) and the Ada Web Server (AWS). The first part of the tutorial describes the terminology of Service-Oriented Architectures, including service, service registry, service provider, service consumer, SOAP, REST, and the Web Service Description Language (WSDL). It also presents principles of SOA, including loose coupling, encapsulation, composability of web services, and statelessness of web services, and covers the benefits of SOA and the organizations that are supporting SOA infrastructure.

The second part covers the Enterprise Service Bus, including definitions, capabilities, benefits, and drawbacks. The tutorial discusses the difference between SOA and an ESB, as well as some of the commercially available ESB solutions on the market. The Mule ESB is explored in more detail and several examples are given.

In the third part, the tutorial covers the Ada Web Server (AWS), built using the Ada programming language. The tutorial covers the capabilities of AWS, explains how to build and install AWS, and how to include the server in an Ada application. The tutorial demonstrates how to build a call-back function in AWS and build a response to a SOAP message. Finally, the tutorial explains how to connect an AWS server to an ESB endpoint. AWS is a key component in building a SOA for a reliable system. This capability allows the developer to expose services in a high-integrity system using the Ada and SPARK programming languages.
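The call-back style described in the third part is language-neutral: a callback receives a request, computes an application-level result, and wraps it in a SOAP envelope. The Java sketch below only mimics that shape; the element names (TemperatureResponse, Value) and method names are invented, and the tutorial itself teaches the real AWS API in Ada:

```java
// Sketch of the response half of a SOAP callback: wrap an application
// result in a minimal SOAP 1.1 envelope. Illustrative only.
public class SoapResponder {
    static String soapEnvelope(String body) {
        return "<?xml version=\"1.0\"?>"
             + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>" + body + "</soap:Body>"
             + "</soap:Envelope>";
    }

    // a "callback": turns an application-level result into a SOAP reply
    static String temperatureResponse(double celsius) {
        return soapEnvelope("<TemperatureResponse><Value>" + celsius
                          + "</Value></TemperatureResponse>");
    }
}
```

In AWS the analogous callback is registered with the server and invoked per request; the statelessness principle from the first part means the callback derives everything it needs from the request itself.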
------------------------------------

The tutorial will summarize the main aspects of the Ravenscar profile, as well as some other basic real-time facilities available in Ada 2012. Programming patterns for analyzable real-time systems will be described, together with software development techniques for high-integrity systems. The use of the GNAT GPL for LEGO MINDSTORMS NXT toolchain will be described in the context of a comprehensive example, with a LEGO MINDSTORMS NXT robot used as a platform for cross-development and debugging tools.

Level: Intermediate. The tutorial is aimed at project managers, systems engineers, and developers of critical software systems.

Reasons for attending: Attendees will learn the main concepts and techniques needed to develop high-integrity real-time systems on a representative platform for robotic applications. A LEGO MINDSTORMS NXT will be used for a comprehensive example of software development using GNAT GPL for LEGO MINDSTORMS NXT.

Presenters: Juan Antonio de la Puente is a professor at Universidad Politécnica de Madrid (UPM). He has been teaching Ada and real-time systems for more than 20 years. As head of the real-time systems group at UPM, he has led the development and evolution of the Open Ravenscar real-time Kernel (ORK) and the work in UPM on GNAT GPL for LEGO MINDSTORMS NXT, which includes the port to GNU/Linux hosts as well as the integration of tools for developing real-time embedded software.
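One of the analyzable patterns such tutorials build on is the periodic task that releases itself at absolute times (Ada's "delay until"), so that jitter in one cycle does not accumulate into drift. Sketched here in Java rather than Ada tasking, with invented names and a bounded cycle count so the demo terminates; it is a rendering of the idiom, not the tutorial's code:

```java
// Periodic-task pattern: compute each release as an ABSOLUTE time,
// never as a relative delay, so timing error does not accumulate.
public class PeriodicTask implements Runnable {
    private final long periodMillis;
    private final int cycles;           // bounded here so the demo terminates
    volatile int completed = 0;

    PeriodicTask(long periodMillis, int cycles) {
        this.periodMillis = periodMillis;
        this.cycles = cycles;
    }

    @Override
    public void run() {
        long nextRelease = System.currentTimeMillis();
        for (int i = 0; i < cycles; i++) {
            // ... the periodic work would go here ...
            completed++;
            nextRelease += periodMillis;              // absolute, not relative
            long delay = nextRelease - System.currentTimeMillis();
            if (delay > 0) {
                try { Thread.sleep(delay); } catch (InterruptedException e) { return; }
            }
        }
    }
}
```

A relative sleep of periodMillis after the work would instead add the work's execution time to every period; the absolute-release form is what makes the task's timing analyzable.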
------------------------------------

Ada is well known for its rich semantics for multicore and distributed systems. But do all Ada applications use those strengths in the right places? Ada tasking and the distributed semantics are powerful, but they can also introduce issues concerning test strategies, dynamic architecture strategies, and type constraints. Participants will learn how to easily maximize the use of multicore and distributed systems in their applications.

Description of the topic: Learn how to use multicore and distributed systems in your application: for example, test strategies (monotask profile), dynamic architecture strategies (e.g. an alternating per-component tasking strategy, i.e. a 'vertical tasking strategy', or a per-data-flow tasking strategy, i.e. a 'horizontal tasking strategy'), and type constraints for the DSA (limited types ...).

Outline of the presentation: An overview of multicore and distributed systems, Ada capabilities, and corresponding existing tools. Participants will be able to use a framework called Rachis in real-life project examples. Rachis will host user components and will allow efficient use of multicore and distribution (DSA). Examples will include maximizing multicore use of your application components, creating a multicore distributed version of your application (without any code change), and customizing multicore and distributed-system policies.

Level of the tutorial: Introductory and Intermediate.

Recommended audience: Software engineers.

Reasons for attending: This tutorial gives key knowledge and experience to software engineers willing to maximize the use of multicore and distributed systems in their applications.

Presenter lecturing expertise: David Sauvage graduated from ESME Sudria (a French engineering school) in 2004. He started as a software engineer at Thales, where he discovered Ada. Working on tactical data link product lines, he then became an agile software architect. In 2010, he formed AdaLabs Ltd (http://adalabs.com), a company specialized in Ada-based technologies and services, located in the Republic of Mauritius. David built his expertise by enhancing multicore and distribution support in existing industrial software product lines and designing distributed test environments.
------------------------------------

April 18, 2013