The Scarpaz Site
:: Your unreliable online reference documentation on the Scarpaz. ::


Local contents:

 
File: 07a_3.pdf (99 kbytes, 2004-10-08)
 
File: 01227162.pdf (670 kbytes, 2004-08-30)
Title: Managing Dynamic Concurrent Tasks in Embedded Real-Time Multimedia Systems
Authors: P. Yang, P. Marchal, C. Wong, S. Himpe, F. Catthoor, P. David, J. Vounckx, R. Lauwereins
 
File: 01235677.pdf (337 kbytes, 2004-07-30)
Title: Task Concurrency Analysis And Exploration Of Visual Texture Decoder On A Heterogeneous Platform
Authors: Z. Ma, C. Wong, E. Delfose, J. Vounckx, F. Catthoor, S. Himpe, G. Deconinck
Abstract:

The emergence of mobile multimedia terminals has given rise to growing demands for power-efficient and scalable image transmission. Visual Texture Coding (VTC) has attracted increasing attention due to its scalability when transmitting still images. To date, implementations of such VTC decoders have not yet considered the need for energy-performance trade-offs at the system level. We have applied systematic system-level design techniques to analyze the VTC decoder and explore its timing-energy trade-off space using our concurrent task scheduling exploration techniques. The approach presented in this paper allows a system designer to select the optimal heterogeneous platform configuration for a given speed of the VTC decoder while minimizing the global energy consumption.
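
A minimal illustrative sketch of the timing-energy trade-off selection the abstract describes: given a set of candidate platform configurations, pick the lowest-energy one whose decoding time still meets the required speed. The sketch is in Java; the Configuration record, the sample trade-off points and the deadline are hypothetical and not taken from the paper.

    // Hypothetical sketch: select the minimum-energy platform configuration
    // that still meets a given decoding deadline.
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class TradeOffSelector {

        /** One point in the decoder's timing-energy trade-off space. */
        record Configuration(String name, double decodeTimeMs, double energyMj) {}

        /** Minimum-energy configuration that still meets the deadline, if any. */
        static Optional<Configuration> select(List<Configuration> space, double deadlineMs) {
            return space.stream()
                    .filter(c -> c.decodeTimeMs() <= deadlineMs)
                    .min(Comparator.comparingDouble(Configuration::energyMj));
        }

        public static void main(String[] args) {
            List<Configuration> space = new ArrayList<>();
            // Invented trade-off points, as a scheduling-exploration step might produce.
            space.add(new Configuration("1 fast core",               12.0, 9.5));
            space.add(new Configuration("2 slow cores",              18.0, 6.1));
            space.add(new Configuration("1 slow core + accelerator", 25.0, 4.8));

            double deadlineMs = 20.0;  // required decoder speed for this use case
            select(space, deadlineMs).ifPresentOrElse(
                    c -> System.out.println("Chosen: " + c),
                    () -> System.out.println("No configuration meets the deadline"));
        }
    }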

 
File: 01299188.pdf (1.38 Mbytes, 2004-07-30)
Title: High-Level Data-Access Analysis for Characterisation of (Sub)task-Level Parallelism in Java
Authors: R. Stahl, R. Pasko, F. Catthoor, R. Lauwereins, D. Verkest
Abstract:

In the era of future embedded systems, the designer is confronted with multi-processor systems for both performance and energy reasons. Exploiting (sub)task-level parallelism is becoming crucial because instruction-level parallelism alone is insufficient. The challenge is to build compiler tools that support the exploration of task-level parallelism in programs. To achieve this goal, we have designed an analysis framework to evaluate the potential parallelism of sequential object-oriented programs. Parallel-performance and data-access analysis are the crucial techniques for estimating the effects of transformations. We have implemented support for platform-independent data-access analysis and profiling of Java programs, as an extension to our earlier parallel-performance analysis framework. The toolkit comprises automated design-time analysis for performance and data-access characterisation, program instrumentation, program-profiling support and post-processing analysis. We demonstrate the usability of our approach on a number of realistic Java applications.
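
As a toy illustration of why data-access characterisation matters for (sub)task-level parallelism, the Java sketch below records which memory regions each candidate task reads and writes, and treats two tasks as safely parallelizable only when neither writes data the other touches. The class and region names are hypothetical; the actual toolkit instruments programs automatically rather than relying on manual calls like these.

    // Toy data-access profile: per-task read/write sets and a conflict check.
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class DataAccessProfile {
        // For each task name, the set of memory regions it reads or writes.
        private final Map<String, Set<String>> reads = new HashMap<>();
        private final Map<String, Set<String>> writes = new HashMap<>();

        void recordRead(String task, String region)  { reads.computeIfAbsent(task, t -> new HashSet<>()).add(region); }
        void recordWrite(String task, String region) { writes.computeIfAbsent(task, t -> new HashSet<>()).add(region); }

        /** Two tasks may run in parallel only if neither writes what the other touches. */
        boolean conflictFree(String a, String b) {
            return disjoint(writes.get(a), reads.get(b))
                && disjoint(writes.get(a), writes.get(b))
                && disjoint(writes.get(b), reads.get(a));
        }

        private static boolean disjoint(Set<String> x, Set<String> y) {
            if (x == null || y == null) return true;
            return x.stream().noneMatch(y::contains);
        }

        public static void main(String[] args) {
            DataAccessProfile p = new DataAccessProfile();
            // Invented accesses gathered while profiling two candidate subtasks.
            p.recordRead("decodeHeader", "input[0..63]");
            p.recordWrite("decodeHeader", "header");
            p.recordRead("decodeBlock", "input[64..]");
            p.recordWrite("decodeBlock", "pixels");
            System.out.println("Parallelizable: " + p.conflictFree("decodeHeader", "decodeBlock"));
        }
    }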

 
File: aart97.pdf (412 kbytes, 2004-07-29)
 
File: banerjee93.pdf (648 kbytes, 2004-07-29)
Title: Automatic Program Parallelization
Authors: U. Banerjee, R. Eigenmann, A. Nicolau, D.A. Padua
 
File: Bibliography for Task-level Parallelism Extraction.txt (7224 bytes, 2004-07-29)
 
File: chen03jrpm.pdf (457 kbytes, 2004-07-29)
 
File: chen03test.pdf (573 kbytes, 2004-07-29)
 
File: chen98.pdf (85.4 kbytes, 2004-07-29)
 
File: girkar92.pdf (1.07 Mbytes, 2004-07-29)
 
File: gross94task.pdf (103 kbytes, 2004-07-29)
 
File: gupta02coordinated.pdf (409 kbytes, 2004-07-29)
 
File: gupta03spark.pdf (136 kbytes, 2004-07-29)
 
File: gupta99.pdf (329 kbytes, 2004-07-29)
 
File: gupta99automatic.pdf (2.91 Mbytes, 2004-07-29)
Title: Automatic Parallelization of Recursive Procedures
Authors: M. Gupta, S. Mukhopadhyay, N. Sinha
Abstract:

Parallelizing compilers have traditionally focussed mainly on parallelizing loops. This paper presents a new framework for automatically parallelizing recursive procedures that typically appear in divide-and-conquer algorithms. We present compile-time analysis to detect the independence of multiple recursive calls in a procedure. This allows exploitation of a scalable form of nested parallelism, where each parallel task can further spawn off parallel work in subsequent recursive calls. We describe a run-time system which efficiently supports this kind of nested parallelism without unnecessarily blocking tasks. We have implemented this framework in a parallelizing compiler, which is able to automatically parallelize programs like quicksort and mergesort, written in C. For cases where even the advanced symbolic analysis and array section analysis we describe are not able to prove the independence of procedure calls, we propose novel techniques for speculative run-time parallelization, which are more efficient and powerful in this context than analogous techniques proposed previously for speculatively parallelizing loops. Our experimental results on an IBM G30 SMP machine show good speedups obtained by following our approach.
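
For comparison, here is a hand-written Java illustration of the nested parallelism such a framework extracts automatically from divide-and-conquer code (the paper's compiler targets C programs; this is not its output). The two recursive calls of mergesort work on disjoint array sections, so each can run as a parallel task and spawn further parallel work; Java's ForkJoinPool stands in for the run-time system, and the sequential cutoff is an arbitrary assumption.

    // Nested parallelism by hand: the two recursive calls are independent,
    // so each is forked as a task that can itself fork further tasks.
    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;

    public class ParallelMergeSort extends RecursiveAction {
        private static final int CUTOFF = 1 << 12;  // below this size, sort sequentially
        private final int[] a;
        private final int lo, hi;                   // sorts a[lo, hi)

        ParallelMergeSort(int[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

        @Override
        protected void compute() {
            if (hi - lo <= CUTOFF) {
                Arrays.sort(a, lo, hi);
                return;
            }
            int mid = (lo + hi) >>> 1;
            // The two calls touch disjoint sections of the array, hence they are independent.
            invokeAll(new ParallelMergeSort(a, lo, mid), new ParallelMergeSort(a, mid, hi));
            merge(mid);
        }

        private void merge(int mid) {
            int[] left = Arrays.copyOfRange(a, lo, mid);
            int i = 0, j = mid, k = lo;
            while (i < left.length && j < hi) a[k++] = (left[i] <= a[j]) ? left[i++] : a[j++];
            while (i < left.length) a[k++] = left[i++];
        }

        public static void main(String[] args) {
            int[] data = new java.util.Random(42).ints(1_000_000).toArray();
            ForkJoinPool.commonPool().invoke(new ParallelMergeSort(data, 0, data.length));
            System.out.println("sorted: " + isSorted(data));
        }

        private static boolean isSorted(int[] a) {
            for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) return false;
            return true;
        }
    }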

 
File: interleaving.pdf (264 kbytes, 2004-07-15)
 
File: jpt99.pdf (178 kbytes, 2004-07-29)
Title: JPT: A Java Parallelization Tool
Authors: K. Beyls, E. D'Hollander, Y. Yu
 
File: kandemir.pdf (74.5 kbytes, 2004-07-29)
 
File: paradigm95.ps (724 kbytes, 2004-07-29)
 
File: Peng PhD Thesis.pdf (1.90 Mbytes, 2004-09-23)
 
File: promis99.pdf (1.13 Mbytes, 2004-07-29)
 
File: ptask96.pdf (78.0 kbytes, 2004-07-29)
 
File: ramaswamy94.pdf (375 kbytes, 2004-07-29)
 
File: stahl.pdf (346 kbytes, 2004-07-30)
 
File: tgff-codes.pdf (180 kbytes, 2004-09-28)
Title: TGFF: Task Graphs for Free
Authors: R.P. Dick, D.L. Rhodes, W. Wolf
 
File: TL3.3.pdf (129 kbytes, 2004-01-27)
Title: Task Concurrency Analysis And Exploration Of Visual Texture Decoder On A Heterogeneous Platform
Authors: Z. Ma, C. Wong, E. Delfose, J. Vounckx, F. Catthoor
Abstract:

The emergence of mobile multimedia terminals has given rise to growing demands for power-efficient and scalable image transmission. Visual Texture Coding (VTC) has attracted increasing attention due to its scalability when transmitting still images. To date, implementations of such VTC decoders have not yet considered the need for energy-performance trade-offs at the system level. We have applied systematic system-level design techniques to analyze the VTC decoder and explore its timing-energy trade-off space using our concurrent task scheduling exploration techniques. The approach presented in this paper allows a system designer to select the optimal heterogeneous platform configuration for a given speed of the VTC decoder while minimizing the global energy consumption.

 
File: zjava01.pdf (98 kbytes, 2004-07-29)
Title: Run-Time Support for the Automatic Parallelization of Java Programs
Authors: B. Chan, T. S. Abdelrahman
Abstract:

The zJava project aims to develop automatic parallelization technology for programs that use pointer-based dynamic data structures, written in Java. The system exploits parallelism among methods by creating an asynchronous thread of execution for each method invocation in a program. At compile-time, methods are analyzed to determine the data they access, parameterized by their context. A description of these data accesses is transmitted to a run-time system during program execution. The run-time system utilizes this description to determine when an invoked method may execute as an independent thread. The goal of this paper is to describe this run-time component of the zJava system and to report initial experimental results. In particular, the paper describes how the results of compile-time analysis are used at run-time to detect and enforce dependences among threads. Experimental results on a 4-processor Sun multiprocessor indicate that linear speedup may be obtained on sample applications and hence, validate our approach.
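
The sketch below is a heavily simplified Java rendering of that run-time idea, using invented names rather than zJava's actual API: each method invocation is submitted as an asynchronous task together with the data regions it accesses and writes, and the run-time makes it wait for whichever earlier task last wrote a region it needs (only write-read and write-write ordering is modelled here).

    // Simplified method-level run-time: order tasks by their declared data accesses.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;

    public class MethodLevelRuntime {
        // For each data region, the task that last wrote it.
        private final Map<String, CompletableFuture<Void>> lastWriter = new HashMap<>();

        /** Submit one method invocation with the regions it accesses and the regions it writes. */
        synchronized CompletableFuture<Void> submit(Runnable body, List<String> accessed, List<String> written) {
            List<CompletableFuture<Void>> deps = new ArrayList<>();
            for (String region : accessed) {
                CompletableFuture<Void> w = lastWriter.get(region);
                if (w != null) deps.add(w);            // must wait for the last writer of this region
            }
            CompletableFuture<Void> task =
                    CompletableFuture.allOf(deps.toArray(new CompletableFuture[0]))
                            .thenRunAsync(body);       // runs as an independent thread once its inputs are ready
            for (String region : written) lastWriter.put(region, task);
            return task;
        }

        public static void main(String[] args) {
            MethodLevelRuntime rt = new MethodLevelRuntime();
            // Invented invocations: produce() writes "list", consume() reads it.
            CompletableFuture<Void> t1 = rt.submit(() -> System.out.println("produce()"),
                    List.of("list"), List.of("list"));
            CompletableFuture<Void> t2 = rt.submit(() -> System.out.println("consume()"),
                    List.of("list"), List.of());
            CompletableFuture.allOf(t1, t2).join();    // consume() always runs after produce()
        }
    }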


This page was last updated on 2004-12-26 at 18:22:51. This site was automagically generated by MajaMaja, a simple and easy-to-use web content manager written in Tcl by scarpaz <scarpaz@scarpaz.com>.